How to Convert Entity Attribute Buses into Signals for RL Agent Observation Input?

Dear MATLAB Simulink Experts,
I am having trouble with my model. I have designed a model that emulates a single part going through operations and maintenance procedures, using bus elements in an Entity Generator. The next step is to connect the intended output of the model, the 'damage' attribute from an Entity Server, to an observation input port of the RL Agent block. The RL Agent block then produces an output through its action port, connected in turn to a choice of Entity Servers. How do I convert an entity attribute into a signal that the RL Agent can receive, and vice versa, how do I convert the signal back into an entity attribute after the RL Agent has made its decision? Or are there options other than converting entities to signals?
The attempts I have made were with a Signal Conversion block and a MATLAB Function block, but my efforts haven't been successful so far.
Your help would be very much appreciated. Attached are my files:
  • ModelV1.slx (the simulink model)
  • BusProfile.mat (the bus elements containing the entity attribute)
  • initialize.m (to initialize the bus editor)
  • RLScriptV4.mlx (the Reinforcement Learning script)
And this is the error generated:
Thank you in advance.

Accepted Answer

Hi Aaron,
To convert an entity attribute into a signal, you can use a Simulink Function. For example, to get the D attribute as a signal:
Add a Simulink Function "getD" as below:
This Simulink Function simply contains:
And you call it from your server's service complete action; see the last command below:
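The screenshots referenced above did not survive here, so as a rough reconstruction (the wiring and the attribute name `damage` are assumptions, not the actual model): the Simulink Function `getD(u)` would contain only an Argument Inport `u` wired to a Data Store Write, or directly to the signal line feeding the RL Agent observation port, and the server would publish the attribute each time service completes:

```matlab
% Entity Server > Event actions > Service complete (sketch)
% 'entity' is the departing SimEvents entity; 'damage' is the bus
% attribute whose value should be exposed as a signal.
getD(entity.damage);   % call the Simulink Function to publish the value
```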

7 Comments

Dear Laurent,
Thank you once again for your answer; your solution worked, and my simulation is now running.
I made some adjustments, because I need all 16 entities to be transferred, so the getD function actually contains 16 elements that are combined into one observation for the RL Agent.
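For readers without the attached model, combining the attributes might look like the following sketch (the attribute names D1–D16 and the vector-valued `getD` signature are assumptions, not the actual model):

```matlab
% Service complete action (sketch): gather the 16 attribute values into
% one column vector and pass it to a Simulink Function that writes it to
% the signal feeding the RL Agent observation port.
obs = zeros(16, 1);
obs(1)  = entity.D1;   % one line per attribute, D1 ... D16
obs(2)  = entity.D2;
% ... remaining attributes assigned the same way ...
obs(16) = entity.D16;
getD(obs);             % getD takes a 16-by-1 argument in this sketch
```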
But it does beg the question: how do I connect the action port to the Entity Output Switch next to it? Right now I have it connected as in the picture below, which lets the model run; otherwise the entities would not reach the input port on the far left. But how can I connect the action port so that it implements the decisions the RL Agent makes?
Thank you very much in advance.
I'm not sure I fully understand what the action should do in your system. Do you mean the action should control the route applied by the Entity Output Switch?
If so, I see two ways to do that. Please note that I'm not sure at all whether either of these two solutions will fit the reinforcement learning task. I'm skilled with SimEvents but unfortunately not at all with this reinforcement learning topic. Combining a reinforcement learning agent with event-based modeling is not trivial work!
First solution: insert an Entity Server to change MRO.Type to match the action value:
Second solution: change the Output Switch parameter to "From control port", and generate a message via a Simulink Function to control this switch:
The Simulink Function contains:
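The screenshot of that Simulink Function is missing here; a plausible reconstruction (the function name `setRoute` and the exact wiring are assumptions): inside the Simulink Function, an Argument Inport carrying the port index feeds a Message Send block whose message output is wired to the switch's control port. It would be invoked whenever a new action value is available:

```matlab
% Sketch: publish the RL action as a routing message.
% 'action' is the value from the RL Agent action port, interpreted as
% the index (1..N) of the Output Switch exit port to use.
setRoute(action);   % Simulink Function: Argument Inport -> Message Send
```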
HTH, let me know how it goes.
Dear Laurent,
Thank you for your reply. Yes, I intend to have the action control the route applied by the Entity Output Switch.
I completely understand about your background, and I also understand that no past examples combining SimEvents and RL can be found online. I am nonetheless grateful for your help, and I agree completely that combining these two has not been a trivial task at all.
Here is how it went: unfortunately, neither of the two solutions is successful yet.
Both the first and the second solution you proposed resulted in this error message:
Error using rl.train.SeriesTrainer/run
An error occurred while running the simulation for model 'ModelV4' with the following
RL agent blocks:
ModelV4/RL Agent
Error in rl.train.TrainingManager/train (line 429)
run(trainer);
Error in rl.train.TrainingManager/run (line 218)
train(this);
Error in rl.agent.AbstractAgent/train (line 83)
trainingResult = run(trainMgr,checkpoint);
Caused by:
Error using rl.env.internal.reportSimulinkSimError
Error due to multiple causes.
Error using rl.env.internal.reportSimulinkSimError Input data dependency violation due to function-call or action subsystems.
See valid and invalid examples of function-call subsystems in 'sl_subsys_semantics' for additional information.
Error using rl.env.internal.reportSimulinkSimError Input ports (FcnCall) of 'ModelV4/ExtractEntities' are involved in the loop.
Error using rl.env.internal.reportSimulinkSimError Input ports (1) of 'ModelV4/RL Agent/Policy Process Experience/TmpSignal ConversionAtPolicy Process Experience InternalInport1' are involved in the loop.
This message is related to a hidden SignalConversion block.
This block is added for block 'ModelV4/RL Agent/Policy Process Experience/Policy Process Experience Internal' at input port 1 as result of block insertion or expansion.
The hidden block's parameter 'Output' is set to 'Signal copy'. Consider manually inserting such a block to debug the problem.
Error using rl.env.internal.reportSimulinkSimError Input ports (1) of 'ModelV4/RL Agent/Policy Process Experience/Policy Process Experience Internal' are involved in the loop.
Error using rl.env.internal.reportSimulinkSimError Input ports (1, FcnCall) of 'ModelV4/getAction' are involved in the loop.
Based on the error, I understand that the inside of the RL Agent cannot be modified (e.g. ModelV4/RL Agent/Policy Process Experience/Policy Process Experience Internal), so I tried inserting a Signal Conversion block right before the observation input port, but that does not seem to solve the problem.
Thank you very much!
Could you please attach the last version of your model so that I can reproduce the error on my side?
Sure thing, here you go
Best regards,
Aaron.
I think there's an algebraic loop that you should break. Try inserting a Memory block just before the Simulink function getAction.
Hi Laurent,
Thank you for your reply, your solution worked!
The RL Agent was able to apply its actions to the SimEvents blocks. The results are not yet what I expected compared with a dry model (SimEvents only, without RL), but I will find out why the results differ.
Once again, thank you for your time, consideration, and expertise, it is very much appreciated.
Best,
Aaron.
