
Giving Different Input to Actor and Critic in Simulink

4 views (last 30 days)
Hi, I am trying to design a Reinforcement Learning model with the environment built in Simulink.
I need to feed the actor and critic with different input vectors to see if the model performs better. I know this is possible and has been done in other frameworks such as Python. If I use different inputs during network generation, the system returns an error. Also, the RL Agent block in Simulink offers no way to add an input port other than the external action port (which is a different case).
Any suggestions on how to do this? Is there something I need to consider, or should I write all the code in MATLAB instead of Simulink?
Thanks a lot!

Answers (1)

Aditya
Aditya on 16 Jan 2024
Edited: Aditya on 16 Jan 2024
In MATLAB and Simulink, creating a reinforcement learning model where the actor and critic networks receive different input vectors can be a bit tricky because the default setup is designed for them to share the same observation input. However, there are ways to work around this limitation.
Here are some suggestions on how to feed different input vectors to the actor and critic in a Simulink environment.
The first option is custom MATLAB code:
  1. Define Custom Networks in MATLAB: build the actor and critic as separate networks, each with an input layer sized for its own observation vector.
  2. Create Actor and Critic Representations: wrap each network so that it receives only the inputs it needs.
  3. Create a Custom Training Loop: instead of the built-in train function, update the networks yourself so you control exactly what data each one sees.
  4. Integrate with Simulink: step the Simulink environment from MATLAB (for example, through a custom environment interface) inside that loop.
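As a rough illustration of the first option, the sketch below builds the actor and critic as separate dlnetwork objects with different input sizes. All dimensions and layer names here are placeholders, not values from your model; because the built-in agents validate both networks against the same observation specification, networks like these would be trained with a custom loop (dlfeval/dlgradient) rather than with a built-in agent object.

```matlab
% Placeholder dimensions -- replace with your model's actual sizes.
obsDimActor  = 4;   % elements of the observation the actor sees
obsDimCritic = 6;   % (different) vector fed to the critic
actDim       = 1;   % action dimension

% Actor network: maps its own observation subset to an action.
actorLayers = [
    featureInputLayer(obsDimActor, Name="actorObs")
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(actDim)
    tanhLayer(Name="action")];
actorNet = dlnetwork(actorLayers);

% Critic network: takes its own observation vector plus the action,
% and outputs a scalar Q-value (DDPG-style critic assumed here).
obsPath = [
    featureInputLayer(obsDimCritic, Name="criticObs")
    fullyConnectedLayer(64, Name="obsFC")];
actPath = [
    featureInputLayer(actDim, Name="criticAct")
    fullyConnectedLayer(64, Name="actFC")];
commonPath = [
    additionLayer(2, Name="add")
    reluLayer
    fullyConnectedLayer(1, Name="qValue")];

lgraph = layerGraph(obsPath);
lgraph = addLayers(lgraph, actPath);
lgraph = addLayers(lgraph, commonPath);
lgraph = connectLayers(lgraph, "obsFC", "add/in1");
lgraph = connectLayers(lgraph, "actFC", "add/in2");
criticNet = dlnetwork(lgraph);
```

In a custom training loop you would compute losses for each network with dlfeval, take gradients with dlgradient, and feed each network only its own slice of the data collected from the Simulink environment.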
The second option is modifying the Simulink model:
  1. Split Observation Vector: concatenate both input vectors into a single observation signal and select the relevant elements downstream.
  2. Use Subsystems: route the actor-specific and critic-specific parts of the signal through separate subsystems.
  3. Combine Outputs: merge the processed signals back into the single observation port that the RL Agent block expects.
  4. Customize Training Algorithm: adjust the agent setup so each network effectively uses only the elements intended for it.
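One way to realize this second option while keeping the standard RL Agent block (a workaround I am suggesting, not an official feature) is to feed the agent one combined observation and have each network pick out its own slice through a frozen fullyConnectedLayer whose fixed weights implement the selection. The dimensions below are placeholders:

```matlab
% Placeholder dimensions: combined observation = [actorPart; criticPart].
nActor = 4; nCritic = 6; nTotal = nActor + nCritic;

% Selection matrices: identity rows pick out one slice of the observation.
SelA = [eye(nActor), zeros(nActor, nCritic)];
SelC = [zeros(nCritic, nActor), eye(nCritic)];

% Frozen selection layer for the actor: weights fixed to SelA,
% learn-rate factors set to zero so training never changes them.
selActor = fullyConnectedLayer(nActor, Name="selActor", ...
    Weights=SelA, Bias=zeros(nActor,1), ...
    WeightLearnRateFactor=0, BiasLearnRateFactor=0);

actorLayers = [
    featureInputLayer(nTotal, Name="obs")
    selActor
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1, Name="action")];

% The critic gets an analogous frozen layer built from SelC.
```

Because both networks still declare the full combined observation as their input, they pass the toolbox's shared-observation validation, while in practice the actor only responds to its own elements and the critic to its own.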
If you are encountering specific errors when trying to implement different inputs for the actor and critic, it would be helpful to look at the error messages and the documentation to understand the constraints of the RL Agent block and the Reinforcement Learning Toolbox. Depending on the nature of the errors, you might be able to adjust your model or code to work within those constraints.
In summary, while it is possible to feed different input vectors to the actor and critic in a reinforcement learning model in Simulink, it requires a more advanced setup and potentially custom MATLAB code to manage the training process.

Release: R2021b
