Doubts in developing a Simulink model for reinforcement learning
Hello, I am new to RL in MATLAB. From the Onramp course and online sources I understood that I need a Simulink environment containing an RL Agent block. My system is described by a set of differential equations. How can I vary the integrator's initial condition during training? The Integrator block's initial condition applies to its output, but how do I set an initial condition for the input?

Currently my Simulink model has an Integrator block whose output initial condition is zero; I did not specify anything for the input. I trained a DQN agent and it tracks the setpoint reasonably well. However, when I plotted the control input produced by the RL agent, it started at 50 and then evolved over time, even though I never specified that value. Similarly, with an output initial condition of 12 in the Integrator block, the agent's input started at 10, which I also did not specify. Can you please help me with two things:
- How can I set the integrator's output initial condition externally during training?
- How can I specify the initial input value corresponding to the output initial condition? (I don't want it to be chosen arbitrarily.)
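For the first question, a common pattern is to vary the Integrator block's initial condition at the start of each episode through the environment's `ResetFcn`. The sketch below is untested and assumes hypothetical names (`myModel`, the block path `myModel/Integrator`, and existing `obsInfo`/`actInfo` specifications); adapt them to your model.

```matlab
% Sketch (assumed names): randomize the Integrator's initial condition
% once per training episode via the environment reset function.
mdl = 'myModel';
env = rlSimulinkEnv(mdl, [mdl '/RL Agent'], obsInfo, actInfo);

% The ResetFcn receives a Simulink.SimulationInput object and must
% return it; setBlockParameter overrides the block's dialog parameter
% for that simulation only, without touching the saved model.
env.ResetFcn = @(in) setBlockParameter(in, [mdl '/Integrator'], ...
    'InitialCondition', num2str(10*rand));  % e.g. uniform on [0, 10]
```

Alternatively, you can set the Integrator block's "Initial condition source" to "external" and drive its x0 port from a signal computed in the reset logic. Note that the agent's first action is not an initial condition you set: it is whatever the (randomly initialized) policy network outputs for the first observation, possibly perturbed by epsilon-greedy exploration, which is why you saw starting values like 50 and 10 that you never specified.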
1 Comment
K.R on 24 Jun 2024
Accepted Answer

