Training a DDPG agent, and observation values are zero. How do I give the actions initial values in the first episode?

Hello,
I am training a DDPG agent with four actions. My observations have stayed at zero for more than 1000 episodes. I suspect that because the action values have been zero, the observations are being affected. How do I set the action values to some nonzero values at the start of the first episode?
The actions are torque inputs with min/max limits of ±200, later multiplied by a gain of 100. Is there something I need to do to keep the observations from staying at zero?

4 Comments

There is probably something else going on, and simply giving the actions a nonzero initial value won't solve it. If you can share a reproduction model, I may be able to take a look.
Thank you. I rechecked and found that my isDone conditions had an error, which was causing this.
I have a followup question,
This is what I know: during training, the episode ends at the end of the simulation time, tf.
Suppose the RL problem has no isDone condition because you just want the agent to learn the optimal solution that maximizes the reward, but you want the agent to know that the only termination condition is a specific, fixed time tf (tf = 5, which never changes). How do you set the isDone condition? Do you connect a clock to the isDone port, or do you just leave it unconnected? If it is left unconnected, how does the agent know that that time is the terminating condition? Any recommendation to ensure I am properly training the agent would be appreciated.
It is not clear why you would want the agent to learn the termination time of the episode. After training, you can always choose to 'unplug' the agent as you see fit.
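For a fixed episode length, one common pattern is to leave the isdone signal permanently false (a constant 0 into the RL Agent block's isdone port) and instead cap the episode by step count in the training options. A minimal sketch, assuming a sample time Ts of 0.1 s and a Simulink environment `env` and agent `agent` that already exist (both hypothetical here):

```matlab
% Sketch: end every episode after tf seconds of simulated time by capping
% the step count, rather than wiring a time comparison into isdone.
Ts = 0.1;   % agent sample time (assumed)
tf = 5;     % fixed episode length from the question

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',        2000, ...
    'MaxStepsPerEpisode', ceil(tf/Ts));   % episode terminates at t = tf

% trainingStats = train(agent, env, trainOpts);
```

With this setup the agent does not "know" the termination time explicitly; the episode simply truncates at tf, and the critic learns values consistent with that finite horizon.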


Answers (0)
