Issue with Q0 Convergence during Training using PPO Agent

Hi guys,
I have developed my model and trained it using a PPO agent. Overall, the training process has been successful. However, I have encountered an issue with the Q0 values. The maximum achievable reward is 6000, and I set training to stop at 98.5% of the maximum reward (5910).
During training, I noticed that the Q0 values did not converge as expected. In fact, they appear to be capped at 100, as indicated by the figures. I am seeking an explanation for this behavior and trying to understand why the Q0 values are not reaching the expected convergence.
My agent options are as follows. Roughly, the setup follows this pattern (the specific values here are placeholders for illustration, not my exact settings):
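% Sketch of typical rlPPOAgentOptions settings; all numeric values
% below are illustrative assumptions. Ts is the model sample time.
agentOpts = rlPPOAgentOptions( ...
    SampleTime=Ts, ...
    DiscountFactor=0.997, ...
    ExperienceHorizon=512, ...
    MiniBatchSize=128, ...
    NumEpoch=3, ...
    ClipFactor=0.2, ...
    EntropyLossWeight=0.01, ...
    AdvantageEstimateMethod="gae", ...
    GAEFactor=0.95, ...
    ActorOptimizerOptions=rlOptimizerOptions(LearnRate=1e-4, GradientThreshold=1), ...
    CriticOptimizerOptions=rlOptimizerOptions(LearnRate=1e-3, GradientThreshold=1));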
If anyone has any insights or explanations regarding the behavior of Q0 during training with the PPO agent, I would greatly appreciate your input. Your expertise and guidance would be invaluable in helping me understand and address this issue.
Thank you.
  2 Comments
Muhammad Fairuz Abdul Jalal
Thanks @Emmanouil Tzorakoleftherakis for the reply.
As requested, here are snapshots of the code.
The actions are set between -1 and 1. However, in the model, each action has its own gain.
The critic:
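Roughly, the critic is a standard value-function network along these lines (layer sizes here are placeholders, and obsInfo comes from the environment):

% Scalar value-function critic for PPO; layer sizes are illustrative.
criticNet = [
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(128)
    reluLayer
    fullyConnectedLayer(1)];
critic = rlValueFunction(criticNet, obsInfo);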
The actor:
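The actor is a continuous Gaussian policy whose mean output is bounded to [-1, 1] with a tanh layer (the per-action gains live inside the model, as noted above). Again, the layer sizes and names below are placeholders:

% Continuous Gaussian actor: a shared body feeding separate mean and
% standard-deviation heads. tanh bounds the mean in [-1, 1]; softplus
% keeps the standard deviation positive.
commonPath = [
    featureInputLayer(obsInfo.Dimension(1), Name="obs")
    fullyConnectedLayer(128)
    reluLayer(Name="body")];
meanPath = [
    fullyConnectedLayer(actInfo.Dimension(1), Name="fcMean")
    tanhLayer(Name="mean")];
stdPath = [
    fullyConnectedLayer(actInfo.Dimension(1), Name="fcStd")
    softplusLayer(Name="std")];

actorNet = layerGraph(commonPath);
actorNet = addLayers(actorNet, meanPath);
actorNet = addLayers(actorNet, stdPath);
actorNet = connectLayers(actorNet, "body", "fcMean");
actorNet = connectLayers(actorNet, "body", "fcStd");

actor = rlContinuousGaussianActor(actorNet, obsInfo, actInfo, ...
    ActionMeanOutputNames="mean", ...
    ActionStandardDeviationOutputNames="std", ...
    ObservationInputNames="obs");

agent = rlPPOAgent(actor, critic, agentOpts);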
Training Options:
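Training stops on the individual episode reward at 98.5% of the maximum; apart from the stop criterion and value, the numbers below are placeholders (Tf is the simulation time, Ts the sample time, and env the environment object):

trainOpts = rlTrainingOptions( ...
    MaxEpisodes=5000, ...                     % placeholder
    MaxStepsPerEpisode=ceil(Tf/Ts), ...       % episode length from the model
    StopTrainingCriteria="EpisodeReward", ... % stop on a single episode's reward
    StopTrainingValue=5910, ...               % 98.5% of the 6000 maximum
    Plots="training-progress");
trainingStats = train(agent, env, trainOpts);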
Thank you in advance. I really appreciate your help and support.


Accepted Answer

Emmanouil Tzorakoleftherakis
Edited: Emmanouil Tzorakoleftherakis on 12 Jul 2023
It seems you set training to stop when the episode reward reaches 0.985*(Tf/Ts)*3. I cannot comment on the value itself, but it is usually better to use the average reward as the stopping indicator, because averaging helps filter out outlier episodes.
Aside from that, in case it wasn't clear, the stopping criterion is not based on Q0 but on the light blue value (the individual episode reward) that you see in the plots you shared above. Q0 will improve as the critic is trained, but it does not necessarily need to "converge" for training to stop. A better critic means more stable training, but at the end of the day you only care about your actor. This is why it usually takes a few trials to see which stopping criteria make sense.
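For example, switching the criterion to the averaged value would look something like this (the window length of 20 is just an illustrative choice):

trainOpts = rlTrainingOptions( ...
    ScoreAveragingWindowLength=20, ...         % episodes to average over
    StopTrainingCriteria="AverageReward", ...  % stop on the averaged reward
    StopTrainingValue=5910);                   % same 98.5% target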
  1 Comment
Muhammad Fairuz Abdul Jalal
Thank you for highlighting the better approach for the stopping criterion. I will make the changes accordingly and update here soon.


More Answers (0)

Release: R2022b
