Modify the weight update rule in DDPG training

2 views (last 30 days)
Vu Thang on 20 Sep 2024
Commented: Vu Thang on 24 Sep 2024
I am currently studying the DDPG algorithm for my control project. I want to modify the weight update rule (gradient descent) slightly to reduce the steady-state error of the system. How can I do this?

Answers (1)

Shubham on 22 Sep 2024
Hey Vu,
To modify the gradient descent settings, you can change the "rlOptimizerOptions" object, as described in the documentation for DDPG agents: https://www.mathworks.com/help/releases/R2023b/reinforcement-learning/ug/ddpg-agents.html
To change the weight update rule:
  • modify "LearnRate" to change the rate at which the model converges.
  • modify "L2RegularizationFactor" to add weight decay and avoid overfitting.
Have a look at "rlOptimizerOptions" for more details; a minimal example is shown below.
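For reference, here is a minimal sketch of how these optimizer options can be attached to a DDPG agent (assuming R2022a or later, where "rlDDPGAgentOptions" exposes "ActorOptimizerOptions" and "CriticOptimizerOptions"; the numeric values are illustrative, not tuned for any particular plant):

% Optimizer options for the actor and critic networks
actorOpts = rlOptimizerOptions( ...
    LearnRate=1e-4, ...                % step size of each gradient update
    GradientThreshold=1, ...           % clip gradients to stabilize training
    L2RegularizationFactor=1e-4);      % weight decay to limit overfitting

criticOpts = rlOptimizerOptions( ...
    LearnRate=1e-3, ...
    GradientThreshold=1, ...
    L2RegularizationFactor=1e-4);

% Attach them to the DDPG agent options
agentOpts = rlDDPGAgentOptions( ...
    ActorOptimizerOptions=actorOpts, ...
    CriticOptimizerOptions=criticOpts);

% agent = rlDDPGAgent(actor, critic, agentOpts);  % actor and critic defined elsewhere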
Happy coding!
  1 Comment
Vu Thang on 24 Sep 2024
Thank you for your answer. I have tried modifying 'LearnRate', but it does not seem to work; the steady-state error is still present. I want to add an integral component to the weight update rule, as some papers suggest. Is there any way to do this?
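For context, the PI-style update described above adds a term proportional to the accumulated (integrated) gradient on top of the usual step. Below is a toy, non-DDPG sketch on a simple quadratic loss; the gains alpha and beta and the problem data are purely illustrative, and this is plain MATLAB, not Reinforcement Learning Toolbox code:

% Toy example: minimize f(w) = 0.5*||A*w - b||^2 with a PI-style update
A = [2 0; 0 1];  b = [1; 3];
w = zeros(2,1);
alpha = 0.05;                 % proportional gain (standard learning rate)
beta  = 0.001;                % integral gain on the accumulated gradient
gradSum = zeros(size(w));     % running sum of past gradients

for k = 1:500
    g = A'*(A*w - b);               % gradient of the quadratic loss
    gradSum = gradSum + g;          % accumulate gradient history (integral term)
    w = w - alpha*g - beta*gradSum; % PI-style weight update
end
disp(w)   % approaches the least-squares solution A\b = [0.5; 3]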


Release: R2023b