Control the exploration in soft actor-critic
What is the best way to control exploration in a SAC agent? For a TD3 agent, I used to control exploration by adjusting the agent's variance parameter. Is there a similar option for the SAC agent? Currently the agent seems to be exploring more than necessary.
Answers (1)
Ahmed R. Sayed
on 4 Oct 2022
Hi Mukherjee,
You can control the agent's exploration by adjusting the entropy temperature options ("EntropyWeightOptions") in rlSACAgentOptions.
For example, larger values of EntropyWeight encourage the agent to explore the environment more. Alternatively, you can adjust the temperature learning rate ("LearnRate") so that the entropy is driven toward the "TargetEntropy" value [1]. You can also fix the weight entirely by using a zero learning rate.
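As a minimal sketch, the options above might be set like this (this assumes the Reinforcement Learning Toolbox rlSACAgentOptions interface; the specific values shown are illustrative, not recommendations):

```matlab
% Create default SAC agent options
opt = rlSACAgentOptions;

% Option 1: fix the entropy weight by disabling its adaptation.
% A smaller fixed EntropyWeight means less emphasis on entropy,
% i.e. less exploration.
opt.EntropyWeightOptions.EntropyWeight = 0.1;
opt.EntropyWeightOptions.LearnRate = 0;   % zero rate keeps the weight fixed

% Option 2 (alternative): keep the weight adaptive and instead steer
% it via the target entropy and its learning rate.
% opt.EntropyWeightOptions.TargetEntropy = -1;
% opt.EntropyWeightOptions.LearnRate = 3e-4;
```

The resulting options object is then passed to rlSACAgent when constructing the agent.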
[1] Haarnoja, Tuomas, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, et al. "Soft Actor-Critic Algorithms and Applications." Preprint, submitted January 29, 2019. https://arxiv.org/abs/1812.05905.