Stopping conditions for DQN training

Zonghao zou on 18 Oct 2020
Answered: Madhav Thakker on 25 Nov 2020
Hello all,
I am currently experimenting with DQN training. I am trying to find a systematic way to stop the training process rather than stopping it manually. However, for my training process I have no idea what the final rewards will be, and I don't have a target value to reach, so I do not know when to stop.
Is there a way to stop the DQN agent without that information and still guarantee some kind of convergence?
Thanks for helping!

Answers (1)

Madhav Thakker on 25 Nov 2020
Hi Zonghao zou,
One possible signal to use for stopping training is the Q-values. If the Q-values have saturated, the network is no longer learning anything new. You could monitor the Q-values, choose a threshold on how much they are still changing, and stop training early once the change falls below that threshold. You don't need the final reward or a target value to perform early stopping based on Q-values. A rough sketch of this idea is below.
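For instance, here is a minimal sketch assuming the Reinforcement Learning Toolbox, where train returns training statistics that include an EpisodeQ0 field (the critic's value estimate at the start of each episode), and where agent and env have already been created as an rlDQNAgent and its environment. The segment length and tolerance are illustrative values you would need to tune for your problem.

% Sketch: stop DQN training once the Q-value estimate saturates.
% Assumes agent and env already exist; EpisodeQ0 is the critic's
% estimate of the discounted return at the start of each episode,
% reported in the statistics returned by train.
segmentEpisodes = 50;      % episodes per training segment (illustrative)
qTolerance      = 1e-2;    % change in mean Q0 treated as "no learning"
maxSegments     = 40;      % hard cap so the loop always terminates

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', segmentEpisodes, ...
    'StopTrainingCriteria', 'EpisodeCount', ...
    'StopTrainingValue', segmentEpisodes, ...
    'Verbose', false, ...
    'Plots', 'none');

prevMeanQ0 = -inf;
for k = 1:maxSegments
    % train updates the agent in place, so each call continues learning
    stats = train(agent, env, trainOpts);
    meanQ0 = mean(stats.EpisodeQ0);   % average start-of-episode Q-value
    if abs(meanQ0 - prevMeanQ0) < qTolerance
        fprintf('Q-values saturated after %d segments, stopping.\n', k);
        break
    end
    prevMeanQ0 = meanQ0;
end

Averaging EpisodeQ0 over each segment smooths the episode-to-episode noise; the tolerance controls how flat the Q-values must be before you treat training as converged.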
Hope this helps.

