RL Toolbox: DQN epsilon-greedy exploration with epsilon=1 does not act randomly

Setup:
  • Custom Simulink Environment
  • DQN Agent
To get a baseline for the environment, I started training a DQN agent with the following options:
opt = rlDQNAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 1;        % always take the exploratory branch
opt.EpsilonGreedyExploration.EpsilonDecay = 0.0; % never decay
opt.EpsilonGreedyExploration.EpsilonMin = 1;     % lower bound also kept at 1
This means that the Agent should not exploit the greedy action at all.
As stated in the documentation (https://de.mathworks.com/help/reinforcement-learning/ug/dqn-agents.html):
During each control interval, the agent either selects a random action with probability ϵ or selects an action greedily with respect to the value function with probability 1-ϵ.
--> Epsilon = 1 means the probability of selecting the greedy action is zero. It is not clearly stated how the random action is sampled, but I assume it is uniform.
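To make this concrete, here is a minimal sketch of epsilon-greedy selection as I understand it (my own illustration, not the toolbox source; Q, obs, and actions are placeholder names):
if rand() < Epsilon
    a = actions(randi(numel(actions)));   % uniform random action
else
    [~, idx] = max(Q(obs, :));            % greedy w.r.t. the learned Q-values
    a = actions(idx);
end
With Epsilon = 1, the rand() draw is always below Epsilon, so the greedy branch should never be reached.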
Now with the above setting, the DQN agent should never exploit the greedy policy during training. However, when starting the simulation and watching the output of the episodes, it is clear that the agent does in fact exploit the policy and does not act randomly.
  • What is going on here? Why does the agent not act randomly during training?
  • Is the sampling of the actions uniform? (Not related to the epsilon=1 behavior)
  • When exactly is the decay executed? I think I read somewhere in the docs that it happens every training step, i.e., for DQN every time step of the simulation with the SampleTime of rlDQNAgentOptions (see the sketch after this list for my current understanding). It would be handy to have this information clearly stated in the part of the documentation that explains epsilon greedy.
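For reference, this is the decay rule I am assuming, pieced together from the rlDQNAgentOptions documentation and not verified against the toolbox internals:
Epsilon = max(EpsilonMin, Epsilon*(1 - EpsilonDecay));   % assumed to run once per agent time step during training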
I quite like the toolbox so far; there are just some implementation details that are a bit hard to grasp, i.e., it's not 100% clear to me how MATLAB implements them.

Accepted Answer

Emmanouil Tzorakoleftherakis
Edited: Emmanouil Tzorakoleftherakis on 9 Feb 2021
Hello,
Maybe I misread the question, but you are saying "when starting the Simulation and watching the output of the episodes...". Just to clarify, if you hit the "play" button in Simulink or if you use the "sim" command, exploration is out of the picture - Simulink will only do inference on the agent. Exploration is used only when you call "train".
To your other question: yes, sampling in DQN is indeed uniform during exploration.
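As a rough sketch of the distinction (agent, env, and the option values below are placeholders, not taken from your model):
% Exploration (epsilon-greedy) is only active inside train:
trainOpts = rlTrainingOptions('MaxEpisodes', 100);
trainStats = train(agent, env, trainOpts);
% sim (or pressing "play" in Simulink) only runs inference with the current greedy policy:
simOpts = rlSimulationOptions('MaxSteps', 500);
experience = sim(env, agent, simOpts);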
  8 Comments
Tobias Schindler on 9 Feb 2021
Thanks for checking! I'll try to reproduce it with the example as well and check for differences between my original model and the example.
Will be reporting back!
Tobias Schindler on 5 Oct 2021
Forgot about this question and I did not encounter this problem anymore in other models / setups, not sure what the problem was.


More Answers (0)

Release

R2020b
