Simulating environment while training RL agent

plot(env);
trainingStats = train(agent,env,trainOpts);
We use this to simulate the agent while training, but it takes a lot of time because we are simulating every episode. What if I wanted to simulate only every 100 episodes or so? How can I do that?

Answers (1)

Emmanouil Tzorakoleftherakis
Edited: Emmanouil Tzorakoleftherakis on 20 Feb 2021
I can interpret your question in three ways, so I will put my thoughts here and hopefully they will be sufficient.
1) Depending on the RL algorithm, training works differently. For example, DQN and DDPG perform an optimization step at every time step (which generally takes more time), whereas, e.g., PPO can work with batches of data. The latter seems closer to what you are referring to.
2) There is something called offline/batch reinforcement learning, where you have already collected data and use it to train offline. This also seems close to what you are describing, but there is currently no out-of-the-box way to do it in Reinforcement Learning Toolbox, i.e., you would have to write the implementation yourself.
3) Is your question perhaps about visualization (and not simulation)? If that is the case, visualizing the environment does indeed slow things down, so I would recommend using it only at the beginning to check whether the training setup is OK.
There is no standard way of visualizing an agent after N episodes, but you can probably create a timer and plot/visualize what you need when you need it. Take a look at how the visualization is set up for this custom MATLAB environment. You can use the IsDone flag to increment a counter, and every 100 episodes call the 'updateplot' method.

5 Comments

Hello Emmanouil
Thank you for your response. I apologise for not asking a clearer question.
I am not asking about the algorithm; I am using the DDPG algorithm. I am asking only about visualization, i.e., whether there is a way to visualize the simulation after every N episodes of training.
There is no standard way of doing this, but you can probably create a timer and plot/visualize what you need when you need it. The other option is to save the agent every N episodes and visualize its behavior afterwards offline. I added that to my response above.
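One way to approximate "save and visualize every N episodes" is to train in chunks: stop training after N episodes, snapshot and simulate the agent, then resume. A minimal sketch, assuming env, agent, and trainOpts already exist (the chunk count, step limit, and file names below are assumptions, not part of the original answer):

```matlab
% Sketch: train in chunks of N episodes, saving and visualizing between chunks.
% The agent is a handle object, so repeated train() calls continue learning.
N = 100;                                        % episodes per chunk (assumption)
trainOpts.StopTrainingCriteria = "EpisodeCount";
trainOpts.StopTrainingValue = N;                % stop each train() call after N episodes
for chunk = 1:10                                % 10 chunks = 1000 episodes total
    train(agent, env, trainOpts);               % runs N more episodes
    save("agent_" + chunk*N + ".mat", "agent"); % snapshot for offline review
    plot(env);                                  % open the environment visualization
    sim(env, agent, rlSimulationOptions("MaxSteps", 500));  % watch the current policy
end
```

Note that each train() call starts a fresh training session (episode counters and training plots reset), but the agent keeps its learned parameters between calls.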
That might work actually. I will try that. Thank you.
trainOpts.SaveAgentCriteria = "EpisodeCount";
trainOpts.SaveAgentValue = 100;
trainOpts.SaveAgentDirectory = fullfile(pwd, "Agents");
This saves the agent after every episode from the 100th episode onwards. Is there any way to save the agent only at every 100th episode?
Hmm, you are right, that wouldn't work. I have created an enhancement request for this feature.
In the meantime, since your question is about visualization, you should be able to do what you want by implementing a counter. Take a look at how the visualization is set up for this custom MATLAB environment. You can use the IsDone flag to increment the counter, and every 100 episodes call the 'updateplot' method.
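A minimal sketch of that counter, assuming a custom environment written as an rl.env.MATLABEnvironment subclass; the EpisodeCount property is a hypothetical name, and updateplot stands in for the plotting method from the linked example:

```matlab
% Episode counter inside a custom environment's step method.
% Assumes a handle-class environment (rl.env.MATLABEnvironment subclass)
% with a hypothetical EpisodeCount property and an updateplot method.
function [Observation, Reward, IsDone, LoggedSignals] = step(this, Action)
    % ... environment dynamics compute Observation, Reward, IsDone ...
    if IsDone
        this.EpisodeCount = this.EpisodeCount + 1;
        % Redraw the visualization only at every 100th episode
        if mod(this.EpisodeCount, 100) == 0
            updateplot(this);
        end
    end
end
```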

