Reinforcement learning agent not being saved during training

18 views (last 30 days)
I am trying to train my model using a TD3 agent. During training I want to save the agent whenever the episode reward exceeds a certain threshold, using the "SaveAgentCriteria" option:
'SaveAgentCriteria','EpisodeReward',...
'SaveAgentValue',100
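These options are passed to train through rlTrainingOptions; roughly, the setup looks like this (other settings omitted):
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',10000, ...
    'SaveAgentCriteria','EpisodeReward', ...
    'SaveAgentValue',100, ...
    'SaveAgentDirectory','savedAgents');
trainingStats = train(agent, env, trainOpts);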
However, when I try to load a saved agent I get the following error:
Error using load
Cannot read file savedAgents\Agent3585.mat.
There is more than enough disk space, as I am running the code on my server. Is there any way to save only the saved_agent part of the Agent.mat file, and not the savedAgentResultStruct, during the training process? I suspect the latter is what is inflating the file size.

Answers (1)

Aditya on 27 Feb 2024
In MATLAB, when you use the Reinforcement Learning Toolbox train function, the SaveAgentCriteria and SaveAgentValue options let you specify when to save the agent based on training progress. By default, however, train saves not only the agent (saved_agent) but also the training results structure (savedAgentResultStruct), which can be quite large and may lead to the error you are seeing if the files become too big.
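As a first check, you can inspect what one of the saved files contains, and whether it is readable at all, without loading it into memory (adjust the path to match your SaveAgentDirectory):
% List the variables stored in one of the saved MAT-files
whos('-file', fullfile('savedAgents','Agent3585.mat'))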
There is no built-in option in the train function to save only the agent without the training statistics. However, you can work around this limitation by implementing a custom training loop in which you save the agent manually whenever your criteria are met.
Here is a sketch of how you might implement a custom training loop and save only the agent. Note that the method names and the experience format vary between releases, so check the custom-training-loop documentation for your version:
% Assuming you have your environment 'env' and agent 'agent' already set up
% Training options
maxEpisodes = 10000;
maxSteps = 500;
thresholdReward = 100;

% Loop over episodes
for episode = 1:maxEpisodes
    % Reset the environment at the start of the episode
    observation = reset(env);
    episodeReward = 0;

    % Loop over steps
    for stepCount = 1:maxSteps
        % Query the agent's policy for an action; getAction works with
        % cell arrays of observation channels
        action = getAction(agent, {observation});

        % Simulate the environment with the action
        [nextObservation, reward, isDone, info] = step(env, action{1});

        % Accumulate episode reward
        episodeReward = episodeReward + reward;

        % Package the experience tuple (S, A, R, S', done) as a struct and
        % let the agent append it to its replay buffer and update its
        % networks. (Field names follow the experience struct used by the
        % toolbox; verify them against your release's documentation.)
        experience.Observation     = {observation};
        experience.Action          = action;
        experience.Reward          = reward;
        experience.NextObservation = {nextObservation};
        experience.IsDone          = isDone;
        learn(agent, experience);

        % Update the observation
        observation = nextObservation;

        % Check for episode termination
        if isDone
            break;
        end
    end

    % Save the agent if the reward threshold is met
    if episodeReward >= thresholdReward
        saveAgentOnly(agent, episode); % Custom helper defined below
    end
end

% Custom function to save only the agent (no training statistics)
function saveAgentOnly(agent, episodeNumber)
    % Use the same variable name train() uses, so loading code stays uniform
    saved_agent = agent;
    filename = sprintf('savedAgents/AgentEpisode%d.mat', episodeNumber);
    save(filename, 'saved_agent');
end
In this custom training loop you drive the training process yourself: learn(agent,experience) appends each experience to the agent's replay buffer and updates its networks, and the saveAgentOnly helper writes only the agent object to disk, without any training statistics.
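If you already have agents that were saved by train and those files are readable, you could also strip the statistics after the fact, along these lines (variable names as reported in your question):
% Re-save only the agent from a file written by train()
data = load(fullfile('savedAgents','Agent3585.mat'));   % will fail if the file is unreadable
saved_agent = data.saved_agent;
save(fullfile('savedAgents','Agent3585_agentOnly.mat'), 'saved_agent');

% Loading it back later
data = load(fullfile('savedAgents','Agent3585_agentOnly.mat'));
agent = data.saved_agent;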
1 Comment
Lars Meijer on 13 Mar 2024
Hi! I am also working on an RL agent, and I wanted to use agent.remember and agent.learn the way you coded them, but MATLAB R2023b does not recognize the method. The error:
Unrecognized method, property, or field 'remember' for class 'rl.agent.rlDQNAgent'.
Error in MainDJSPFinal (line 80)
agent.remember(experience);
Is this because it is a DQN agent? Also, when I pass the data using only agent.learn(experience), structured the way you had it, I get the following error:
Error using rl.replay.rlReplayMemory/appendWithoutSampleValidation
Invalid argument at position 2. Value must be of type struct or be convertible to struct.
Error in rl.agent.AbstractOffPolicyAgent/learn_ (line 101)
appendWithoutSampleValidation(this.ExperienceBuffer,exp);
Error in rl.agent.AbstractAgent/learn (line 29)
this = learn_(this,experience);
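From the stack trace it seems learn expects the experience as a struct rather than a cell array, so I am guessing something like this (field names assumed, not verified):
exp.Observation     = {observation};
exp.Action          = {action};
exp.Reward          = reward;
exp.NextObservation = {nextObservation};
exp.IsDone          = isDone;
agent.learn(exp);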
Can you help me out here?


Categories

Find more on Training and Simulation in Help Center and File Exchange

Release

R2021a
