
RL Training Slows Down With Full Experience Buffer

8 views (last 30 days)
Alvin Allen on 17 Aug 2021
Commented: 14 Jun 2024
I'm working on an RL project using a DQN agent with a custom environment. I tried training the agent without an LSTM layer with limited success, and now I'm trying with an LSTM layer. I very quickly noticed that once the experience buffer fills, i.e. the Total Number of Steps counter reaches the ExperienceBufferLength set in the agent options, training slows to a crawl. The first several training episodes complete in seconds, but once the total step count hits the buffer length the next episode takes several minutes to train, and later episodes never recover the original speed. This never happened in any of my earlier iterations of the agent and learning settings, before I enabled the LSTM layer.
I'm wondering if this is expected behavior, why this might be happening given my settings, etc. My agent options are below; I can give more details if needed. Thanks!
agentOpts = rlDQNAgentOptions;
agentOpts.SequenceLength = 16;
agentOpts.MiniBatchSize = 100; % grab 100 "trajectories" of length 16 from the buffer at a time for training
agentOpts.NumStepsToLookAhead = 1; % forced for LSTM enabled critics
agentOpts.ExperienceBufferLength = 10000; % once the total steps reaches this number, training slows down dramatically
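For scale, with these options each critic update samples 100 sequences of 16 consecutive steps, so roughly 100 * 16 = 1600 LSTM time steps are unrolled per gradient step. A rough sketch of one way to test whether that per-update load is the bottleneck (the smaller values below are purely illustrative, not recommendations):
stepsPerUpdate = agentOpts.MiniBatchSize * agentOpts.SequenceLength; % 100 * 16 = 1600 time steps per gradient step
agentOptsSmall = rlDQNAgentOptions( ...
    "SequenceLength", 8, ...          % shorter unrolled sequences (illustrative)
    "MiniBatchSize", 32, ...          % fewer sequences per update (illustrative)
    "ExperienceBufferLength", 10000);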
The critic network setup is:
ObservationInfo = rlNumericSpec([16 1]); % 16 tile game board
ActionInfo = rlFiniteSetSpec([1 2 3 4]); % 4 possible actions per turn
% Input Layer
layers = [
    sequenceInputLayer(16, "Name", "input_1")
    fullyConnectedLayer(1024, "Name", "fc_1", "WeightsInitializer", "he")
    reluLayer("Name", "relu_body")
    ];
% Body Hidden Layers (named fc_body_1..3 so they don't collide with fc_1 above)
for i = 1:3
    layers = [layers
        fullyConnectedLayer(1024, "Name", "fc_body_" + i, "WeightsInitializer", "he")
        reluLayer("Name", "body_output_" + i)
        ];
end
% LSTM Layer
layers = [layers
    lstmLayer(1024, "Name", "lstm", "InputWeightsInitializer", "he", "RecurrentWeightsInitializer", "he")
    ];
% Output Layer
layers = [layers
    fullyConnectedLayer(4, "Name", "fc_action")
    regressionLayer("Name", "output")
    ];
dqnCritic = rlQValueRepresentation(layers, ObservationInfo, ActionInfo, "Observation", "input_1");
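For completeness, the critic and options above get assembled into an agent and trained roughly as follows; env stands in for the custom environment and the training limits are placeholders, not the values actually used:
agent = rlDQNAgent(dqnCritic, agentOpts);
trainOpts = rlTrainingOptions( ...
    "MaxEpisodes", 5000, ...                     % placeholder limit
    "MaxStepsPerEpisode", 500, ...               % placeholder limit
    "StopTrainingCriteria", "AverageReward", ...
    "StopTrainingValue", 1e4);                   % placeholder target
trainingStats = train(agent, env, trainOpts);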
3 Comments
Giancarlo Storti Gajani on 18 May 2023
I have a similar problem, but training slows down after a number of steps much smaller than the experience buffer length.
Initially each episode takes about 0.5 s; after roughly 1000 iterations, episodes need more than 30 s to complete. I am using a mini-batch size of 128 and an experience buffer length of 1e6.
轩 on 14 Jun 2024
Same question here, still waiting for an answer.
In my training runs the slowdown appears to be roughly linear; it doesn't seem to depend on whether the experience buffer has filled.
Also, training runs noticeably faster if the Simulink model window is kept closed during training.
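If the extra overhead really is visualization, a minimal sketch of turning it off with the standard training options (whether that is actually the cause here is only a guess):
quietOpts = rlTrainingOptions( ...
    "MaxEpisodes", 5000, ...   % placeholder limit
    "Plots", "none", ...       % no Episode Manager window
    "Verbose", false);         % no command-line training progress output
trainingStats = train(agent, env, quietOpts);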


Answers (0)

Release

R2020a
