Reinforcement learning and parallel computation

I am conducting reinforcement learning with a SAC agent.
I tried using a GPU and parallel computation, but when parallel computation is enabled, the training results change.
Overall, learning is much worse than without parallel processing.
Do you know what caused this?
%% Agent settings
agentOptions = rlSACAgentOptions;
agentOptions.SampleTime = Ts;
agentOptions.DiscountFactor = 0.90;
agentOptions.TargetSmoothFactor = 1e-3;
agentOptions.ExperienceBufferLength = 500;
agentOptions.MiniBatchSize = 256;
agentOptions.EntropyWeightOptions.TargetEntropy = -2;
agentOptions.NumStepsToLookAhead = 1;
agentOptions.ResetExperienceBufferBeforeTraining = false;
agent = rlSACAgent(actor,[critic1 critic2],agentOptions);
%% Learning settings
maxepisodes = 10000;
maxsteps = 1e6;
trainingOptions = rlTrainingOptions(...
'MaxEpisodes',maxepisodes,...
'MaxStepsPerEpisode',maxsteps,...
'StopOnError','on',...
'Verbose',true,...
'Plots','training-progress',...
'StopTrainingCriteria','AverageReward',...
'StopTrainingValue',Inf,...
'ScoreAveragingWindowLength',10);
trainingOptions.UseParallel = true;
trainingOptions.ParallelizationOptions.Mode = 'async';                         % workers run asynchronously
trainingOptions.ParallelizationOptions.StepsUntilDataIsSent = 32;              % workers send data every 32 environment steps
trainingOptions.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';  % workers send experiences; the client computes gradients
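For context, a minimal sketch of how this configuration would be launched; env, actor, critic1, and critic2 are assumed to come from earlier setup code not shown in the post:

% Start a parallel pool if one is not already running, so UseParallel can dispatch to workers
if isempty(gcp('nocreate'))
    parpool;   % uses the default cluster profile
end

% Train the SAC agent with the options defined above
trainingStats = train(agent, env, trainingOptions);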
1 Comment
Takeshi Takahashi on 18 Apr 2022
agentOptions.ExperienceBufferLength seems too short, which may indirectly affect the parallel training. Can you increase ExperienceBufferLength to 1e6 or more?
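In code, the suggested change would look like this, applied where agentOptions is configured above; the 1e6 value comes from the comment, and the note about MiniBatchSize is an added observation:

agentOptions.ExperienceBufferLength = 1e6;   % replay buffer far larger than MiniBatchSize (256)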

Answers (0)
