An Error in LSTM Network Training Using Reinforcement Learning Toolbox

I wrote the following code to use an LSTM:
for idx = 1:type
    actorNetWork = [
        imageInputLayer(obsSize,Normalization="none")
        convolution2dLayer(8,16, ...
            Stride=1,Padding=1,WeightsInitializer="he")
        reluLayer
        convolution2dLayer(4,8, ...
            Stride=1,Padding="same",WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(256,WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(128,WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(64,WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(numAct)
        softmaxLayer
        ];
    actorNetWork = dlnetwork(actorNetWork);
    criticNetwork = [
        sequenceInputLayer(obsSize)
        sequenceFoldingLayer('Name','fold')
        convolution2dLayer(8,16, ...
            Stride=1,Padding=1,WeightsInitializer="he")
        reluLayer
        convolution2dLayer(4,8, ...
            Stride=1,Padding="same",WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(368,WeightsInitializer="he")
        reluLayer
        fullyConnectedLayer(256,WeightsInitializer="he")
        reluLayer
        sequenceUnfoldingLayer('Name','unfold')
        flattenLayer('Name','flatten')
        lstmLayer(64)
        fullyConnectedLayer(1)];
    lgraph = layerGraph(criticNetwork);
    lgraph = connectLayers(lgraph,'fold/miniBatchSize','unfold/miniBatchSize');
    actor(idx) = rlDiscreteCategoricalActor(actorNetWork,oinfo,ainfo);
    critic(idx) = rlValueFunction(lgraph,oinfo);
end
opt = rlPPOAgentOptions(...
    'ActorOptimizerOptions',actorOpts,...
    'CriticOptimizerOptions',criticOpts,...
    'ExperienceHorizon',128,...
    'ClipFactor',0.2,...
    'EntropyLossWeight',0.1,...
    'MiniBatchSize',128,...
    'NumEpoch',3,...
    'AdvantageEstimateMethod','gae',...
    'GAEFactor',0.95,...
    'SampleTime',Ts,...
    'DiscountFactor',0.995);
agentA = rlPPOAgent(actor(1),critic(1),opt);
but I get the following error. I appreciate your help.
Error using rl.agent.util.checkFcnRNNCompatibility
To train an agent that has states, all actor and critic representations for that agent must have states.
Error in rl.agent.AbstractOnPolicyPGAgent/setCritic (line 148)
rl.agent.util.checkFcnRNNCompatibility(critic,this.Actor_)
Error in rl.agent.rlPPOAgent (line 21)
this = setCritic(this,critic);
Error in rlPPOAgent (line 95)
Agent = rl.agent.rlPPOAgent(Actor, Critic, AgentOptions);

Accepted Answer

Emmanouil Tzorakoleftherakis
I believe that both the actor and the critic need to be LSTM networks. In your case, only the critic is: the critic contains an lstmLayer (so it has states), while the actor is a plain feedforward network, which triggers the compatibility check in the error message.
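
One way to apply this is to give the actor the same stateful, sequence-based structure as the critic: a sequenceInputLayer, the fold/unfold pattern around the convolutional layers, and an lstmLayer before the output. A minimal sketch, assuming the same obsSize, numAct, oinfo, and ainfo as in the original code (the layer sizes here are illustrative, not prescribed):

```matlab
% Sketch: a recurrent actor that mirrors the critic's architecture,
% so both actor and critic have states as rlPPOAgent requires.
actorNetWork = [
    sequenceInputLayer(obsSize)           % sequence input instead of imageInputLayer
    sequenceFoldingLayer('Name','fold')   % apply conv layers independently per time step
    convolution2dLayer(8,16,Stride=1,Padding=1,WeightsInitializer="he")
    reluLayer
    convolution2dLayer(4,8,Stride=1,Padding="same",WeightsInitializer="he")
    reluLayer
    sequenceUnfoldingLayer('Name','unfold')
    flattenLayer('Name','flatten')
    lstmLayer(64)                         % gives the actor states, matching the critic
    fullyConnectedLayer(numAct)
    softmaxLayer];
lgraphActor = layerGraph(actorNetWork);
lgraphActor = connectLayers(lgraphActor,'fold/miniBatchSize','unfold/miniBatchSize');
actor(idx) = rlDiscreteCategoricalActor(lgraphActor,oinfo,ainfo);
```

With both function approximators stateful, the rl.agent.util.checkFcnRNNCompatibility check should pass when constructing the agent with rlPPOAgent.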

More Answers (0)

Release

R2022a
