Hi,
I have been using the RL toolbox in R2021a with a TD3 agent and a fully connected (non-LSTM) actor network to control a PMSM, with no issues.
I recently updated my install to R2022a, and the same code (which runs fine in R2021a) now throws the error below, which does not make sense to me since the network is not recurrent. The output is:
observationInfo =
LowerLimit: [20×1 double]
UpperLimit: [20×1 double]
Name: [0×0 string]
Description: [0×0 string]
Dimension: [20 1]
DataType: "double"
actionInfo =
LowerLimit: -Inf
UpperLimit: Inf
Name: [0×0 string]
Description: [0×0 string]
Dimension: [2 1]
DataType: "double"
env =
Model : MyPMLSM_RL_Single_Vel
ResetFcn : []
UseFastRestart : on
'SequenceLength' option value must be greater than 1 for agent using recurrent neural networks.
rl.agent.util.checkOptionFcnCompatibility(this.AgentOptions,actor);
this = setActor(this,actor);
Agent = rl.agent.rlTD3Agent(Actor, Critic, AgentOptions);
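To double-check that nothing recurrent has sneaked into the network, this is the check I would run against the actorNet layer array defined below (just a sketch; the LSTM/GRU class names are my assumption of how those layers are represented):

% Sketch: confirm the actor layer array contains no recurrent (LSTM/GRU) layers
hasRecurrent = any(arrayfun(@(l) isa(l,'nnet.cnn.layer.LSTMLayer') || ...
    isa(l,'nnet.cnn.layer.GRULayer'), actorNet));
disp(hasRecurrent)   % prints 0 (false) for the network below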
The actor and agent options and code are below. Has anyone encountered this error after updating to R2022a with code originally written and working in R2021a?
Thanks in advance,
Patrick
actorNet = [
    sequenceInputLayer(numObservations,'Normalization','none','Name','observation')
    fullyConnectedLayer(200,'Name','ActorFC1')
    reluLayer('Name','ActorRelu1')
    fullyConnectedLayer(100,'Name','ActorFC2')
    reluLayer('Name','ActorRelu2')
    fullyConnectedLayer(numActions,'Name','ActorFC3')
    tanhLayer('Name','ActorTanh1')
    ];
actorOptions = rlRepresentationOptions('Optimizer','adam','LearnRate',2e-4,...
actor = rlDeterministicActorRepresentation(actorNet,observationInfo,actionInfo,...
'Observation',{'observation'},'Action',{'ActorTanh1'},actorOptions);
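As a quick sanity check that the actor itself evaluates fine outside the agent, I can run it on a random observation (sketch; as far as I recall, getAction on a representation returns the action wrapped in a cell array in these releases):

% Sketch: evaluate the actor on a dummy 20x1 observation
dummyObs = {rand(20,1)};
dummyAct = getAction(actor,dummyObs);   % 2x1 action (in a cell), bounded in [-1,1] by the tanh output layer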
Ts_agent = Ts;
agentOptions = rlTD3AgentOptions("SampleTime",Ts_agent, ...
    "ExperienceBufferLength",2e6, ...
    "TargetSmoothFactor",0.005, ...
    "SaveExperienceBufferWithAgent",true);