rlStochasticActorRepresentation
(Not recommended) Stochastic actor representation for reinforcement learning agents
Since R2020a
rlStochasticActorRepresentation is not recommended. Use either rlDiscreteCategoricalActor or rlContinuousGaussianActor instead. For more information, see rlStochasticActorRepresentation is not recommended.
Description
This object implements a function approximator to be used as a stochastic actor within a reinforcement learning agent. A stochastic actor takes the observations as inputs and returns a random action, thereby implementing a stochastic policy with a specific probability distribution. After you create an rlStochasticActorRepresentation object, use it to create a suitable agent, such as an rlACAgent or rlPGAgent agent. For more information on creating representations, see Create Policies and Value Functions.
Creation
Syntax
discActor = rlStochasticActorRepresentation(net,observationInfo,discActionInfo,'Observation',obsName)
discActor = rlStochasticActorRepresentation({basisFcn,W0},observationInfo,actionInfo)
discActor = rlStochasticActorRepresentation(___,options)
contActor = rlStochasticActorRepresentation(net,observationInfo,contActionInfo,'Observation',obsName)
contActor = rlStochasticActorRepresentation(___,options)
Description
Discrete Action Space Stochastic Actor
discActor = rlStochasticActorRepresentation(net,observationInfo,discActionInfo,'Observation',obsName) creates a stochastic actor with a discrete action space, using the deep neural network net as function approximator. Here, the output layer of net must have as many elements as the number of possible discrete actions. This syntax sets the ObservationInfo and ActionInfo properties of discActor to the inputs observationInfo and discActionInfo, respectively. obsName must contain the names of the input layers of net.
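For example, the following sketch builds such an actor for a hypothetical environment with a four-dimensional observation and three possible actions; the specification objects, layer names, and layer sizes are illustrative and not part of this reference.

% Illustrative observation and action specifications.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);

% Network whose final fully connected layer has one element per possible action;
% a softmax layer is commonly added so the outputs form a probability distribution.
net = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(16,'Name','fc')
    reluLayer('Name','relu')
    fullyConnectedLayer(3,'Name','fcProb')
    softmaxLayer('Name','actionProb')];

discActor = rlStochasticActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'});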
discActor = rlStochasticActorRepresentation({basisFcn,W0},observationInfo,actionInfo) creates a discrete space stochastic actor using a custom basis function as underlying approximator. The first input argument is a two-element cell array in which the first element contains the handle basisFcn to a custom basis function, and the second element contains the initial weight matrix W0. This syntax sets the ObservationInfo and ActionInfo properties of discActor to the inputs observationInfo and actionInfo, respectively.
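As a sketch of this syntax, the basis function and dimensions below are hypothetical; the initial weight matrix is sized here so that W0'*basisFcn(obs) has one element per possible action.

% Hypothetical quadratic basis of a 4-dimensional observation.
basisFcn = @(obs) [obs; obs.^2];        % returns an 8-by-1 column vector

obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);    % three possible actions

% Initial weights: one row per basis element, one column per action.
W0 = rand(8,3);

discActor = rlStochasticActorRepresentation({basisFcn,W0},obsInfo,actInfo);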
discActor = rlStochasticActorRepresentation(___,options) creates the discrete action space stochastic actor discActor using the additional options set options, which is an rlRepresentationOptions object. This syntax sets the Options property of discActor to the options input argument. You can use this syntax with any of the previous input-argument combinations.
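For instance, continuing the deep network sketch above, the options object is simply appended as the last argument; the learning rate and gradient threshold values here are arbitrary.

% Arbitrary training-related options for the representation.
opts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);

% Same arguments as before, with the options object appended last.
discActor = rlStochasticActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'},opts);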
Continuous Action Space Gaussian Actor
contActor = rlStochasticActorRepresentation(net,observationInfo,contActionInfo,'Observation',obsName) creates a Gaussian stochastic actor with a continuous action space using the deep neural network net as function approximator. Here, the output layer of net must have twice as many elements as the number of dimensions of the continuous action space. This syntax sets the ObservationInfo and ActionInfo properties of contActor to the inputs observationInfo and contActionInfo, respectively. obsName must contain the names of the input layers of net.
Note
contActor does not enforce constraints set by the action specification; therefore, when using this actor, you must enforce action space constraints within the environment.
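A minimal sketch of such a network follows, for a hypothetical two-dimensional observation and one-dimensional bounded action. Because the network must output both a mean and a standard deviation for each action dimension, this sketch uses a softplus path to keep the standard deviation nonnegative; all dimensions, limits, and layer names are illustrative.

% Illustrative specifications: 2-D observation, 1-D action bounded in [-10,10].
obsInfo = rlNumericSpec([2 1]);
actInfo = rlNumericSpec([1 1],'LowerLimit',-10,'UpperLimit',10);

% Common input path.
inPath = [
    imageInputLayer([2 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(16,'Name','fc')
    reluLayer('Name','relu')];

% Mean path, scaled to the action range.
meanPath = [
    fullyConnectedLayer(1,'Name','fcMean')
    tanhLayer('Name','tanhMean')
    scalingLayer('Name','scaleMean','Scale',actInfo.UpperLimit)];

% Standard deviation path; softplus keeps it nonnegative.
sdevPath = [
    fullyConnectedLayer(1,'Name','fcStd')
    softplusLayer('Name','splusStd')];

% Concatenate mean and standard deviation into a single two-element output.
outLayer = concatenationLayer(3,2,'Name','meanAndStd');

net = layerGraph(inPath);
net = addLayers(net,meanPath);
net = addLayers(net,sdevPath);
net = addLayers(net,outLayer);
net = connectLayers(net,'relu','fcMean');
net = connectLayers(net,'relu','fcStd');
net = connectLayers(net,'scaleMean','meanAndStd/in1');
net = connectLayers(net,'splusStd','meanAndStd/in2');

contActor = rlStochasticActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'});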
contActor = rlStochasticActorRepresentation(___,options) creates the continuous action space Gaussian actor contActor using the additional option set options, which is an rlRepresentationOptions object. This syntax sets the Options property of contActor to the options input argument. You can use this syntax with any of the previous input-argument combinations.
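As in the discrete case, the options object is appended last. Continuing the Gaussian actor sketch above (the option values are arbitrary):

% Arbitrary representation options; reuses net, obsInfo, and actInfo defined above.
opts = rlRepresentationOptions('LearnRate',5e-4,'GradientThreshold',1);

contActor = rlStochasticActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'},opts);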
Input Arguments
Properties
Object Functions
rlACAgent | Actor-critic (AC) reinforcement learning agent
rlPGAgent | Policy gradient (PG) reinforcement learning agent
rlPPOAgent | Proximal policy optimization (PPO) reinforcement learning agent
rlSACAgent | Soft actor-critic (SAC) reinforcement learning agent
getAction | Obtain action from agent, actor, or policy object given environment observations
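For instance, the following sketch samples an action from a discrete actor with getAction and then passes the actor to an agent constructor; it re-creates the hypothetical basis-function actor from the earlier sketch so that the snippet runs on its own.

% Hypothetical discrete actor, as in the basis-function sketch above.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);
basisFcn = @(obs) [obs; obs.^2];
discActor = rlStochasticActorRepresentation({basisFcn,rand(8,3)},obsInfo,actInfo);

% Sample a random action for a random observation.
act = getAction(discActor,{rand(4,1)});

% The actor can also be passed to an agent constructor, for example a PG agent.
agent = rlPGAgent(discActor);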