Options for PG agent
Use an rlPGAgentOptions object to specify options for policy
gradient (PG) agents. To create a PG agent, use rlPGAgent.
For more information on PG agents, see Policy Gradient Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlPGAgentOptions creates an
rlPGAgentOptions object for use as an argument when creating a PG
agent using all default settings. You can modify the object properties using dot
notation.
UseBaseline — Use baseline for learning
true (default) | false
Option to use baseline for learning, specified as a logical value. When
true, you must specify a critic
network as the baseline function approximator.
In general, for simpler problems with smaller actor networks, PG agents work better without a baseline.
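For example, the following sketch disables the baseline through dot notation; whether this helps depends on your problem and network sizes.
opt = rlPGAgentOptions;
opt.UseBaseline = false;   % no critic baseline; suitable for small actor networks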
EntropyLossWeight — Entropy loss weight
0 (default) | scalar value between 0 and 1
Entropy loss weight, specified as a scalar value between 0 and
1. A higher entropy loss weight promotes agent exploration by
applying a penalty for being too certain about which action to take. Doing so can help the
agent move out of local optima.
When gradients are computed during training, an additional gradient component is computed for minimizing this loss function.
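As a sketch, you can set a nonzero entropy loss weight to encourage exploration; the value 0.01 below is illustrative, not a recommendation.
opt = rlPGAgentOptions;
opt.EntropyLossWeight = 0.01;   % penalize overly confident action distributions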
ActorOptimizerOptions — Actor optimizer options
Actor optimizer options, specified as an
rlOptimizerOptions object. It allows you to specify training parameters of
the actor approximator such as learning rate, gradient threshold, as well as the
optimizer algorithm and its parameters. For more information, see
rlOptimizerOptions.
CriticOptimizerOptions — Critic optimizer options
Critic optimizer options, specified as an
rlOptimizerOptions object. It allows you to specify training parameters of
the critic approximator such as learning rate, gradient threshold, as well as the
optimizer algorithm and its parameters. For more information, see
rlOptimizerOptions.
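For instance, a sketch that configures the critic optimizer with a custom learning rate and gradient threshold (the values shown are illustrative):
criticOpts = rlOptimizerOptions( ...
    'LearnRate',1e-3, ...       % step size for critic parameter updates
    'GradientThreshold',1);     % clip gradients to stabilize training
opt = rlPGAgentOptions('CriticOptimizerOptions',criticOpts);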
SampleTime — Sample time of agent
1 (default) | positive scalar | -1
Sample time of agent, specified as a positive scalar or as
-1. Setting this parameter to
-1 allows for event-based simulations.
Within a Simulink® environment, the RL Agent block
in which the agent is specified executes every SampleTime seconds
of simulation time. If SampleTime is -1, the
block inherits the sample time from its parent subsystem.
Within a MATLAB® environment, the agent is executed every time the environment advances. In this case,
SampleTime is the time interval between consecutive
elements in the output experience returned by sim or train. If
SampleTime is -1, the time interval between
consecutive elements in the returned output experience reflects the timing of the event
that triggers the agent execution.
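For example, a sketch that configures the agent for event-based simulation:
opt = rlPGAgentOptions;
opt.SampleTime = -1;   % inherit sample time from the parent subsystem in Simulink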
DiscountFactor — Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
rlPGAgent — Policy gradient reinforcement learning agent
Create PG Agent Options Object
This example shows how to create and modify a PG agent options object.
Create a PG agent options object, specifying the discount factor.
opt = rlPGAgentOptions('DiscountFactor',0.9)
opt = 
  rlPGAgentOptions with properties:

               UseBaseline: 1
         EntropyLossWeight: 0
     ActorOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
    CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                SampleTime: 1
            DiscountFactor: 0.9000
                InfoToSave: [1x1 struct]
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;