
rlQAgentOptions

Options for Q-learning agent

Description

Use an rlQAgentOptions object to specify options when creating a Q-learning agent. To create a Q-learning agent, use rlQAgent.

For more information on Q-learning agents, see Q-Learning Agent.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Creation

Description

opt = rlQAgentOptions creates an rlQAgentOptions object with all default settings, for use as an argument when creating a Q-learning agent. You can modify the object properties using dot notation.

opt = rlQAgentOptions(Name=Value) creates the options object opt and sets its properties using one or more name-value arguments. For example, rlQAgentOptions(DiscountFactor=0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value arguments.

Properties


SampleTime

Sample time of the agent, specified as a positive scalar or as -1.

Within a MATLAB® environment, the agent is executed every time the environment advances, so SampleTime does not affect the timing of the agent execution. If SampleTime is -1 in a MATLAB environment, the time interval between consecutive elements in the returned output experience is considered equal to 1.

Within a Simulink® environment, the RL Agent block that uses the agent object executes every SampleTime seconds of simulation time. If SampleTime is -1, the block inherits the sample time from its input signals. Set SampleTime to -1 when the block is a child of an event-driven subsystem; in this case, the time interval between consecutive elements in the returned output experience reflects the timing of the events that trigger the RL Agent block execution.

Set SampleTime to a positive scalar when the block is not a child of an event-driven subsystem. Doing so ensures that the block executes at appropriate intervals even when input signal sample times change due to model variations. If SampleTime is a positive scalar, this value is also the time interval between consecutive elements in the output experience returned by sim or train, regardless of the type of environment.

This property is shared between the agent and the agent options object within the agent. If you change this property in the agent options object, it also changes in the agent, and vice versa.

Example: SampleTime=-1
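
As a minimal sketch, you can set this property at creation or change it afterward through dot notation (the values here are illustrative, not recommendations):

opt = rlQAgentOptions(SampleTime=0.1);  % RL Agent block runs every 0.1 s
opt.SampleTime = -1;                    % inherit timing from input signals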

DiscountFactor

Discount factor applied to future rewards during training, specified as a nonnegative scalar less than or equal to 1.

Example: DiscountFactor=0.9
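
To see what this factor means numerically: a reward received k steps in the future contributes DiscountFactor^k times its value to the return. A short illustrative computation (the values are arbitrary):

gamma = 0.9;         % DiscountFactor
k = 0:10;            % number of steps into the future
weights = gamma.^k;  % contribution factor of a reward k steps ahead
disp(weights(end))   % gamma^10, approximately 0.3487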

EpsilonGreedyExploration

Options for epsilon-greedy exploration, specified as an EpsilonGreedyExploration object with these properties.

  • Epsilon (default 1): Initial value of the probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger Epsilon value means that the agent randomly explores the action space at a higher rate.

  • EpsilonMin (default 0.01): Minimum value of Epsilon.

  • EpsilonDecay (default 0.0050): Decay rate of Epsilon.

At each interaction with the environment (that is, at each training step), if Epsilon is greater than EpsilonMin, then it is updated using this formula.

Epsilon = Epsilon*(1-EpsilonDecay)

Epsilon is conserved between the end of an episode and the start of the next one, so it keeps decreasing over multiple episodes until it reaches EpsilonMin.
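
The update formula implies an exponential decay that is clipped at EpsilonMin. A short illustrative sketch, using the default values listed above:

Epsilon = 1; EpsilonMin = 0.01; EpsilonDecay = 0.0050;  % default values
numSteps = 2000;
history = zeros(1,numSteps);
for k = 1:numSteps
    history(k) = Epsilon;
    if Epsilon > EpsilonMin
        Epsilon = Epsilon*(1 - EpsilonDecay);
    end
end
plot(history), xlabel("Training step"), ylabel("Epsilon")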

If your agent converges on a local optimum too quickly, you can promote agent exploration by increasing the value of Epsilon.

To specify exploration options, use dot notation after creating the rlQAgentOptions object opt. For example, set the initial epsilon value to 0.9.

opt.EpsilonGreedyExploration.Epsilon = 0.9;

Note

The Epsilon property of an EpsilonGreedyExploration object represents the initial value of Epsilon at the beginning of the first episode.
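
A combined sketch that configures all three exploration fields after creating the options object (the values are illustrative):

opt = rlQAgentOptions;
opt.EpsilonGreedyExploration.Epsilon      = 0.9;   % initial exploration rate
opt.EpsilonGreedyExploration.EpsilonMin   = 0.05;  % floor on exploration
opt.EpsilonGreedyExploration.EpsilonDecay = 1e-3;  % per-step decay rate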

CriticOptimizerOptions

Critic optimizer options, specified as an rlOptimizerOptions object. These options let you specify training parameters of the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.

Example: CriticOptimizerOptions = rlOptimizerOptions(LearnRate=5e-3)
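
As a sketch, you can build the optimizer options separately and pass them in at creation (the specific values are illustrative):

criticOpts = rlOptimizerOptions( ...
    LearnRate=5e-3, ...       % critic learning rate
    GradientThreshold=1);     % clip gradients above this threshold
opt = rlQAgentOptions(CriticOptimizerOptions=criticOpts);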

InfoToSave

Options to save additional agent data, specified as a structure containing these fields.

  • Optimizer

  • PolicyState

You can save an agent object using one of these methods:

  • Use the save command.

  • Specify the SaveAgentCriteria and SaveAgentValue options in an rlTrainingOptions object.

  • Specify an appropriate logging function within a FileLogger object.

When you save an agent using any method, the fields in the InfoToSave structure determine whether the corresponding data saves with the agent. For example, if you set the PolicyState field to true, then the policy state saves along with the agent.

You can modify the InfoToSave property only after you create the agent options object.

Example: options.InfoToSave.Optimizer=true

Optimizer

Option to save the critic optimizer, specified as a logical value. If you set the Optimizer field to false, then the critic optimizer (which is a hidden property of the agent and can contain internal states) is not saved along with the agent, thereby saving disk space and memory. However, when the optimizer contains internal states, the state of the saved agent is not identical to the state of the original agent.

Example: true

PolicyState

Option to save the state of the explorative policy, specified as a logical value. If you set the PolicyState field to false, then the state of the explorative policy (which is a hidden agent property) is not saved along with the agent. In this case, the state of the saved agent is not identical to the state of the original agent.

Example: true
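
A short sketch that sets both fields, trading fidelity of the saved agent against file size:

opt = rlQAgentOptions;
opt.InfoToSave.Optimizer   = false;  % drop critic optimizer internal state
opt.InfoToSave.PolicyState = true;   % keep exploration policy state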

Object Functions

rlQAgent    Q-learning reinforcement learning agent

Examples


Create an rlQAgentOptions object that specifies the agent sample time.

opt = rlQAgentOptions(SampleTime=0.5)
opt = 
  rlQAgentOptions with properties:

                  SampleTime: 0.5000
              DiscountFactor: 0.9900
    EpsilonGreedyExploration: [1×1 rl.option.EpsilonGreedyExploration]
      CriticOptimizerOptions: [1×1 rl.option.rlOptimizerOptions]
                  InfoToSave: [1×1 struct]

You can modify options using dot notation. For example, set the agent discount factor to 0.95.

opt.DiscountFactor = 0.95;
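
To use the options, pass the object to rlQAgent when you create the agent. The following sketch assumes a small discrete problem; the observation and action specifications are illustrative placeholders.

obsInfo = rlFiniteSetSpec(1:4);              % 4 discrete observations
actInfo = rlFiniteSetSpec(1:2);              % 2 discrete actions
qTable  = rlTable(obsInfo,actInfo);          % tabular Q-value model
critic  = rlQValueFunction(qTable,obsInfo,actInfo);
agent   = rlQAgent(critic,opt);              % opt from the example above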

Version History

Introduced in R2019a