rlQAgentOptions
Options for Q-learning agent
Description
Use an rlQAgentOptions object to specify options when creating a
      Q-learning agent. To create a Q-learning agent, use rlQAgent.
For more information on Q-learning agents, see Q-Learning Agent.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
Creation
Description
opt = rlQAgentOptions creates an rlQAgentOptions object for use as an argument when creating a Q-learning agent using all default settings. You can modify the object properties using dot notation.
opt = rlQAgentOptions(Name=Value) creates the options object opt and sets its properties using one or more name-value arguments. For example, rlQAgentOptions(DiscountFactor=0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value arguments.
Properties
Sample time of the agent, specified as a positive scalar or as -1.
Within a MATLAB® environment, the agent executes every time the environment advances, so SampleTime does not affect the timing of the agent execution. If SampleTime is set to -1, in MATLAB environments the time interval between consecutive elements of the returned output experience is considered equal to 1.
Within a Simulink® environment, the RL Agent block that uses the agent object executes every SampleTime seconds of simulation time. If SampleTime is set to -1, the block inherits the sample time from its input signals. Set SampleTime to -1 when the block is a child of an event-driven subsystem.
Set SampleTime to a positive scalar when the block is not a child of an event-driven subsystem. Doing so ensures that the block executes at appropriate intervals even when input signal sample times change due to model variations. If SampleTime is a positive scalar, this value is also the time interval between consecutive elements in the output experience returned by sim or train, regardless of the type of environment.
If SampleTime is set to -1, in Simulink environments the time interval between consecutive elements of the returned output experience reflects the timing of the events that trigger the RL Agent block execution.
This property is shared between the agent and the agent options object within the agent. If you change this property in the agent options object, it also changes in the agent, and vice versa.
Example: SampleTime=-1
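For example, the following sketch (values chosen for illustration) sets a fixed sample time for use in a Simulink model, then switches to inherited timing for an event-driven subsystem.
% Execute the RL Agent block every 0.1 s of simulation time.
opt = rlQAgentOptions(SampleTime=0.1);

% Alternatively, inherit the sample time from the block input signals,
% for example when the block is inside an event-driven subsystem.
opt.SampleTime = -1;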
Discount factor applied to future rewards during training, specified as a nonnegative scalar less than or equal to 1.
Example: DiscountFactor=0.9
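As an illustrative sketch (the reward sequence is hypothetical), the discount factor weights a reward received k steps in the future by DiscountFactor^k when accumulating the return.
% Illustrative only: discounted return of a hypothetical reward sequence.
gamma = 0.9;                 % discount factor
rewards = [1 0 0 5];         % hypothetical rewards r0, r1, r2, r3
k = 0:numel(rewards)-1;
G = sum(gamma.^k .* rewards) % G = 1 + 0.9^3*5 = 4.645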
Options for epsilon-greedy exploration, specified as an
                EpsilonGreedyExploration object with these properties.
| Property | Description | Default Value |
|---|---|---|
| Epsilon | Initial value of the probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger Epsilon value means that the agent randomly explores the action space at a higher rate. | 1 |
| EpsilonMin | Minimum value of Epsilon | 0.01 |
| EpsilonDecay | Decay rate | 0.005 |
At each interaction with the environment (that is, at each training step), if
                Epsilon is greater than EpsilonMin, then
            it is updated using this formula.
Epsilon = Epsilon*(1-EpsilonDecay)
Epsilon is conserved between the end of an episode and the start
            of the next one. So, Epsilon decreases uniformly over multiple
            episodes until it reaches EpsilonMin.
If your agent converges on a local optimum too quickly, you can promote agent exploration by increasing the value of Epsilon.
To specify exploration options, use dot notation after creating the rlQAgentOptions object opt. For example, set the
            initial epsilon value to 0.9.
opt.EpsilonGreedyExploration.Epsilon = 0.9;
Note
The Epsilon property of an
                    EpsilonGreedyExploration object represents the
                    initial value of Epsilon at the
                beginning of the first episode.
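As a sketch of this decay schedule (the step count is chosen for illustration), you can reproduce the documented update rule to see how Epsilon evolves over training steps.
% Illustrative only: evolution of Epsilon using the documented update rule.
epsilon = 1;           % initial Epsilon
epsilonMin = 0.01;     % EpsilonMin
epsilonDecay = 0.005;  % EpsilonDecay
for step = 1:1000
    if epsilon > epsilonMin
        epsilon = epsilon*(1 - epsilonDecay);
    end
end
epsilon % stops decaying once it is no longer greater than EpsilonMin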
Critic optimizer options, specified as an rlOptimizerOptions object. This object lets you specify training parameters of the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.
Example: CriticOptimizerOptions = rlOptimizerOptions(LearnRate=5e-3)
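For example (the values shown are illustrative), you can configure the critic optimizer when creating the agent options object.
% Illustrative critic optimizer settings.
criticOpts = rlOptimizerOptions( ...
    LearnRate=5e-3, ...
    GradientThreshold=1);
opt = rlQAgentOptions(CriticOptimizerOptions=criticOpts);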
Options to save additional agent data, specified as a structure containing the following fields.
- Optimizer
- PolicyState
You can save an agent object using one of these methods:
- Use the save command.
- Specify saveAgentCriteria and saveAgentValue in an rlTrainingOptions object.
- Specify an appropriate logging function within a FileLogger object.
When you save an agent using any method, the fields in the
                                InfoToSave structure determine whether the
                        corresponding data saves with the agent. For example, if you set the
                                PolicyState field to true,
                        then the policy state saves along with the agent.
You can modify the InfoToSave property only after you
                        create the agent options object.
Example: options.InfoToSave.Optimizer=true
Option to save the critic optimizer, specified as a logical value. For example, if you set the Optimizer field to false, then the critic optimizer (which is a hidden property of the agent and can contain internal states) is not saved along with the agent, thereby saving disk space and memory. However, when the optimizer contains internal states, the state of the saved agent is not identical to the state of the original agent.
Example: true
Option to save the state of the explorative policy,
                                                specified as a logical value. If you set the
                                                  PolicyState field to
                                                  false, then the state of the
                                                explorative policy (which is a hidden agent
                                                property) is not saved along with the agent. In this
                                                case, the state of the saved agent is not identical
                                                to the state of the original agent.
Example: true
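For example, this sketch (the agent variable and file name are hypothetical) disables saving of the optimizer state before saving an existing agent.
% Assumes agent is an existing rlQAgent object.
agent.AgentOptions.InfoToSave.Optimizer = false;  % skip optimizer state
agent.AgentOptions.InfoToSave.PolicyState = true; % keep exploration state
save("myQAgent.mat","agent")                      % hypothetical file name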
Object Functions
| rlQAgent | Q-learning reinforcement learning agent | 
Examples
Create an rlQAgentOptions object that specifies the agent sample time.
opt = rlQAgentOptions(SampleTime=0.5)
opt = 
  rlQAgentOptions with properties:
                  SampleTime: 0.5000
              DiscountFactor: 0.9900
    EpsilonGreedyExploration: [1×1 rl.option.EpsilonGreedyExploration]
      CriticOptimizerOptions: [1×1 rl.option.rlOptimizerOptions]
                  InfoToSave: [1×1 struct]
You can modify options using dot notation. For example, set the agent discount factor to 0.95.
opt.DiscountFactor = 0.95;
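As a further sketch (the observation and action specifications are hypothetical), you can pass the options object to rlQAgent when creating an agent with a table-based critic.
% Hypothetical finite observation and action spaces.
obsInfo = rlFiniteSetSpec(1:4);
actInfo = rlFiniteSetSpec(1:2);

% Table-based Q-value critic for the discrete spaces.
qTable = rlTable(obsInfo,actInfo);
critic = rlQValueFunction(qTable,obsInfo,actInfo);

% Create the Q-learning agent using the options object.
agent = rlQAgent(critic,opt);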
Version History
Introduced in R2019a
See Also
Objects