rlRepresentationOptions
(Not recommended) Options set for reinforcement learning agent representations (critics and actors)
rlRepresentationOptions is not recommended. Use an rlOptimizerOptions object within an agent options object instead. For more information, see rlRepresentationOptions is not recommended.
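As a hedged sketch of the recommended migration, optimizer settings move from the representation to the agent options object. The DQN agent, the network `net`, and the specifications `obsInfo` and `actInfo` below are illustrative placeholders, not part of this page:

```matlab
% Old (not recommended): options attached to the representation itself
% critic = rlQValueRepresentation(net,obsInfo,actInfo, ...
%     rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1));

% Recommended: equivalent optimizer settings inside the agent options
criticOpts = rlOptimizerOptions('LearnRate',1e-3,'GradientThreshold',1);
agentOpts  = rlDQNAgentOptions('CriticOptimizerOptions',criticOpts);
```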
Description
Use an rlRepresentationOptions object to specify an options object for critics (rlValueRepresentation, rlQValueRepresentation) and actors (rlDeterministicActorRepresentation, rlStochasticActorRepresentation).
Creation
Description
repOpts = rlRepresentationOptions creates a default option set to use as a last argument when creating a reinforcement learning actor or critic. You can modify the object properties using dot notation.
repOpts = rlRepresentationOptions(Name,Value) creates an options object with the specified Properties using one or more name-value pair arguments.
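A minimal sketch of both creation syntaxes. LearnRate and GradientThreshold are two of the object's properties; the final line assumes a network `net` and an observation specification `obsInfo` that are not defined on this page:

```matlab
% Default option set, then modify a property with dot notation
repOpts = rlRepresentationOptions;
repOpts.LearnRate = 5e-3;

% Equivalent creation using name-value pair arguments
repOpts = rlRepresentationOptions('LearnRate',5e-3,'GradientThreshold',1);

% Pass the option set as the last argument when creating a critic
% (net and obsInfo are assumed to exist)
critic = rlValueRepresentation(net,obsInfo,'Observation',{'state'},repOpts);
```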
Properties
Object Functions
rlValueRepresentation | (Not recommended) Value function critic representation for reinforcement learning agents
rlQValueRepresentation | (Not recommended) Q-Value function critic representation for reinforcement learning agents
rlDeterministicActorRepresentation | (Not recommended) Deterministic actor representation for reinforcement learning agents
rlStochasticActorRepresentation | (Not recommended) Stochastic actor representation for reinforcement learning agents