Policies and Value Functions

Define policy and value function representations, such as deep neural networks and Q tables

A reinforcement learning policy is a mapping that selects an action to take based on observations from the environment. During training, the agent tunes the parameters of its policy representation to maximize the long-term reward.

Reinforcement Learning Toolbox™ software provides objects for actor and critic representations. The actor represents the policy that selects the best action to take. The critic represents the value function that estimates the value of the current policy. Depending on your application and selected agent, you can define policy and value functions using deep neural networks, linear basis functions, or look-up tables. For more information, see Create Policy and Value Function Representations.

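For example, a table-based critic for a discrete environment can be built and handed to a Q-learning agent roughly as follows. This is a minimal sketch that assumes the predefined BasicGridWorld environment; the option value is illustrative and the exact representation syntax can vary between toolbox releases.

% Minimal sketch: Q-table critic for a predefined discrete environment.
% Assumes the BasicGridWorld predefined environment; values are illustrative.
env = rlPredefinedEnv("BasicGridWorld");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

qTable = rlTable(obsInfo, actInfo);     % value table over all observation-action pairs
critic = rlRepresentation(qTable);      % wrap the table as a critic representation
critic.Options.LearnRate = 1;           % tune representation options

agent = rlQAgent(critic);               % Q-learning agent that updates this critic
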
Functions

rlRepresentation - Model representation for reinforcement learning agents
rlRepresentationOptions - Create options for reinforcement learning agent representations
scalingLayer - Scaling layer for actor or critic network
quadraticLayer - Quadratic layer for actor or critic network
rlTable - Value table or Q table
getActor - Get actor representation from reinforcement learning agent
setActor - Set actor representation of reinforcement learning agent
getCritic - Get critic representation from reinforcement learning agent
setCritic - Set critic representation of reinforcement learning agent
getLearnableParameterValues - Obtain learnable parameter values from policy or value function representation
setLearnableParameterValues - Set learnable parameter values of policy or value function representation
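
As a hedged sketch of how the access functions above fit together, the following assumes an existing actor-critic style agent named agent (for example, one created with rlDDPGAgent) and shows one way to inspect and push back learnable parameters; it is illustrative rather than a prescribed workflow.

% Extract the current actor and critic representations from an existing agent
actor  = getActor(agent);
critic = getCritic(agent);

% Read the learnable parameters (e.g., network weights) of the actor
actorParams = getLearnableParameterValues(actor);

% ...modify actorParams here, for example to copy weights from another agent...

% Write the parameters back and reinsert the updated actor into the agent
actor = setLearnableParameterValues(actor, actorParams);
agent = setActor(agent, actor);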

Topics

Create Policy and Value Function Representations

Specify policy and value function representations using function approximators, such as deep neural networks.
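
For instance, a deep neural network critic and actor might be defined along these lines. This is a sketch only: the observation and action specifications, layer names, and sizes are assumptions, and the rlRepresentation calling syntax may differ slightly between releases (see the rlRepresentation reference page for your version).

% Illustrative specs: a 4-element continuous observation and a scalar action in [-2, 2]
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1], 'LowerLimit', -2, 'UpperLimit', 2);

% Critic: deep network mapping observations to a scalar value estimate
criticNet = [
    imageInputLayer([4 1 1], 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(32, 'Name', 'criticFC')
    reluLayer('Name', 'criticRelu')
    fullyConnectedLayer(1, 'Name', 'value')];

repOpts = rlRepresentationOptions('LearnRate', 1e-3, 'GradientThreshold', 1);
critic = rlRepresentation(criticNet, obsInfo, 'Observation', {'state'}, repOpts);

% Actor: deep network mapping observations to a bounded continuous action;
% tanhLayer followed by scalingLayer keeps the output within the action limits
actorNet = [
    imageInputLayer([4 1 1], 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(32, 'Name', 'actorFC')
    reluLayer('Name', 'actorRelu')
    fullyConnectedLayer(1, 'Name', 'actorOut')
    tanhLayer('Name', 'tanh')
    scalingLayer('Name', 'actionScaling', 'Scale', 2)];

actor = rlRepresentation(actorNet, obsInfo, actInfo, ...
    'Observation', {'state'}, 'Action', {'actionScaling'}, repOpts);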

Import Policy and Value Function Representations

You can import existing policies from other deep learning frameworks using the ONNX™ model format.
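
A hedged sketch of that workflow follows, assuming a hypothetical ONNX file named trainedActor.onnx, existing obsInfo and actInfo specifications, and an existing agent. importONNXLayers requires the Deep Learning Toolbox Converter for ONNX Model Format support package, and the input and output layer names depend on the imported model.

% Import the network layers from an ONNX file (file name is hypothetical)
lgraph = importONNXLayers('trainedActor.onnx', 'OutputLayerType', 'regression');

% Inspect lgraph.Layers to find the actual input and output layer names, then
% wrap the imported network as an actor representation (layer names assumed here)
actor = rlRepresentation(lgraph, obsInfo, actInfo, ...
    'Observation', {'observationInput'}, 'Action', {'actionOutput'});

% Replace the actor inside an existing agent with the imported policy
agent = setActor(agent, actor);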