rlNumericSpec

Create continuous action or observation data specifications for reinforcement learning environments

Description

An rlNumericSpec object specifies continuous action or observation data specifications for reinforcement learning environments.

Creation

Description


spec = rlNumericSpec(dimension) creates a data specification for continuous actions or observations and sets the Dimension property.

spec = rlNumericSpec(dimension,Name,Value) sets Properties using name-value pair arguments.
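For instance, the following sketch creates a 4-by-1 continuous data specification and sets its limits with name-value pair arguments (the dimension and limit values are arbitrary illustration values, not taken from a specific environment):

```matlab
% Create a 4x1 continuous data specification with limits.
% A scalar limit applies to every entry of the data space.
spec = rlNumericSpec([4 1], ...
    'LowerLimit',-10, ...
    'UpperLimit',10);

% Properties can also be assigned afterward using dot notation.
spec.Name = 'observations';
```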

Properties


LowerLimit

Lower limit of the data space, specified as a scalar or a matrix of the same size as the data space. When LowerLimit is specified as a scalar, rlNumericSpec applies it to all entries in the data space.

UpperLimit

Upper limit of the data space, specified as a scalar or a matrix of the same size as the data space. When UpperLimit is specified as a scalar, rlNumericSpec applies it to all entries in the data space.

Name

Name of the rlNumericSpec object, specified as a string.

Description

Description of the rlNumericSpec object, specified as a string.

Dimension

This property is read-only.

Dimension of the data space, specified as a numeric vector.

DataType

This property is read-only.

Information about the type of data, specified as a string.
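The scalar-versus-matrix behavior of the limit properties can be illustrated with a minimal sketch (the dimensions and bound values below are illustrative only):

```matlab
% Scalar limit: every entry of the 3x1 data space is bounded below by 0.
specA = rlNumericSpec([3 1],'LowerLimit',0);

% Matrix limit: each entry of the data space gets its own lower bound.
% Here the third entry is left unbounded below.
specB = rlNumericSpec([3 1],'LowerLimit',[-1; -2; -Inf]);
```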

Object Functions

rlSimulinkEnv - Create a reinforcement learning environment using a dynamic model implemented in Simulink
rlFunctionEnv - Specify custom reinforcement learning environment dynamics using functions
rlRepresentation - Model representation for reinforcement learning agents

Examples


Create Reinforcement Learning Environment for Simulink Model

For this example, consider the rlSimplePendulumModel Simulink model. The model is a simple frictionless pendulum that initially hangs in a downward position.

Open the model.

mdl = 'rlSimplePendulumModel';
open_system(mdl)

Assign the agent block path information, and create rlNumericSpec and rlFiniteSetSpec objects for the observation and action information, respectively. You can use dot notation to assign property values of the rlNumericSpec and rlFiniteSetSpec objects.

agentBlk = [mdl '/RL Agent'];
obsInfo = rlNumericSpec([3 1])
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [3 1]
       DataType: "double"

actInfo = rlFiniteSetSpec([2 1])
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [2x1 double]
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

obsInfo.Name = 'observations';
actInfo.Name = 'torque';

Create the reinforcement learning environment for the Simulink model using information extracted in the previous steps.

env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: []
    UseFastRestart: 'on'

You can also include a reset function using dot notation. For this example, consider randomly initializing theta0 in the model workspace.

env.ResetFcn = @(in) setVariable(in,'theta0',randn,'Workspace',mdl)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: @(in)setVariable(in,'theta0',randn,'Workspace',mdl)
    UseFastRestart: 'on'

Introduced in R2019a