reset

Reset environment, agent, experience buffer, or policy object

Since R2022a

    Description

    initialObs = reset(env) resets the specified MATLAB® environment to an initial state and returns the resulting initial observation value.

    Do not use reset for Simulink® environments, which are implicitly reset when running a new simulation. Instead, customize the reset behavior using the ResetFcn property of the environment.
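
    For example, for a Simulink environment you can randomize initial conditions at the start of each simulation by setting ResetFcn. The following is a minimal sketch; the model name "myModel", the agent block path, the workspace variable x0, and previously created obsInfo and actInfo specifications are assumptions for illustration.

    % Sketch: customize reset behavior for a Simulink environment.
    % The model name, block path, and variable x0 are hypothetical.
    env = rlSimulinkEnv("myModel","myModel/RL Agent",obsInfo,actInfo);

    % ResetFcn receives and returns a Simulink.SimulationInput object.
    % Here it randomizes the variable x0 before each simulation.
    env.ResetFcn = @(in) setVariable(in,"x0",0.1*randn);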

    reset(agent) resets the specified agent. Resetting a built-in agent performs the following actions, if applicable.

    • Empty the experience buffer.

    • Set recurrent neural network states of actor and critic networks to zero.

    • Reset the states of any noise models used by the agent.

    agent = reset(agent) also returns the reset agent as an output argument.

    resetPolicy = reset(policy) returns the policy object resetPolicy in which any recurrent neural network states are set to zero and any noise model states are set to their initial conditions. This syntax has no effect if the policy object does not use a recurrent neural network and does not have a noise model with state.
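
    For example, resetting a policy that carries a noise model reinitializes the noise state. The following is a minimal sketch using an additive-noise policy built on a single-layer deterministic actor; the network and specifications are assumptions for illustration.

    % Sketch: reset a policy whose noise model has state.
    obsInfo = rlNumericSpec([4 1]);
    actInfo = rlNumericSpec([1 1]);

    % Minimal deterministic actor with one fully connected layer.
    actorNet = dlnetwork([featureInputLayer(4) fullyConnectedLayer(1)]);
    actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);

    % Additive-noise policy; its noise model accumulates state during use.
    noisyPolicy = rlAdditiveNoisePolicy(actor);

    % reset returns the noise model state to its initial conditions.
    noisyPolicy = reset(noisyPolicy);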

    reset(buffer) resets the specified replay memory buffer by removing all the experiences.

    Examples

    Reset Environment

    Create a reinforcement learning environment. For this example, create a continuous-time cart-pole system.

    env = rlPredefinedEnv("CartPole-Continuous");

    Reset the environment and return the initial observation.

    initialObs = reset(env)
    initialObs = 4×1
    
             0
             0
        0.0315
             0
    
    

    Reset Agent

    Create observation and action specifications.

    obsInfo = rlNumericSpec([4 1]);
    actInfo = rlNumericSpec([1 1]);

    Create a default DDPG agent using these specifications. To obtain default actor and critic networks that contain a recurrent layer, specify an agent initialization options object with UseRNN set to true.

    initOptions = rlAgentInitializationOptions(UseRNN=true);
    agent = rlDDPGAgent(obsInfo,actInfo,initOptions);

    Reset the agent. Doing so empties the experience buffer, sets the recurrent neural network states of the actor and critic to zero, and resets the state of the exploration noise model.

    agent = reset(agent);

    Reset Experience Buffer

    Create observation and action specifications.

    obsInfo = rlNumericSpec([4 1],LowerLimit=-10,UpperLimit=10);
    actInfo = rlNumericSpec([1 1],LowerLimit=-2,UpperLimit=2);

    Create a replay memory experience buffer.

    buffer = rlReplayMemory(obsInfo,actInfo,10000);

    Add experiences to the buffer. For this example, add 20 random experiences.

    for i = 1:20
        expBatch(i).Observation = {obsInfo.UpperLimit.*rand(4,1)};
        expBatch(i).Action = {actInfo.UpperLimit.*rand(1,1)};
        expBatch(i).NextObservation = {obsInfo.UpperLimit.*rand(4,1)};
        expBatch(i).Reward = 10*rand(1);
        expBatch(i).IsDone = 0;
    end
    % Flag the last experience as the end of the episode
    expBatch(20).IsDone = 1;
    
    append(buffer,expBatch);

    Reset and clear the buffer.

    reset(buffer)

    Reset Policy

    Create observation and action specifications.

    obsInfo = rlNumericSpec([4 1]);
    actInfo = rlFiniteSetSpec([-1 0 1]);

    To approximate the Q-value function within the critic, use a deep neural network. Create each network path as an array of layer objects.

    % Create Paths
    obsPath = [featureInputLayer(4) 
               fullyConnectedLayer(1,Name="obsout")];
    
    actPath = [featureInputLayer(1) 
               fullyConnectedLayer(1,Name="actout")];
    
    comPath = [additionLayer(2,Name="add")  ...
               fullyConnectedLayer(1)];
    
    % Create dlnetwork object and add layers
    net = dlnetwork;
    net = addLayers(net,obsPath); 
    net = addLayers(net,actPath); 
    net = addLayers(net,comPath);
    net = connectLayers(net,"obsout","add/in1");
    net = connectLayers(net,"actout","add/in2");
    
    % Initialize network
    net = initialize(net);
    
    % Display a summary of the network, including the number of learnable parameters
    summary(net)
       Initialized: true
    
       Number of learnables: 9
    
       Inputs:
          1   'input'     4 features
          2   'input_1'   1 features
    

    Create a Q-value function approximator based on the network. Then, create an epsilon-greedy policy object that uses the approximator.

    critic = rlQValueFunction(net,obsInfo,actInfo);
    policy = rlEpsilonGreedyPolicy(critic)
    policy = 
      rlEpsilonGreedyPolicy with properties:
    
                QValueFunction: [1x1 rl.function.rlQValueFunction]
            ExplorationOptions: [1x1 rl.option.EpsilonGreedyExploration]
                 Normalization: ["none"    "none"]
        UseEpsilonGreedyAction: 1
            EnableEpsilonDecay: 1
               ObservationInfo: [1x1 rl.util.rlNumericSpec]
                    ActionInfo: [1x1 rl.util.rlFiniteSetSpec]
                    SampleTime: -1
    
    

    Reset the policy.

    policy = reset(policy);

    Input Arguments

    env

    Reinforcement learning environment, specified as an environment object, such as an environment created using rlPredefinedEnv or rlFunctionEnv.

    agent

    Reinforcement learning agent, specified as a built-in agent object, such as an rlDQNAgent, rlDDPGAgent, or rlPPOAgent object.

    Note

    agent is a handle object, so it is reset whether it is returned as an output argument or not. For more information about handle objects, see Handle Object Behavior.
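
    For example, both of the following calls reset the same underlying agent; returning the output is optional.

    % Because agent is a handle object, these calls have the same effect.
    reset(agent);          % resets the agent in place
    agent = reset(agent);  % also returns the same handle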

    buffer

    Experience buffer, specified as a replay memory object, such as an rlReplayMemory or rlPrioritizedReplayMemory object.

    policy

    Reinforcement learning policy, specified as a policy object, such as an rlEpsilonGreedyPolicy or rlAdditiveNoisePolicy object.

    Output Arguments

    initialObs

    Initial environment observation after reset, returned as one of the following:

    • Array with dimensions matching the observation specification for an environment with a single observation channel.

    • Cell array with length equal to the number of observation channels for an environment with multiple observation channels. Each element of the cell array contains an array with dimensions matching the corresponding element of the environment observation specifications. For an illustration, see the sketch below.
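
    The following minimal sketch illustrates the multichannel case. It assumes the predefined pendulum-with-image environment, which has an image observation channel and a scalar angular-rate channel.

    % Sketch: reset an environment that has two observation channels.
    imgEnv = rlPredefinedEnv("SimplePendulumWithImage-Discrete");
    obs = reset(imgEnv);    % obs is a cell array with one element per channel
    size(obs{1})            % image observation
    size(obs{2})            % scalar observation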

    resetPolicy

    Reset policy, returned as a policy object of the same type as policy, with any recurrent neural network states set to zero and any noise model states set to their initial conditions.

    agent

    Reset agent, returned as an agent object. Note that agent is a handle object, so it is reset in place whether it is returned as an output argument or not. For more information about handle objects, see Handle Object Behavior.

    Version History

    Introduced in R2022a
