inspectTrainingResult

Plot training information from a previous training session

Since R2021a

    Description

    By default, the train function shows the training progress and results in the Reinforcement Learning Training Monitor during training. If you configure training not to show the monitor, or if you close the monitor after training, you can view the training results by calling the inspectTrainingResult function, which reopens the Reinforcement Learning Training Monitor. You can also use inspectTrainingResult to view the training results for agents saved during training.
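
    As a minimal sketch of this workflow (assuming an agent agent and an environment env already exist in the workspace):

    opts = rlTrainingOptions(Plots="none");  % train without opening the monitor
    stats = train(agent,env,opts);           % train returns the training statistics
    inspectTrainingResult(stats)             % reopen the monitor with those results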

    inspectTrainingResult(trainResults) opens the Reinforcement Learning Training Monitor and plots the training results from a previous training session.

    inspectTrainingResult(agentResults) opens the Reinforcement Learning Training Monitor and plots the training results from a previously saved agent structure.

    Examples

    Plot Results for a Previous Training Session

    For this example, assume that you have trained the agent in the Train Reinforcement Learning Agent in MDP Environment example and subsequently closed the Reinforcement Learning Training Monitor.

    Load the training information returned by the train function.

    load mdpTrainingStats trainingStats

    Reopen the Reinforcement Learning Training Monitor for this training session.

    inspectTrainingResult(trainingStats)

    Plot Training Results for a Saved Agent

    For this example, load the environment and agent for the Train Reinforcement Learning Agent in MDP Environment example.

    load mdpAgentAndEnvironment

    Specify options for training the agent. Configure the SaveAgentCriteria and SaveAgentValue options to save all agents from episode 30 onward.

    trainOpts = rlTrainingOptions;
    trainOpts.MaxStepsPerEpisode = 50;
    trainOpts.MaxEpisodes = 50;
    trainOpts.Plots = "none";
    trainOpts.SaveAgentCriteria = "EpisodeCount";
    trainOpts.SaveAgentValue = 30;

    Train the agent. During training, when the episode count is greater than or equal to 30, a copy of the agent is saved in the savedAgents folder.

    rng("default") % for reproducibility
    trainingResult = train(qAgent,env,trainOpts);

    Load the training results for one of the saved agents. The load command adds both the saved agent and the corresponding training result object to the workspace.

    load savedAgents/Agent50
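
    To see which variables the MAT-file contains, you can also list them using the whos command:

    whos -file savedAgents/Agent50.mat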

    View the training results from the saved training result object.

    inspectTrainingResult(savedAgentResult)

    The Reinforcement Learning Training Monitor shows the training progress up to the episode in which the agent was saved.
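
    Because agents are saved from episode 30 onward, you can inspect the results for any other saved agent in the same way. For example, view the results for the agent saved at episode 40:

    load savedAgents/Agent40
    inspectTrainingResult(savedAgentResult)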

    Input Arguments

    trainResults — Training episode data

    Training episode data, specified as a structure or structure array returned by the train function.
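
    For example, the returned structure includes per-episode fields such as EpisodeIndex and EpisodeReward, so you can also plot the raw training data directly (a simple sketch using the trainingStats variable from the first example):

    plot(trainingStats.EpisodeIndex,trainingStats.EpisodeReward)
    xlabel("Episode")
    ylabel("Episode Reward")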

    agentResults — Saved agent results

    Saved agent results, specified as a structure previously saved by the train function. The train function saves agents when you specify the SaveAgentCriteria and SaveAgentValue options in the rlTrainingOptions object used during training.

    When you load a saved agent, the agent and its training results are added to the MATLAB® workspace as saved_agent and savedAgentResultStruct, respectively. To plot the training data for this agent, use the following command.

    inspectTrainingResult(savedAgentResultStruct)

    For multi-agent training, savedAgentResultStruct contains structure fields with training results for all the trained agents.

    Version History

    Introduced in R2021a