Learning rate schedule - Reinforcement Learning Toolbox

24 views (last 30 days)
The current version of Reinforcement Learning Toolbox requires setting a fixed learning rate for both the actor and critic neural networks.
Since I would like to try using a variable (decreasing) learning rate, I was thinking that maybe I could manually update the value of this property after each simulation, using the "ResetFcn" handle or something like that, in order to decrease its value gradually. Is it possible to perform this operation somehow?

Answers (1)

Shubham on 20 Mar 2024
Hi Federico,
Yes, adjusting the learning rate dynamically during training in reinforcement learning (RL) scenarios with MATLAB's Reinforcement Learning Toolbox can be a useful technique for improving the convergence of the training process. Although the toolbox does not natively support an automatic decaying learning rate for the training of actor and critic networks, you can implement a custom approach to modify the learning rate at various points during the training process.
One way to achieve this is by using a custom training loop or modifying the training options within callback functions, such as the ResetFcn. However, a more straightforward approach might involve directly accessing and modifying the options of the agent between training episodes. Here's a conceptual outline of how you might implement a decaying learning rate using a custom training loop or callbacks:
Step 1: Define Your Custom Training Loop or Callback Function
Start by defining a loop or function that lets you adjust the learning rates of the optimizers for both the actor and critic networks. The exact implementation depends on the type of agent you are using (e.g., DDPG, SAC, PPO, etc.), as the actor and critic networks may be accessed differently.
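As a starting point, here is a sketch of how the optimizer learning rates can be set and later reassigned. The property paths assume an agent whose options expose `rlOptimizerOptions` (e.g., `rlDDPGAgentOptions` in R2022a or later); `actorNet`, `criticNet`, and `Ts` are placeholders for objects you would define elsewhere.

```matlab
% Sketch only: assumes rlOptimizerOptions-style agent options (R2022a+).
agentOpts = rlDDPGAgentOptions('SampleTime', Ts);   % Ts defined elsewhere
agentOpts.ActorOptimizerOptions.LearnRate  = 1e-4;  % initial actor rate
agentOpts.CriticOptimizerOptions.LearnRate = 1e-3;  % initial critic rate
agent = rlDDPGAgent(actorNet, criticNet, agentOpts);

% Later, between training runs, the same properties can be reassigned:
agent.AgentOptions.ActorOptimizerOptions.LearnRate  = 5e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 5e-4;
```

For other agent types the option class differs (e.g., `rlSACAgentOptions`, `rlPPOAgentOptions`), but recent releases use the same `ActorOptimizerOptions`/`CriticOptimizerOptions` structure.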
Step 2: Adjust Learning Rates Dynamically
During training, you can adjust the learning rates based on your criteria (e.g., after a certain number of episodes or based on the performance).
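One way to realize this is to split training into short segments and decay the learning rates between calls to `train`. The sketch below assumes `agent` and `env` already exist and that the agent options expose `rlOptimizerOptions` (R2022a or later); the decay factor, floor, and segment length are illustrative values.

```matlab
% Sketch only: segmented training with a multiplicative learning-rate decay.
decay       = 0.95;   % decay factor applied after each segment
minLR       = 1e-6;   % floor so the rate never reaches zero
numSegments = 20;     % number of training segments

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 50, ...                     % episodes per segment
    'StopTrainingCriteria', 'EpisodeCount', ...
    'StopTrainingValue', 50);

for k = 1:numSegments
    trainingStats = train(agent, env, trainOpts);  % train one segment
    % Decay both learning rates, respecting the floor:
    agent.AgentOptions.ActorOptimizerOptions.LearnRate = ...
        max(decay * agent.AgentOptions.ActorOptimizerOptions.LearnRate, minLR);
    agent.AgentOptions.CriticOptimizerOptions.LearnRate = ...
        max(decay * agent.AgentOptions.CriticOptimizerOptions.LearnRate, minLR);
end
```

Because the agent carries its experience buffer and network weights across calls to `train`, resuming in a loop like this behaves as one continuous training run with a stepwise-decaying schedule.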
Using Callbacks
If the training workflow you're using supports callbacks, you could adjust the learning rates there. Note, however, that the ResetFcn of a Simulink environment only receives (and must return) a Simulink.SimulationInput object, so it cannot modify the agent itself; splitting training into multiple calls to train, as described above, avoids this limitation.
This approach requires manual intervention in the training process and a good understanding of how learning rates affect the convergence of RL algorithms. It's also essential to test and validate that the dynamic adjustment of learning rates leads to improved training outcomes. Keep in mind that the exact implementation details can vary based on the MATLAB version and the specific RL agent you are using.
  1 Comment
Federico Toso on 20 Mar 2024
Thank you for the answer. I've tried with ResetFcn, but I don't know how to adjust the learning rates within the function, since it only accepts a single argument of type Simulink.SimulationInput.
Could you please give me a hint?

Sign in to comment.
