How to show the loss change of critic network or actor network when training with DDPG algorithm
How to show the loss change of critic network or actor network when training with DDPG algorithm?
0 Comments
Answers (1)
  Poorna
      
 on 29 Sep 2023
        Hi,  
I understand that you would like to view the change in the loss values of the actor and critic networks of a DDPG agent during training. 
You can achieve this by using the “MonitorLogger” functionality. Follow these steps: 
        1. Create a “monitor” object using the “trainingProgressMonitor” function: 
%create a monitor object 
monitor = trainingProgressMonitor(); 
         2. Create a “logger” object using the “rlDataLogger” function with the “monitor” as input: 
%create a logger 
logger = rlDataLogger(monitor); 
        3. Use the “AgentLearnFinishedFcn” callback property of the “logger” object to log the losses. Create a custom callback function that receives a structure containing the actor and critic losses, as well as other useful information, and returns the data you want to log. 
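A minimal sketch of such a callback is shown below. The “ActorLoss” and “CriticLoss” field names are assumptions; inspect the structure passed to the callback to confirm which fields your agent and release actually provide. 
%attach a custom callback to the logger (the function is defined below) 
logger.AgentLearnFinishedFcn = @logAgentLearnData; 
%callback sketch: place it at the end of your script or in its own file 
%ActorLoss/CriticLoss are assumed field names; check the incoming structure 
function dataToLog = logAgentLearnData(data) 
    dataToLog.ActorLoss  = data.ActorLoss; 
    dataToLog.CriticLoss = data.CriticLoss; 
end 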
        4. At the end of the training, you can access the logged data for further analysis or visualization. 
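For the callbacks to run, pass the “logger” to the “train” function through the “Logger” name-value argument. The sketch below assumes your release supports this argument with a “MonitorLogger”, and the agent, env, and trainOpts variables are placeholders for your own DDPG agent, environment, and training options: 
%pass the logger to train so the logging callbacks are invoked during training 
trainResult = train(agent, env, trainOpts, Logger=logger); 
With a “MonitorLogger”, the logged values are plotted in the Training Progress Monitor window as training runs. 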
For more information on these functions, please refer to the following documentation: 
MonitorLogger: https://www.mathworks.com/help/reinforcement-learning/ref/rl.logging.monitorlogger.html 
trainingProgressMonitor: https://www.mathworks.com/help/deeplearning/ref/deep.trainingprogressmonitor.html 
Hope this helps. 
0 Comments