MBPO with Simulink env: will the reward defined in the Simulink model overwrite the rewardFcn handle defined in the .m file?
    4 views (last 30 days)
  
    
I am currently using MATLAB R2023a. In the MBPO for cart-pole example, the reward function and isDone function are defined in .m files.
This is the corresponding code in the example:
generativeEnv = rlNeuralNetworkEnvironment(obsInfo,actInfo, ...
    [transitionFcn,transitionFcn2,transitionFcn3], ...
    @myRewardFunction,@myIsDoneFunction);
Now I want to use a Simulink model. Will the reward defined in the Simulink model overwrite the rewardFcn handle defined in the .m file?
0 Comments
Answers (1)
  Yatharth
 on 11 Oct 2023
        Hi Bin, 
I understand that you have a custom "Reward" and "IsDone" function defined in MATLAB, and you have created an environment using the "rlNeuralNetworkEnvironment" function. 
Since you mention that you have also defined a reward function in the Simulink model, I am curious how you achieved that.
That said, the reward function defined in the Simulink model will not overwrite the reward function defined in the .m file. In the code you provided, the reward function defined in the .m file is explicitly passed as an argument to the “rlNeuralNetworkEnvironment” constructor, and it is this handle that the “rlNeuralNetworkEnvironment” object calls to compute the reward during training and simulation, because the reward is computed in the environment object itself rather than in the Simulink model.
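For reference, here is a minimal sketch of what such a reward function might look like (the function name, observation layout, and thresholds are assumptions based on the cart-pole example, not taken from your files):

```matlab
% Sketch only: names, indices, and thresholds below are assumed.
% Whatever this function returns is what the neural-network environment
% uses; a reward computation inside a Simulink model is not consulted
% by the rlNeuralNetworkEnvironment object.
function reward = myRewardFunction(obs, action, nextObs)
    % obs, nextObs: cell arrays of observation channels (batch in columns)
    x     = nextObs{1}(1,:);    % cart position (assumed channel layout)
    theta = nextObs{1}(3,:);    % pole angle (assumed channel layout)
    failed = abs(x) > 2.4 | abs(theta) > pi/12;   % assumed failure limits
    reward = ones(size(failed));                  % +1 for each surviving step
    reward(failed) = -5;                          % penalty when an episode fails
end
```

The environment is then created exactly as in your snippet, with @myRewardFunction passed after the transition models; changing the reward blocks inside a Simulink model has no effect on what this handle returns.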
You can refer to the following page to check your reward function in the simulation. 
I hope this helps.
0 Comments