![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514579/image.png)
Tune PI Controller Using Reinforcement Learning
6 views (last 30 days)
嘻嘻
on 18 Oct 2023
Answered: Emmanouil Tzorakoleftherakis
on 23 Oct 2023
How are the initial values of the weights of this neural network determined? If I want to change my PI controller to a PID controller, do I just add another weight to the line initialGain = single([1e-3 2])?
This code is from the demo "Tune PI Controller Using Reinforcement Learning."
initialGain = single([1e-3 2]);
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(initialGain,'ActOutLyr')
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
Can my network be changed to look like the following:
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(randi([-60,60],1,3),'Action')];
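For comparison, here is a sketch of the same change written in the style of the demo snippet, assuming the custom layer accepts a 1-by-3 weight vector and that the observation vector is enlarged to three elements (numObs = 3). The gain values and the layer name 'Action' are purely illustrative, and note that randi returns double integers while the demo initializes the weights as single:
initialGain = single([1e-3 2 1e-3]);   % illustrative [integral proportional derivative] guesses, not from the demo
actorNet = [
    featureInputLayer(numObs)                     % numObs would need to be 3 here
    fullyConnectedPILayer(initialGain,'Action')
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);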
3 Comments
Accepted Answer
Emmanouil Tzorakoleftherakis
on 23 Oct 2023
I also replied to the other thread. The fullyConnectedPILayer is a custom layer provided in the example; you can open it and see how it is implemented. You can certainly add a third weight for the D term, but you will most likely run into other issues (e.g., how to approximate the error derivative).
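To illustrate that caveat, one possible (not from the example) way to feed a derivative term to such a layer would be to add a discrete difference of the error to the observation vector. This is a minimal sketch, assuming a fixed sample time and that the previous error is stored between steps; none of these variable names come from the example:
% Hypothetical backward-difference approximation of the error derivative
Ts = 0.1;                        % sample time (illustrative)
errDot = (err - errPrev)/Ts;     % d(error)/dt approximation
errPrev = err;                   % keep the current error for the next step
% The observation would then be [integral of error; error; errDot], so
% numObs would become 3, matching a three-element gain vector.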
0 Comments
More Answers (0)