Action value exceeds the boundary of the final layer activation function of the actor
Hi,
I'm using a DDPG agent for my RL application in MATLAB R2022a.
I want the action to take values between 0 and 1. To do this, I use a sigmoidLayer as the final layer of the actor. However, the action exceeds the 0-1 boundary. I also tried tanh with
scalingLayer(Scale=0.5,Bias=0.5);
, but it exceeds the boundary again. How is that possible?
Meanwhile, I also tried
actInfo = rlNumericSpec([1 1],LowerLimit=0,UpperLimit=1);
to limit the action. Yes, it limits the action value, but it doesn't scale it; it just acts as a saturation block (like putting a Saturation block in Simulink in front of the action output), so the RL training goes wrong.
How can I achieve an action between 0 and 1?
3 Comments
Kautuk Raj
on 18 Jun 2023
It is unexpected that the sigmoid and tanh functions would produce values outside their respective ranges of [0, 1] and [-1, 1]. However, if you are experiencing this issue, you can enforce the bounds on the output of your actor network by applying a custom layer with element-wise clipping.
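A minimal sketch of such a clipping layer, following the MATLAB custom deep learning layer API (the class name is illustrative):

```matlab
classdef ClipLayer < nnet.layer.Layer
    % Illustrative custom layer that clips its input element-wise to [0,1]
    methods
        function layer = ClipLayer(name)
            layer.Name = name;
            layer.Description = "Element-wise clip to [0,1]";
        end
        function Z = predict(layer, X)
            % Saturate any values outside the bounds
            Z = min(max(X, 0), 1);
        end
    end
end
```

Note that, like the saturation behavior of the action-spec limits, hard clipping has zero gradient outside the bounds, so it should be used with care.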
awcii
on 18 Jun 2023
awcii
on 19 Jun 2023
Answers (1)
Harsh
on 16 Jul 2025
0 votes
Hi @awcii
I understand that you're seeing action values exceed the [0, 1] range even when using "sigmoidLayer" or "tanhLayer" with "scalingLayer". The most probable reason is that the DDPG agent adds exploration noise after the actor network output. This noise bypasses the bounding effect of the final activation layer, so the actual actions can fall outside the desired range. Additionally, using "rlNumericSpec" with "LowerLimit" and "UpperLimit" only clips the final action values; it does not scale or constrain the network's internal outputs, which can interfere with learning by distorting gradients.
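For reference, the magnitude of this post-actor noise can be tuned through the agent options. A sketch, assuming the default Ornstein-Uhlenbeck noise model of the DDPG options (the values shown are arbitrary):

```matlab
% DDPG exploration noise is configured on the agent options object.
agentOpts = rlDDPGAgentOptions;
agentOpts.NoiseOptions.StandardDeviation = 0.1;          % noise magnitude
agentOpts.NoiseOptions.StandardDeviationDecayRate = 1e-5; % anneal over training
% With any nonzero StandardDeviation, this noise is added AFTER the actor's
% final activation, which is why the action can leave [0,1] unless the
% action-spec limits saturate it.
```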
To fix this, you should create a custom noise layer that adds Gaussian noise during training and passes data unchanged during inference. Place this layer just before the final "sigmoidLayer" in your actor network. This ensures that the noise is applied to the pre-activation values, and the "sigmoidLayer" guarantees the final output remains strictly within (0, 1), preserving both proper exploration and stable gradient flow.
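A sketch of such a layer, following the custom-layer API's predict/forward split; the class name and the noise standard deviation are illustrative, and whether the training path invokes "forward" depends on how the agent evaluates the actor:

```matlab
classdef GaussianNoiseLayer < nnet.layer.Layer
    properties
        Sigma  % noise standard deviation (illustrative hyperparameter)
    end
    methods
        function layer = GaussianNoiseLayer(name, sigma)
            layer.Name = name;
            layer.Sigma = sigma;
        end
        function Z = predict(layer, X)
            % Inference: pass the data through unchanged
            Z = X;
        end
        function Z = forward(layer, X)
            % Training: perturb the pre-activation values with zero-mean
            % Gaussian noise; the downstream sigmoidLayer then keeps the
            % final action strictly within (0, 1)
            Z = X + layer.Sigma * randn(size(X), 'like', X);
        end
    end
end
```

If you use this approach, you would also set the agent's own exploration noise standard deviation to zero so the two noise sources do not stack.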
Please refer to the MATLAB Reinforcement Learning Toolbox documentation to learn more about these topics.