
Why do I get "Encountered complex value when computing gradient..." error in my custom network training

3 views (last 30 days)
I'm training a network with a custom loss function, but I get this error when training the network:
Error using dlarray/dlgradient
Encountered complex value when computing gradient with respect to an output of fullyconnect. Convert all outputs of fullyconnect to real.
The following is my modelLoss function:
function [loss,gradients] = modelLoss(net,X,A)
[Nt,Ng] = size(A);
[~,~,miniBatchSize] = size(X);
[Y1,Y2] = forward(net,X); % the network output
Nrf = size(Y1,1)/2;
% convert the real-valued outputs to a complex-valued vector and matrix: f1, F2
f1_split = reshape(Y1,[Nrf,2,miniBatchSize]);
f1 = squeeze(complex(f1_split(:,1,:),f1_split(:,2,:)));
F2_phase = reshape(Y2,[Nt,Nrf,miniBatchSize]);
F2 = exp(1i*F2_phase)/sqrt(Nt);
% Compute loss.
loss = dlarray(0);
for batch = 1:miniBatchSize % calculate the loss (as temp_loss) in every batch
f = F2(:,:,batch)*f1(:,batch); % f is a complex-valued vector
temp_loss = sqrt(sum( ( abs(A'*f).^2-X(:,:,batch) ).^2 )); % temp_loss should always be real
% A above is a complex matrix of size Ng x Nt
% X is a real-valued vector of size Ng x 1
loss = loss + temp_loss;
end
loss = loss/(miniBatchSize*Ng);
% Compute gradients.
gradients = dlgradient(loss,net.Learnables);
end
Then I update the loss and calculate the gradients using dlfeval() as usual:
[loss,gradients] = dlfeval(@modelLoss,net,X,A);
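(For reference, a typical custom training loop around this call looks roughly like the sketch below; numEpochs, avgG, avgSqG, and iter are placeholder names, not code from this post.)
avgG = []; avgSqG = []; iter = 0; % Adam state, initialized empty
for epoch = 1:numEpochs
    iter = iter + 1;
    [loss,gradients] = dlfeval(@modelLoss,net,X,A); % evaluate the traced loss
    [net,avgG,avgSqG] = adamupdate(net,gradients,avgG,avgSqG,iter); % update the learnables
end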
I believe the traced loss is real-valued at every step of the calculation.
Please help me find out where the problem is, thanks!
1 Comment
Chun-Yen Chuang on 26 May 2024
Edited: Chun-Yen Chuang on 26 May 2024
Update: I just figured out how to fix this by simply taking the real part of the network output:
Y1 = real(Y1);
Y2 = real(Y2);
but I'm still confused about why the fully connected layer produces complex-valued output...
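(In context, the workaround sits right after the forward call in modelLoss; a sketch using the same variable names as the question:)
[Y1,Y2] = forward(net,X); % the network output
Y1 = real(Y1); % strip any imaginary part before building f1
Y2 = real(Y2); % strip any imaginary part before building F2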


Answers (1)

surya venu on 19 Jun 2024
Hi,
The error you encountered indicates that the automatic differentiation framework does not support complex values directly when computing gradients. This limitation arises because many deep learning frameworks are optimized for operations on real numbers, since the most common neural network components (activation functions, loss functions, optimization algorithms) are defined in the real number domain.
In your custom loss function, you explicitly convert parts of your network's output into complex numbers for further processing:
f1 = squeeze(complex(f1_split(:,1,:),f1_split(:,2,:)));
F2 = exp(1i*F2_phase)/sqrt(Nt);
This conversion is integral to your model's operation, as you're working with complex numbers to compute the loss. However, when the gradient computation occurs (dlgradient(loss,net.Learnables)), the presence of complex values causes the error, because the underlying gradient computation does not support complex derivatives directly.
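If you want the traced computation to stay real end-to-end, one alternative (a sketch of a standard trick, not something from the original post) is to carry the real and imaginary parts as separate real arrays, for example for the product f = F2*f1:
function [yr,yi] = complexMatVec(Mr,Mi,xr,xi)
% Hypothetical helper: y = (Mr + 1i*Mi)*(xr + 1i*xi) computed with real
% arithmetic only and returned as separate real and imaginary parts, so
% automatic differentiation never encounters a complex value.
yr = Mr*xr - Mi*xi; % real part of the product
yi = Mr*xi + Mi*xr; % imaginary part of the product
end
Applying the same decomposition to A'*f, the squared magnitude abs(A'*f).^2 becomes yr.^2 + yi.^2 on the resulting real parts, so the loss stays real throughout.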
Why might the fully connected layer produce complex outputs?
In your original code, the fully connected layer itself does not directly produce complex outputs. Instead, it outputs real numbers that you later convert into complex numbers as part of your processing. The fully connected layer (or any standard layer in most deep learning frameworks) operates with real-valued weights and biases and expects real-valued inputs, producing real-valued outputs accordingly. Any complex arithmetic or representation is a result of post-processing applied to these outputs, as you've done in your model.
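You can verify this directly. The following is a minimal sketch with made-up sizes (not from the original post): the fullyconnect output is real, and a complex value appears only after an explicit conversion.
X = dlarray(rand(4,8),"CB"); % real-valued input: 4 channels x 8 observations
W = rand(6,4); b = rand(6,1); % real-valued weights and bias
Y = fullyconnect(X,W,b); % output of the fully connected operation
isreal(extractdata(Y)) % true: the layer itself produces real output
Yc = complex(Y,Y); % complexity introduced only by post-processing
isreal(extractdata(Yc)) % false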
Hope it helps.

Release

R2022b
