Why is my custom loss function extremely slow?
Hi,
I would like to train a physics-informed neural network (PINN). I have used the following example as a basis: https://de.mathworks.com/help/deeplearning/ug/solve-partial-differential-equations-with-lbfgs-method-and-deep-learning.html
The neural network itself is a fairly simple feedforward network. I have created the following custom loss function (here n_DOF = 3, nMiniBatch = n_B = 500):
function [loss,gradients] = modelLossModalMDOF(net,Freq,LAMBDA,PHI,M,k,logicParam)
% === Input
% Freq       - (n_DOF,n_B)        natural frequencies (dlarray)
% LAMBDA     - (n_DOF,n_DOF,n_B)  natural frequency (eigenvalue) matrices (numeric array)
% PHI        - (n_DOF,n_DOF,n_B)  mode shapes (numeric array)
% M          - (n_DOF,n_DOF)      mass matrix (numeric array)
% k          - (1,n_DOF)          true stiffnesses (numeric array)
% logicParam - (1,n_DOF)          vector flagging the DOFs to be estimated (numeric array)
% === Output
% loss       - (1,1) PINN loss
% gradients  - (...) gradients

% Initialization
nMiniBatch = size(Freq,2);  % mini-batch size
dof = size(M,1);            % number of DOFs
k_mod = dlarray(nan(nMiniBatch,dof),"BC");
tempLoss = dlarray(nan(dof,dof));
f = dlarray(nan(dof,nMiniBatch*dof));

% Prediction
kPred = forward(net,Freq);

% Loop over all models in the batch
for j = 1:nMiniBatch
    % Assemble the stiffness vector: predicted values for the DOFs to be
    % estimated, known values for the rest
    counter = 1;
    for j2 = 1:dof
        if logicParam(j2) == 1
            k_mod(j2,j) = kPred(counter,j);
            counter = counter + 1;
        else
            k_mod(j2,j) = k(j2);
        end
    end
    % Global stiffness matrix
    K_mod = dlgenK_MDOF(k_mod(:,j));
    % Eigenvalue-problem residual (zero for the correct stiffnesses)
    for j2 = 1:dof
        f(:,j2+dof*(j-1)) = (K_mod - LAMBDA(j2,j2,j)*M)*PHI(:,j2,j);
    end
end

% Set the data format again
f = dlarray(f,"CB");

% Eigenvalue-problem loss
zeroTarget = zeros(size(f),"like",f);
loss = l2loss(f,zeroTarget);
gradients = dlgradient(loss,net.Learnables);
end
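For reference, I evaluate this loss in a custom training loop along the lines of the linked example. A simplified sketch (assuming Adam via dlfeval/adamupdate; the iteration count and learning rate shown are placeholders, not my actual settings):

% Simplified training-loop sketch (placeholder hyperparameters)
numIterations = 1000;   % placeholder
learnRate = 1e-3;       % placeholder
averageGrad = [];
averageSqGrad = [];
for iteration = 1:numIterations
    % Evaluate loss and gradients with automatic differentiation
    [loss,gradients] = dlfeval(@modelLossModalMDOF,net,Freq,LAMBDA,PHI,M,k,logicParam);
    % Adam update step
    [net,averageGrad,averageSqGrad] = adamupdate(net,gradients, ...
        averageGrad,averageSqGrad,iteration,learnRate);
end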
I have noticed that evaluating the loss function is extremely slow, especially the gradient calculation: a few iterations take minutes. The loss does not really decrease (it stays on the order of 10e+6). During training, more and more RAM is used, so after some time I reach 90% utilization even with 32 GB of RAM.
I have already tried ADAM and L-BFGS. Is there a way to speed up the training significantly?
Thank you in advance!
3 Comments
Venu on 20 Nov 2023
Edited: Venu on 20 Nov 2023
If you have implemented your loss function with the ADAM optimizer, try adjusting the learning rate.
If you have used the L-BFGS optimizer, check its parameters: adjust the maximum number of iterations and the convergence tolerance and see whether that affects the training speed.
If the issue still persists, feel free to provide the code you used to run the ADAM and L-BFGS optimizers with your loss function.
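For example, in a custom training loop the Adam learning rate is the learnRate argument of adamupdate, and if your L-BFGS setup follows the fmincon-based variant of the linked example, the iteration limit and tolerance are set through optimoptions. A sketch with purely illustrative values:

% Adam: pass a smaller learning rate to adamupdate
% (net, gradients, averageGrad, averageSqGrad, iteration come from the training loop)
learnRate = 1e-4;   % illustrative value
[net,averageGrad,averageSqGrad] = adamupdate(net,gradients, ...
    averageGrad,averageSqGrad,iteration,learnRate);

% L-BFGS via fmincon (as in the fmincon-based example): limit the iteration
% count and adjust the tolerance (illustrative values)
options = optimoptions("fmincon", ...
    HessianApproximation="lbfgs", ...
    MaxIterations=1000, ...
    MaxFunctionEvaluations=1000, ...
    OptimalityTolerance=1e-5, ...
    SpecifyObjectiveGradient=true);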