Training option! How can we use a newly defined algorithm (as a training function) to train a deep neural network?

Hi everyone. This is the first time I am designing a deep network in MATLAB. I do not want to use SGD, Adam, or the default solver. I have a new algorithm (a proposed unconstrained optimizer) that I am interested in using to train the network, so I can check its performance during training. How can I do it? Is it possible, and if so, how?
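One way to do this is with a custom training loop: build the model as a dlnetwork, evaluate the loss and gradients with dlfeval and dlgradient, and apply your own update rule where sgdmupdate or adamupdate would normally go. Below is a minimal sketch under that assumption; modelGradients and myOptimizer are placeholder names (not toolbox functions), and the data layout and hyperparameter values are assumptions.

% Assumes image data XTrain (H-by-W-by-C-by-N), categorical labels YTrain,
% a dlnetwork object dlnet ending in a softmax layer, and a user-supplied
% update rule myOptimizer(w,g,loss) that returns the updated parameter.
numEpochs = 10;                                   % assumed value
miniBatchSize = 128;                              % assumed value
numObservations = size(XTrain,4);

for epoch = 1:numEpochs
    idx = randperm(numObservations);
    for i = 1:floor(numObservations/miniBatchSize)
        batchIdx = idx((i-1)*miniBatchSize+1 : i*miniBatchSize);
        X = dlarray(single(XTrain(:,:,:,batchIdx)),'SSCB');
        T = onehotencode(YTrain(batchIdx),2)';    % C-by-B one-hot targets

        % Evaluate loss and gradients via automatic differentiation.
        [loss,gradients] = dlfeval(@modelGradients,dlnet,X,T);

        % Apply the custom update to every learnable parameter; loss is
        % captured by the anonymous function, so dlupdate only pairs up
        % parameters with their gradients.
        dlnet = dlupdate(@(w,g) myOptimizer(w,g,loss),dlnet,gradients);
    end
end

function [loss,gradients] = modelGradients(dlnet,X,T)
    Y = forward(dlnet,X);                         % assumed to output probabilities
    loss = crossentropy(Y,T);
    gradients = dlgradient(loss,dlnet.Learnables);
end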

Accepted Answer

Srivardhan Gadila on 30 Oct 2020
2 Comments
MAHSA YOUSEFI on 10 Nov 2020
Dear Srivardhan,
Following up on your answer about using a custom training loop, I have another problem.
I am trying to train a CNN with my own optimizer through a custom training loop:
[loss,gradient] = dlfeval(@modelgradient,dlnet,XTrain,YTrain)
myFun = @(dlnet,gradient,loss) myOptimizer(dlnet,gradient,loss,...)
dlnet = dlupdate(myFun,dlnet,gradient,loss)
My optimizer needs w (the current parameter vector), g (its corresponding gradient vector), f (the corresponding loss value) and … as inputs. It performs many computations with w, g, and f internally to produce w = w + p, where p is a step vector that my optimizer has to compute, and by which I update w.
I need a way to convert the parameters and gradients from their dl format to plain vectors for those computations inside my optimizer, and then, to use the syntax above, to convert the vectors back to the dl formats required in the loop and in my optimizer. This back-and-forth is necessary for me to use the training loop. Can you help me find functions in the toolbox to do this (vector to table, because the gradients and dlnet's parameters are tables with dlarray cells, and vice versa), or any other solution?
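One possible way to do this conversion yourself: both dlnet.Learnables and the gradients returned by dlgradient are tables whose Value column holds dlarray cells, so you can loop over that column, use extractdata to get plain numeric arrays, stack them into one long vector, and later reshape the updated vector back into the table. The sketch below assumes unformatted single-precision parameters; parametersToVector and vectorToParameters are hypothetical helper names, not toolbox functions.

function w = parametersToVector(learnables)
% Flatten a Learnables (or gradients) table into one numeric column vector.
    vals = learnables.Value;                 % cell array of dlarray parameters
    parts = cell(numel(vals),1);
    for k = 1:numel(vals)
        v = extractdata(vals{k});            % strip the dlarray wrapper
        parts{k} = double(v(:));             % column slice of this parameter
    end
    w = cat(1,parts{:});
end

function learnables = vectorToParameters(w,learnables)
% Write the entries of vector w back into the table, keeping the original
% size and dlarray type of each parameter.
    offset = 0;
    for k = 1:numel(learnables.Value)
        p = learnables.Value{k};
        n = numel(p);
        newVal = reshape(w(offset+1:offset+n),size(p));
        learnables.Value{k} = dlarray(single(newVal));
        offset = offset + n;
    end
end

With helpers like these, the body of the training loop could look roughly like:

w = parametersToVector(dlnet.Learnables);
g = parametersToVector(gradients);
wNew = myOptimizer(w,g,loss);                % your solver works on plain vectors
dlnet.Learnables = vectorToParameters(wNew,dlnet.Learnables);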
