Is it necessary to initialize weights and biases while training a neural network? If so, how should they be initialized?

  • I am using the "patternnet" function for classification. Is it necessary to initialize the weights and biases to particular values? If so, how should they be initialized, and which values should I choose?
  • If I don't initialize them, does the network pick new random values every time I train it?
  • Are the numbers of weights and biases tied to the number of hidden layers? How should this be taken into account if the weights and biases are to be set manually?
I am new to neural networks, so please point me to any available references. Thank you.
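For context, a minimal sketch of the default behaviour (it uses the iris_dataset example data shipped with the toolbox and an arbitrary choice of 10 hidden nodes): patternnet initializes its own weights when it is configured or trained, typically with the Nguyen-Widrow rule, so manual initialization is normally not required, and fixing the random seed makes that initialization repeatable.
[x, t] = iris_dataset;           % example inputs and class-indicator targets
rng(0)                           % fix the RNG so the random initialization repeats
net = patternnet(10);            % 10 hidden nodes (arbitrary for this sketch)
net = configure(net, x, t);      % sizes the layers and initializes the weights
disp(net.IW{1,1})                % input-to-hidden weights chosen automatically
disp(net.b{1})                   % hidden-layer biases chosen automatically
[net, tr] = train(net, x, t);    % training starts from those initial values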
  1 Commento
Tijmen Wartenberg on 10 May 2017
Edited: Tijmen Wartenberg on 10 May 2017
I am also a novice in this field, but I think it depends on the algorithm. If the neural network is trained with stochastic gradient descent, for example, then random initial weights (between 0 and 1) will work just fine, because the weights converge toward a solution over the learning steps. The number of learning steps required is often in a similar range, but it can depend on parameters of your model such as the learning rate and connection strengths; random starting weights also make sure that every layer in the network produces an output for every input pattern.
I am not sure about the term "bias". From what I know, bias is often used in a more Bayesian type of framework. I would advise you to read material from Machine Learning or Computational Neuroscience courses; this can be a textbook or any other online material.
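If you do want to try the uniform random initialization mentioned above, one possible sketch (assuming a configured patternnet; x and t stand for your inputs and targets) is to overwrite the weight and bias arrays directly. In this toolbox the biases are simply the additive offsets of each neuron, stored in net.b.
net = patternnet(10);
net = configure(net, x, t);              % x, t are your inputs and targets
net.IW{1,1} = rand(size(net.IW{1,1}));   % input-to-hidden weights in [0,1]
net.LW{2,1} = rand(size(net.LW{2,1}));   % hidden-to-output weights in [0,1]
net.b{1}    = rand(size(net.b{1}));      % hidden-layer biases in [0,1]
net.b{2}    = rand(size(net.b{2}));      % output-layer biases in [0,1]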


Answers (1)

Greg Heath on 11 May 2017
1. See the patternnet documentation.
help patternnet
and
doc patternnet
2. See my patternnet tutorials in the NEWSGROUP and ANSWERS. Search with
greg patternnet
and
greg patternnet tutorial
3. To determine the training goal, I use
vart1 = mean(var(target',1))
MSEgoal = 0.01*vart1
This yields a training-subset R-squared of 0.99.
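A sketch of how that goal might be applied (here target is the class-indicator matrix used by patternnet, and the 10 hidden nodes are only a placeholder for the value found in step 5):
vart1   = mean(var(target', 1));   % average per-target variance
MSEgoal = 0.01 * vart1;            % 1% of that variance
net = patternnet(10);              % placeholder hidden-node count (see step 5)
net.trainParam.goal = MSEgoal;     % training stops once MSE <= MSEgoal
% Reaching the goal gives a training R-squared of 1 - MSEgoal/vart1 = 0.99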
4. To avoid overtraining an overfit net (i.e., a net with more weights than the data can support), minimize the number of hidden nodes needed to reach that goal.
5. Find a set of initial random weights sufficient to accomplish 4. I do this with a double-loop search (see the sketch after this list):
a. The outer loop determines the number of hidden nodes.
b. The inner loop searches over candidate sets of initial random weights.
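A rough sketch of such a double-loop search (the variable names Hmax, Ntrials, x, and t are illustrative, not Greg's code):
Hmax    = 20;                           % largest hidden-node count to try
Ntrials = 10;                           % random initializations per candidate H
vart1   = mean(var(t', 1));
MSEgoal = 0.01 * vart1;
bestR2  = -Inf;
for H = 1:Hmax                          % outer loop: number of hidden nodes
    for trial = 1:Ntrials               % inner loop: initial random weights
        rng(trial)                      % different but repeatable initialization
        net = patternnet(H);
        net.trainParam.goal = MSEgoal;
        net.trainParam.showWindow = false;
        [net, tr] = train(net, x, t);
        R2trn = 1 - tr.best_perf/vart1; % training-subset R-squared
        if R2trn > bestR2
            bestR2  = R2trn;
            bestNet = net;              % remember the best net found so far
        end
    end
    if bestR2 >= 0.99                   % stop at the smallest H that reaches the goal
        break
    end
end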
Hope this helps.
Thank you for formally accepting my answer
Greg
