[ I N ] = size(input)
[ O N ] = size(target)
If net.divideFcn = 'dividetrain', all N points are used for training (Ntrn = N) and the number of training equations is

Ntrneq = Ntrn*O
For an I-H-O MLP, the number of unknown weights to be estimated from the Ntrneq equations is

Nw = (I+1)*H + (H+1)*O
If Ntrneq > Nw, then H <= Hub where
Hub = -1 + ceil( (Ntrneq-O) / (I+O+1))
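For concreteness, a minimal sketch of the Nw and Hub calculation (the values N = 94, I = 1, O = 1 are hypothetical stand-ins; use size(input) and size(target) for your data):

```matlab
% Hypothetical dimensions (replace with your own [I N] = size(input), etc.)
I = 1;  O = 1;  N = 94;
Ntrn   = N;                % dividetrain: all data used for training
Ntrneq = Ntrn*O;           % number of training equations
H      = 10;               % candidate number of hidden nodes
Nw     = (I+1)*H + (H+1)*O            % unknown weights of an I-H-O MLP
Hub    = -1 + ceil( (Ntrneq-O) / (I+O+1) )  % largest H with Ntrneq > Nw
```

With these hypothetical numbers, Hub = 30, so the Hub/10 <= H <= Hub/2 rule of thumb below would suggest trying H in roughly 3 to 15.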
When the data contains noise and measurement errors, it is desired that Ntrneq >> Nw, resulting in H << Hub. If the contamination is not too severe, Hub/10 <= H <= Hub/2 or smaller is a reasonable range in which to look for a good value of H. The smaller, the better.
If higher values of H are necessary, validation stopping (Nval >>1) or regularization (net.trainFcn = 'trainbr') is recommended.
If validation stopping is used, use one of the other divide functions. The resulting default sizes are
[Ntrn/N, Nval/N, Ntst/N] ~ [0.7, 0.15, 0.15]
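For example, the default random split (the ratio values shown are the toolbox defaults and can be changed):

```matlab
% Random train/val/test split; 'divideint' and 'divideblock' are alternatives
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
```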
Bottom line: Starting with H = 1000 is much too high. I recommend trying H = 10:10:100 (10 random weight initializations each ==> 100 trial designs). If any design looks promising, test it on ALL of the data. Be sure to record the state of the random number generator before each design so you can recreate it if the design is a good candidate.
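The 100-trial search above can be sketched as follows (a sketch, not a polished script; x and t stand for your input and target matrices, and the R^2 ranking criterion is one reasonable choice, not the only one):

```matlab
% Sketch of the 10 x 10 trial-design loop
[I, N] = size(x);   [O, ~] = size(t);
Hvec = 10:10:100;   Ntrials = 10;
R2 = zeros(Ntrials, numel(Hvec));      % fraction of target variance modeled
s  = cell(Ntrials, numel(Hvec));       % saved RNG states
for j = 1:numel(Hvec)
    for i = 1:Ntrials
        s{i,j} = rng;                  % record RNG state to recreate the design
        net = feedforwardnet(Hvec(j));
        net.divideFcn = 'dividetrain'; % or use a validation split with larger H
        [net, tr] = train(net, x, t);
        y = net(x);
        R2(i,j) = 1 - mean((t(:)-y(:)).^2) / mean(var(t', 1));
    end
end
% Rank the 100 designs by R2, then recreate a promising one via rng(s{i,j})
```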
Hope this helps.
Q1: Does I = O = 1?
Q2: Are the 10 data sets chosen randomly? If not, why not?
Thank you for formally accepting my answer
Greg