Future value prediction with a neural network and the correct input/target data format
Hello, could anyone explain how to do the following with the MATLAB neural network NARX method? I have six variables (they depend on each other), one value of each per day, for 10 days:
- day1 x1, x2, x3, x4, x5, x6
- day2 x1, x2, x3, x4, x5, x6
- ..........................
- day10 x1, x2, x3, x4, x5, x6
I want to predict these six variables for the 11th day using a MATLAB neural network, so the prediction is:
- day11 y1, y2, y3, y4, y5, y6
I am just starting to work with neural networks, so I know that I have to use
ntstool and select the NARX method, but I got stuck on the right data format in the MATLAB workspace variables table. Could anyone please explain how to enter the input and target data in the right format for this case, so that I can simulate it?
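For illustration, one common arrangement is to put the six variables in the rows of a matrix, one column per day, and convert it to the cell-array sequence format that ntstool, narnet, and narxnet expect. A minimal sketch with made-up placeholder values (the variable names here are hypothetical):

% Hypothetical 6-by-10 matrix: rows = the six variables, columns = days 1..10
x = rand(6,10);        % placeholder numbers, for illustration only

% Convert to the sequence (cell-array) format used by the time-series tools:
% T is a 1-by-10 cell array and T{k} is the 6-by-1 column of values for day k
T = con2seq(x);

% Because the six variables are themselves the series to be predicted,
% T would be selected as the TARGET series in ntstool (see the accepted answer:
% with no separate external input, this is a NAR rather than a NARX problem)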
3 Comments
AKHILA GOUDA
on 20 Sep 2017
Hello sir, if you got your answer, could you please help me? I have the same problem: how can I predict the next day's data? Thank you.
Greg Heath
on 20 Sep 2017
Try searching for
narx greg
in both
NEWSREADER and ANSWERS.
Hope this helps.
Greg
Accepted Answer
Greg Heath
on 30 May 2013
Your inputs are outputs. Therefore you should be using NAR, not NARX.
How many days, weeks, months or years of data do you have?
Did you look at the output auto- and cross-correlation functions to determine that you need feedback lags of 10 days?
Hope this helps.
Greg
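A minimal open-loop NAR sketch along these lines might look as follows, assuming T is a 1-by-N cell array in which each cell holds the 6-by-1 vector for one day; the delay and hidden-layer choices are illustrative, and the autocorrelation check mentioned above would normally guide FD:

FD = 1:2;                    % feedback delays - confirm with the autocorrelation functions
H  = 10;                     % hidden nodes (illustrative)
net = narnet(FD,H);          % NAR: the series feeds back on itself, no external input

[Xs,Xi,Ai,Ts] = preparets(net,{},{},T);   % shift the series to account for the delays
[net,tr] = train(net,Xs,Ts,Xi,Ai);
Ys = net(Xs,Xi,Ai);          % open-loop (one-step-ahead) outputs
perf = perform(net,Ts,Ys)    % mean-squared error on the delay-adjusted targets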
1 Comment
Kranthi Kumar
on 30 Oct 2015
Sir, I have the same kind of query. I have 5 variables and I need to predict those 5 variables in the future. I have 259 data points for 259 days. What would my input and target values be? The 5 variables are somewhat interrelated. I am using the nts tool with NAR. Please help me with this.
More Answers (8)
Greg Heath
on 5 Jun 2013
Edited: Greg Heath
on 5 Jun 2013
This is an example of using a double loop to choose as small a number of hidden nodes as possible, to mitigate overtraining an overfit net, and to mitigate the failure of random initial weights by obtaining multiple designs.
To make it easier to understand, I used 'dividetrain', which is valid for this data because the number of training equations, Neq, is much greater than the number of unknown weights that have to be estimated, Nw.
For small data sets where Neq >> Nw is not possible, regularization (see help mse and doc mse) or validation-set stopping with nonrandom data division (see help divideblock and help divideind) should be used. If validation stopping is used, the validation performance is used to choose the best net; a completely unbiased estimate of performance on new data is then obtained from the corresponding test set. (A sketch of the validation-stopping variant appears after the results below.)
Hope this helps.
Greg
close all,clear all, clc, plt=0;
tic
T = simplenar_dataset;
t = cell2mat(T);
whos
[ O N ] = size(T) % [ 1 100]
Neq = prod(size(T)) % 100
rng(0)
for k = 1:100
    n = randn(1,N);
    autocorrn = nncorr(n,n,N-1,'biased');
    sortabsautocorrn = sort(abs(autocorrn));
    M = floor(0.95*(2*N-1)) % 189
    thresh95(k) = sortabsautocorrn(M);
end
sigthresh95 = mean(thresh95) % 0.2194
zt = zscore(t,1);
autocorrzt = nncorr(zt,zt,N-1,'biased');
lags = 0:N-1;
siglags = -1+find(abs(autocorrzt(N:end))>sigthresh95);
plt = plt+1, figure(plt) % Fig 1
hold on
plot( zeros(1,N), 'k--', 'LineWidth', 2 )
plot(sigthresh95*ones(1,N), 'r--', 'LineWidth', 2 )
plot(-sigthresh95*ones(1,N), 'r--', 'LineWidth', 2 )
plot( lags, autocorrzt(N:end), 'LineWidth', 2 )
plot( siglags, autocorrzt(N+siglags), 'o', 'LineWidth', 2 )
FD = 1:2
NFD = length(FD) % 2
LDB = max(FD) % 2
Ns = N-LDB % 98
Nseq = Ns*O % 98
% Nw = (NFD*O+1)*H+(H+1)*O
Hub = -1+ceil( (Nseq-O) / (NFD*O+O+1)) % 24
Hmax = floor(Hub/10) % Hmax = 2 ==> Nseq >>Nw :
Hmin = 0
dH = 1
Ntrials = 10
j=0
rng(4151941)
for h = Hmin:dH:Hmax
    j = j+1
    if h == 0
        net = narnet( FD, [] );
        Nw = ( NFD*O + 1)*O
    else
        net = narnet( FD, h );
        Nw = ( NFD*O + 1)*h + ( h + 1)*O
    end
    Ndof = Nseq-Nw
    [ Xs Xi Ai Ts ] = preparets(net,{},{},T);
    ts = cell2mat(Ts);
    MSE00s = mean(var(ts',1))
    MSE00as = mean(var(ts'))
    MSEgoal = 0.01*Ndof*MSE00as/Neq
    MinGrad = MSEgoal/10
    net.trainParam.goal = MSEgoal;
    net.trainParam.min_grad = MinGrad;
    net.divideFcn = 'dividetrain';
    for i = 1:Ntrials
        net = configure(net,Xs,Ts);
        [ net tr Ys ] = train(net,Xs,Ts,Xi,Ai);
        ys = cell2mat(Ys);
        stopcrit{i,j} = tr.stop;
        bestepoch(i,j) = tr.best_epoch;
        MSE = mse(ts-ys);
        MSEa = Nseq*MSE/Ndof;
        R2(i,j) = 1-MSE/MSE00s;
        R2a(i,j) = 1-MSEa/MSE00as;
    end
end
stopcrit = stopcrit %Min grad reached (for all).
bestepoch = bestepoch
R2 = R2
R2a = R2a
Totaltime = toc
% H = 0 1 2
%
% bestepoch = 1 7 16
% 1 7 7
% 1 7 4
% 1 5 8
% 1 6 5
% 1 5 11
% 1 8 5
% 1 4 16
% 1 5 6
% 1 3 6
%
% R2 = 0.8885 0.9948 0.9951
% 0.8885 0.9954 0.9968
% 0.8885 0.9950 0.9983
% 0.8885 0.9946 0.9958
% 0.8885 0.9951 0.9951
% 0.8885 0.9929 0.9915
% 0.8885 0.9908 0.9956
% 0.8885 0.9926 0.9914
% 0.8885 0.9922 0.9972
% 0.8885 -0.0000 0.9934
%
% R2a = 0.8861 0.9945 0.9947
% 0.8861 0.9952 0.9965
% 0.8861 0.9947 0.9981
% 0.8861 0.9944 0.9955
% 0.8861 0.9949 0.9946
% 0.8861 0.9926 0.9908
% 0.8861 0.9904 0.9952
% 0.8861 0.9923 0.9907
% 0.8861 0.9919 0.9969
% 0.8861 -0.0430 0.9928
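For completeness, a minimal sketch of the validation-stopping alternative mentioned above (same T and FD as in the code above; the split ratios and H = 2 are illustrative):

net = narnet(FD,2);                  % same feedback delays, 2 hidden nodes
net.divideFcn = 'divideblock';       % contiguous blocks instead of random division
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;   % validation set used to stop training
net.divideParam.testRatio  = 0.15;   % test set gives the unbiased estimate
[Xs,Xi,Ai,Ts] = preparets(net,{},{},T);
[net,tr] = train(net,Xs,Ts,Xi,Ai);
tr.best_vperf                        % validation performance, used to choose the best net
tr.best_tperf                        % corresponding unbiased test-set performance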
6 Comments
Tomas Simonson
on 1 Jul 2015
Mr. Heath, I am also getting a different number for the value of sigthresh95. I am using MATLAB 2015a; is it possible that the random number generator I am using is different from the one used when this was tested? I tried it with rng(0,'v4') and the result was still not what was posted in the comments. Any help is greatly appreciated. -Tomas
Greg Heath
on 1 Jul 2015
T = simplenar_dataset;
t = cell2mat(T);
[ I N ] = size(t)
meant = mean(t)
stdtb = std(t,1) % biased (divide by N)
ztb = (t-meant)/stdtb;
minmaxdztb = minmax(ztb-zscore(t,1))
stdtu = std(t,0) % unbiased (divide by N-1)
ztu = (t-meant)/stdtu;
minmaxdztu = minmax(ztu-zscore(t,0))
% Output:
% I = 1
% N = 100
% meant = 0.72345
% stdtb = 0.25161
% minmaxdztb = [ 0 0 ]
% stdtu = 0.25287
% minmaxdztu = [ 0 0 ]
Povi Nike
on 30 May 2013
1 Comment
Greg Heath
on 31 May 2013
Edited: Greg Heath
on 31 May 2013
Please do not use the answer box for comments. Use the comment box.
13 input examples - 2 delays = 11 output examples
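For example, with a made-up 13-step series and the default delays FD = 1:2:

T = con2seq(rand(6,13));            % hypothetical 13-day, 6-variable series
net = narnet(1:2,10);
[Xs,Xi,Ai,Ts] = preparets(net,{},{},T);
numel(Ts)                           % 11: the first max(FD) = 2 steps become the initial states Xi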
Greg Heath
on 31 May 2013
Your comments are not clear.
You have data for N = 212 consecutive days of the same year?
You want to calculate the O=6 autocorrelation functions and 6*5/2= 15 cross-correlation functions to find a good range of delays to use?
To avoid confusion, I suggest first concentrating on the positive lags of the 6 autocorrelation functions. If final results need improvement take a look at the cross-correlation functions of the worst predicted variables.
Apparently you used the NAR defaults FD = 1:2 (ND = 2 delays) and H = 10 (hidden nodes). With O = 6 outputs, there are
Nw = (ND*O+1)*H+(H+1)*O = 130+66 = 196
unknown weights to estimate with
Ntrneq = Ntrn*O = 6*Ntrn
training equations. For Ntrneq >> Nw, you need
Ntrn >> ~33
However, you only had the default Ntrn = 13-2*round(0.15*13)= 9.
Therefore I suggest
1. Use all of the data: Ntrn = 221-(0.3*221) = 155
2. Replace dividerand with divideblock
3. Design 10 nets to mitigate the random weight initialization.
If you need more help, please post your code.
Hope this helps.
Greg
Greg Heath
on 3 Jun 2013
% Where and how should I replace dividerand with divideblock?
net = narnet % No semicolon
See the entry for net.divideFcn? To change it use
net.divideFcn = 'divideblock';
Similarly for any other property of the net that you wish to change.
% How do I specify 10 nets in the code to mitigate the random weight initialization?
Search for my double loop codes using
greg Ntrials
I did not look at the rest of your post.
Come back with specifics if you have problems.
You might want to search for more info using
greg narnet
Hope this helps.
Greg
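A minimal sketch of the double-loop idea for Ntrials = 10 designs, assuming T is the target series as a cell array (the divideblock substitution from above is included):

net = narnet(1:2,10);
net.divideFcn = 'divideblock';                % replace the default dividerand
[Xs,Xi,Ai,Ts] = preparets(net,{},{},T);

Ntrials = 10;                                 % 10 designs to mitigate random initial weights
bestvperf = Inf;
for i = 1:Ntrials
    net = configure(net,Xs,Ts);               % re-initialize the weights
    [neti,tri] = train(net,Xs,Ts,Xi,Ai);
    if tri.best_vperf < bestvperf             % keep the design with the best validation performance
        bestvperf = tri.best_vperf;
        bestnet   = neti;
    end
end
bestvperf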
0 Comments
Povi Nike
on 5 Jun 2013
1 Comment
Greg Heath
on 1 Oct 2013
Yes, the outputs are predicted values.
However, if you wish to continue the predictions beyond the current data,
1. Make sure the current output with target feedback yields a very low error rate
2. Convert to a closed-loop configuration (closeloop).
3. Test whether the closed-loop configuration with output (not target) feedback also yields a sufficiently low error.
4. If the closed-loop configuration doesn't measure up, use train to modify the weights and improve performance.
5. Finally, run the closed-loop net beyond the original data (a sketch follows below).
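A minimal sketch of steps 2-5, assuming net is the trained open-loop NAR net and T is the original cell-array series (the number of future steps is illustrative):

% Simulate the open-loop net on the known data and keep its final delay/layer states
[Xs,Xio,Aio,Ts] = preparets(net,{},{},T);
[Yo,Xfo,Afo] = net(Xs,Xio,Aio);

% Step 2: close the loop; the final open-loop states become initial closed-loop states
[netc,Xic,Aic] = closeloop(net,Xfo,Afo);

% Step 3: check that the closed-loop net still performs well on the known data
[Xc,Xci,Aci,Tc] = preparets(netc,{},{},T);
Yc = netc(Xc,Xci,Aci);
perform(netc,Tc,Yc)

% Step 4 (if needed): retrain the closed-loop net to recover performance
% [netc,trc] = train(netc,Xc,Tc,Xci,Aci);

% Step 5: run beyond the data; with no external input, pass an empty cell row
numSteps = 5;                                  % e.g. predict 5 future days
Yfuture = netc(cell(0,numSteps),Xic,Aic)       % multistep closed-loop prediction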
Greg Heath
on 8 Jun 2013
help closeloop
doc closeloop
help removedelay
doc removedelay
Also search for these terms in NEWSGROUP and ANSWERS.
Greg
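For reference, removedelay gives the related early-prediction form: the retimed network returns the same outputs as the original, shifted one timestep earlier, so the prediction of the next value is available as soon as the current value is. A minimal sketch, assuming net is a trained open-loop NAR net and T the cell-array target series:

nets = removedelay(net);                   % minimal tap delay becomes 0 instead of 1
[Xs,Xi,Ai,Ts] = preparets(nets,{},{},T);   % preparets re-aligns the shifted series
Ys = nets(Xs,Xi,Ai);                       % same outputs as the original net, one timestep earlier
earlyPerf = perform(nets,Ts,Ys)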
0 Comments
akhilesh
on 27 Jun 2016
Using the Time Series toolbox I have generated a network model, and it takes a 4-delay input and gives a 4-delay output. The confusion is: what do the 4 delayed output values represent? Are they 4 predicted values, and if so, which one is more accurate? Please clarify.
0 Comments