Is my neural network also using testing data to predict and not only the training and validation data?

I made a neural time series analysis similar to the default one in the nnstart wizard. The only change I made to the code was to use divideblock instead of dividerand for my data to be divided into blocks. I have my time series analysis use the earliest data as training, then later data as validation, and finally the latest data as testing, which is why I used divideblock instead of dividerand. Using dividerand would scatter testing targets throughout the time frame rather than at the very end. I'm more interested in seeing if it can make accurate test outputs after the last training data point rather than between two training points.
Out of the 7000+ input data points, I set training + validation to the first 99.8% of the data and the test to the last 0.2%, since I only want to see how well it can predict the short-term future (only 14 data points ahead). The results looked okay. I wanted to test the network manually to make sure it wasn't using the test data to learn, even though the tutorial stated the test data was independent. The tutorial's exact words for test data: "These have no effect on training and so provide an independent measure of network performance during and after training."
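In code, the division I used looks roughly like this (the exact train/validation proportions inside the first 99.8% are illustrative here, not my actual numbers):
net.divideFcn = 'divideblock';              % contiguous blocks, in time order
net.divideParam.trainRatio = 0.848;         % illustrative share of the first 99.8%
net.divideParam.valRatio   = 0.150;         % illustrative
net.divideParam.testRatio  = 0.002;         % last 0.2% (~14 of the 7000+ points)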
I set my last 4 data inputs to 0 (they would normally be in the thousands, like almost all the inputs prior). So the first 10 values of my test data were normal (in the thousands) and the very last 4 were set to 0. I ran the code, and the returned model showed the last 4 test outputs fitted to fall to 0 by the end. I went back, reset them to their normal values (in the thousands), ran it again, and the test outputs were nearly spot on (in the thousands) rather than falling to 0 as in the other runs.
If the test targets were truly independent of the training and validation data, how is this happening? Every time I ran it with the last 4 test targets at 0, the test outputs dropped to 0. When they were reset to their normal values, the outputs were again almost spot on. To me, that means it's using the test set as part of training, or at the very least using the most recent test target to make the next test output (as if to correct a mistake when it was way off on the last data point's prediction). Is there something I'm missing? Are the test targets being used incrementally for the next test output when it gets one wrong, or are they being used in the training set somehow? How can I set it to completely ignore test targets when making a test output?
Edit: Screenshot of graph attached.

Answers (2)

1. a. I'm glad that you agree with me that DIVIDEBLOCK should be the default for time series prediction ( search NEWSGROUP and ANSWERS with GREG DIVIDEBLOCK ).
b. The current default of DIVIDERAND is obviously only good for interpolation.
c. I disagree with you w.r.t. the test subset data being involved in training. It definitely should not be. You can prove this by replacing the entire test subset with zeros and setting the rng to the same initial state: you should get the same weight values (see the sketch after this list).
2. You have provided insufficient information. It is very rare that someone can get their point across clearly without posting code with comments.
3. You use the word input. I hope you are using NARNET, for which there is no applied input; otherwise things don't make sense.
4.
[ I N ] = size(input) = ?
[ O N ] = size(target) = ?
ID = input delays = ?
FD = feedback delays = ?
H = No. of hidden nodes = ?
[ Ntrn Nval Ntst]/N = 0.7/0.15/0.15 ?
5. What is the data point number for the 1st test point?
6. Please post your code.
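An untested sketch of the check in 1c, assuming your series is in a cell array T and net is a NARNET configured with DIVIDEBLOCK as you describe. GETWB returns all weights and biases as one vector. One subtlety: normalization (MAPMINMAX) and initialization both depend on the data range, so zeroing values outside the old range could shift them even though no test target enters a weight update.
rng(0);                                     % fix the initial weights
[x,xi,ai,t] = preparets(net,{},{},T);
net1 = train(net,x,t,xi,ai);
Tz = T;
Tz(end-13:end) = {0};                       % zero the last 14 points (your test block)
rng(0);                                     % same initial state again
[xz,xiz,aiz,tz] = preparets(net,{},{},Tz);
net2 = train(net,xz,tz,xiz,aiz);
max(abs(getwb(net1)-getwb(net2)))           % ~0 if test data never affects training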
Hope this helps.
Greg

3 Comments

I didn't post code since it's only the default nnstart one for time series, but here it is. All I changed in the code was dividerand to divideblock. The example I'm posting uses 70% as training, 15% as validation, and 15% as testing. To make my example even easier, the first 90 values of my 100-value data set are set to 100 and the last 10 are set to 0. I did not set all of the last 15% (the test targets, which should be 15 of them) to 0, because I don't want any possible overlap with the validation set (which should be values 71-85); that overlap is what happens if I zero the last 15 data points.
Attached is the graph showing how it picked up on its errors after it missed the value of 0 early in the test points and then started to autocorrect itself. In my opinion, it shouldn't be doing that if it were truly unaware of what the test target was. To me, the test target is an easy visual check for the user to judge whether the output is off, not something the function should use to guide its output.
Another thing I'm noticing on the graph: the net appears to use one fewer validation point and one fewer test point than I specified. Out of the 100 data points I loaded, I set the first 70 to train, the next 15 to validate, and the last 15 to test. The output graph shows the first 70 used for training, but then only 14 to validate and 14 to test. So there are only 98 points on that graph. I'm assuming it has something to do with the 2 delays, so 2 data points are not used/displayed.
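To check that assumption about the delays, something like this (a sketch with the same settings as my script below) shows where the 2 points go:
T = tonndata(Targets,false,false);          % 100 points
net = narnet(1:2,10);                       % feedbackDelays = 1:2
[x,xi,ai,t] = preparets(net,{},{},T);
numel(t)                                    % 98: the first 2 points fill the delay states
xi                                          % the 2 initial delay states (points 1 and 2)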
Attached is also my data set.
Edit: Every time I run the model, the test output is always 100 at the point where the first test target is 0 (that's the massive error shown on the graph at the change in test target). Once it "realizes" the test target was 0 and that it was wrong (i.e., at and after the second zero test target), the output starts to deviate from 100 (the errors shrink, as the graph shows). So it's somehow using the test targets to learn from its mistakes, and I want it not to do that so I can see how well it can predict 15 spots ahead. If the test data were truly not used to predict, the last 10 values in my graph should all have errors of 100 (predictions of 100, targets of 0). It only does that for the earliest test target, so the subsequent test targets are being used for predictions. The simple question I'm asking is why the graph looks like that for the last 9 values. The 10th-to-last point makes sense, predicting 100 when the target is 0. The ones after it do not, if this were a truly independent test.
% Solve an Autoregression Time-Series Problem with a NAR Neural Network
% Script generated by Neural Time Series app
% Created 25-Jun-2017 00:25:14
%
% This script assumes this variable is defined:
%
% Targets - feedback time series.
T = tonndata(Targets,false,false);
% Choose a Training Function
% For a list of all training functions type: help nntrain
% 'trainlm' is usually fastest.
% 'trainbr' takes longer but may be better for challenging problems.
% 'trainscg' uses less memory. Suitable in low memory situations.
trainFcn = 'trainlm'; % Levenberg-Marquardt backpropagation.
% Create a Nonlinear Autoregressive Network
feedbackDelays = 1:2;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,hiddenLayerSize,'open',trainFcn);
% Choose Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
net.input.processFcns = {'removeconstantrows','mapminmax'};
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer
% states. Using PREPARETS allows you to keep your original time series data
% unchanged, while easily customizing it for networks with differing
% numbers of delays, with open loop or closed loop feedback modes.
[x,xi,ai,t] = preparets(net,{},{},T);
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'divideblock'; % Divide data by block
net.divideMode = 'time'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean Squared Error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate', 'ploterrhist', ...
'plotregression', 'plotresponse', 'ploterrcorr', 'plotinerrcorr'};
% Train the Network
[net,tr] = train(net,x,t,xi,ai);
% Test the Network
y = net(x,xi,ai);
e = gsubtract(t,y);
performance = perform(net,t,y)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(t,tr.trainMask);
valTargets = gmultiply(t,tr.valMask);
testTargets = gmultiply(t,tr.testMask);
trainPerformance = perform(net,trainTargets,y)
valPerformance = perform(net,valTargets,y)
testPerformance = perform(net,testTargets,y)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, ploterrhist(e)
%figure, plotregression(t,y)
%figure, plotresponse(t,y)
%figure, ploterrcorr(e)
%figure, plotinerrcorr(x,e)
% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc,xic,aic,tc] = preparets(netc,{},{},T);
yc = netc(xc,xic,aic);
closedLoopPerformance = perform(net,tc,yc)
% Multi-step Prediction
% Sometimes it is useful to simulate a network in open-loop form for as
% long as there is known data T, and then switch to closed-loop to perform
% multistep prediction. Here the open-loop network is simulated on the
% known output series, then the network and its final delay states are
% converted to closed-loop form to produce predictions for 5 more
% timesteps.
[x1,xio,aio,t] = preparets(net,{},{},T);
[y1,xfo,afo] = net(x1,xio,aio);
[netc,xic,aic] = closeloop(net,xfo,afo);
[y2,xfc,afc] = netc(cell(0,5),xic,aic);
% Further predictions can be made by continuing simulation starting with
% the final input and layer delay states, xfc and afc.
% Step-Ahead Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is
% given y(t+1). For some applications such as decision making, it would
% help to have predicted y(t+1) once y(t) is available, but before the
% actual y(t+1) occurs. The network can be made to return its output a
% timestep early by removing one delay so that its minimal tap delay is now
% 0 instead of 1. The new network returns the same outputs as the original
% network, but outputs are shifted left one timestep.
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs,xis,ais,ts] = preparets(nets,{},{},T);
ys = nets(xs,xis,ais);
stepAheadPerformance = perform(nets,ts,ys)
% Deployment
% Change the (false) values to (true) to enable the following code blocks.
% See the help for each generation function for more information.
if (false)
    % Generate MATLAB function for neural network for application
    % deployment in MATLAB scripts or with MATLAB Compiler and Builder
    % tools, or simply to examine the calculations your trained neural
    % network performs.
    genFunction(net,'myNeuralNetworkFunction');
    y = myNeuralNetworkFunction(x,xi,ai);
end
if (false)
    % Generate a matrix-only MATLAB function for neural network code
    % generation with MATLAB Coder tools.
    genFunction(net,'myNeuralNetworkFunction','MatrixOnly','yes');
    x1 = cell2mat(x(1,:));
    xi1 = cell2mat(xi(1,:));
    y = myNeuralNetworkFunction(x1,xi1);
end
if (false)
    % Generate a Simulink diagram for simulation or deployment with
    % Simulink Coder tools.
    gensim(net);
end
Thanks for the detail. I will read it ASAP.
Unfortunately my computer was sick and I lost my MATLAB code. So far I haven't been able to load the replacement yet. Hope to get to it this week.
UGH ...
I NEVER recommend the code produced by nnstart to newbies. It is too voluminous and doesn't emphasize the important points. For example: first there is a long explanatory paragraph; then there is code that just assigns default values.
The code produced by the help and doc commands is SHORT and ALMOST SWEET.
help narnet
doc narnet
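For reference, the documentation example is essentially this (SIMPLENAR_DATASET ships with the toolbox):
T = simplenar_dataset;
net = narnet(1:2,10);                  % 2 feedback delays, 10 hidden nodes
[Xs,Xi,Ai,Ts] = preparets(net,{},{},T);
net = train(net,Xs,Ts,Xi,Ai);
Y = net(Xs,Xi,Ai);
perf = perform(net,Ts,Y)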
I explain the "ALMOST" in my recent SHORT & SWEET (;>) QUICKIES post
https://www.mathworks.com/matlabcentral/newsreader/view_thread/348883#954773
Later,
Greg
1. YOUR EXPERIMENT IS FAULTY
2. The basic assumption of NN learning is that the training, validation, and test subsets have similar summary statistics: mean, standard deviation, and correlation function.
3. Obviously your test subset has a different mean.
4. In addition, if the training subset has no variance, the function can be modeled with zero-valued weights: the output can be created solely by the biases.
5. One way to prove that the test set does not affect learning is
a. Use the example data in the documentation
i) help narnet
ii) doc narnet
b. Assign an initial state to the RNG (e.g., rng(0))
c. Design net1 with the documentation code in a and DIVIDEBLOCK
d. Reassign the same initial RNG state as in b
e. Design net2 with DIVIDEBLOCK and Ntst = 0
f. Compare the two sets of weights and biases.
6. I have not demonstrated this. However, if you have the time ... (;>)
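An untested sketch of 5, using DIVIDEIND rather than DIVIDEBLOCK so that the train/val indices stay identical when the test set is dropped (DIVIDEBLOCK renormalizes the ratios, which would shift the boundaries):
T = simplenar_dataset;
Q = numel(T) - 2;                           % 2 feedback delays consume 2 samples
trnInd = 1:round(0.70*Q);
valInd = round(0.70*Q)+1:round(0.85*Q);
tstInd = round(0.85*Q)+1:Q;
rng(0);
net1 = narnet(1:2,10);
net1.divideFcn = 'divideind';
net1.divideParam.trainInd = trnInd;
net1.divideParam.valInd = valInd;
net1.divideParam.testInd = tstInd;
[x,xi,ai,t] = preparets(net1,{},{},T);
net1 = train(net1,x,t,xi,ai);
rng(0);
net2 = narnet(1:2,10);
net2.divideFcn = 'divideind';
net2.divideParam.trainInd = trnInd;
net2.divideParam.valInd = valInd;
net2.divideParam.testInd = [];              % Ntst = 0
[x,xi,ai,t] = preparets(net2,{},{},T);
net2 = train(net2,x,t,xi,ai);
max(abs(getwb(net1)-getwb(net2)))           % expect ~0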
Hope this helps.
Greg


Thanks for your reply. The first time I ran it and noticed this bias, my training subset had 7000+ data points with variance (that was my first post, with the first attached image). I created the second test with far fewer data points (only 100), and all the training values set to 100, to simplify my question as much as possible, since the bias was reproducible and more obvious there (the image in my second post). All I'm asking is why the test output follows the test targets when I have trained it not to do so.
I understand this statement you made: basic assumption of NN learning is that the training, validation, and test subsets have similar summary secondary statistics.
I wanted to see whether that statement actually holds in the implementation, and from what I ran, it doesn't appear to. I intentionally trained my data to give an output of 100 every single time. Instead, it used the test targets to adjust its response. I don't see anywhere in MATLAB's documentation how or why it uses my test target to adjust its test output when the two should be independent.
I think using a training data set with zero variance actually puts their claim to the test, because I know EXACTLY what the test output should be when I've trained it exactly what to say. If I used a data set with a lot of variance to train, I wouldn't be able to tell whether the variable test outputs were biased toward the test targets or not. The way I did it shows that it is biased and is using the test targets to adjust. I want to know how it's doing that, and I want to remove that behavior so it gives an unbiased output. I may have to contact MATLAB directly, as I think it's something in their function code that I can't change.

5 Comments

The above is a comment, not an answer. If you cut and paste it into a comment box, preceded by an explanation of why you did it, I will delete the copy in the answers box.
Or, if you wish, I can do both.
Greg
There's no need to delete it. I contacted MATLAB and they agreed that the neural net was changing weights based on the target values when it shouldn't be. They've sent it to their developers to look into further.
Once I hear more, I'll post it here for future reference as I know other people have run into the same problem.
Edit: And MATLAB has the link to what I've posted in this thread to assist them in figuring out the problem they acknowledge is happening. Please do not delete or edit any of the text I've already posted. I don't need any help from other users at this point, as it's now in the hands of their developers. Thanks.
I do not edit posts unless there are glaring errors in English.
However, I do move comments from answer blocks into comment blocks.
What you have posted is a comment, not an answer.
I respect your effort. That is why I asked NOT to edit it, but to move it to a comment box.
Again, what you have posted uses faulty logic:
Default NARNET outputs are linearly dependent on
1. The bias weight b
2. The two previous values of the target series (the feedback delays)
3. [ 100 100 0 0 0 0 0 ...] ==> [ ok ok ok nope b b b ...]
(once both delayed values are 0, only the bias term drives the output)
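In other words, the open-loop net's test-period "inputs" ARE the preceding test targets, even though those targets never enter a weight update. You can see the difference in code (a sketch reusing the names from your posted script):
% Open loop: y(t) is computed from the measured targets y(t-1), y(t-2),
% so the test-period outputs naturally track the test targets.
[x,xi,ai,t] = preparets(net,{},{},T);
yOpen = net(x,xi,ai);
% Closed loop: y(t) is computed from the net's own previous outputs,
% so the test targets cannot leak into the test-period predictions.
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},T);
yClosed = netc(xc,xic,aic);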
Greg
To give an update, I've been going back and forth with MATLAB on this. My conclusion is that the test targets affected the test outputs when they should not have, so MATLAB and I changed the code I posted earlier to set "net.divideParam.testRatio" to 0 and the other two ratios to total 100 (70 and 30). Since the test data was supposed to be "independent" of training, I wanted to make sure it had zero effect by removing it entirely; I will do any comparison of test data to predictions offline in Excel.
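Concretely, the division block of the posted script became:
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 30/100;
net.divideParam.testRatio  = 0/100;   % no held-out block; comparison done offline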
At any rate, the issue I'm having now is running the net after it's trained to generate an output. I'm awaiting a response from MATLAB, but the commands I was given generated outputs different from what I expected, and I'm not sure the command was time-series-based (the new net I trained used a simple linear y = x data set, so time point 1 = 1, 2 = 2, etc., up to 100).
My question is: how can I take the trained network (which used time points 1 to 100) and have it generate an output at time point 101? There is no input, in the sense that there is no data I'm entering to get an output other than a specific time point. I'd like a simple command where I ask the trained net for the value at time point 101 after it has been trained and validated on time points 1 to 100.
Hopefully such a simple command exists and it works and I can put this topic to rest.
I got the answer from MATLAB. After the net is trained, these commands need to be run. I was able to get all my questions answered by MATLAB support and no longer need assistance in this thread. Thanks.
>> test = zeros(1, 5)                     % 5 = the number of values forward you want predicted
>> testData = tonndata(test, true, false) % convert to neural network time series format
>> testResult = netc(testData, xic, aic)  % netc, xic, aic from the closed-loop section above
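For comparison, the "Multi-step Prediction" section of the generated script reaches the same place using the final delay states and a 0-row cell array whose column count sets the number of steps:
[x1,xio,aio] = preparets(net,{},{},T);      % run open loop over the known series
[~,xfo,afo]  = net(x1,xio,aio);             % capture the final delay states
[netc,xic,aic] = closeloop(net,xfo,afo);    % seed the closed loop with them
y2 = netc(cell(0,5),xic,aic)                % 5 steps past the last known point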


Asked on 25 Jun 2017
Commented on 18 Jul 2017
