What is the difference among using fitnet(), configure() or just using narxnet()?

I am trying to optimize and characterize a NARX network with 2 inputs and 2 outputs based on the number of neurons, delays, and steps and I am incorporating a 10-fold cross-validation.
I am overwhelmed by all the code I find in ANSWERS. It all claims to do the same thing, but with subtle differences: using fitnet() or configure(), determining significant lags, doing all the characterization and optimization in open-loop mode versus using closed-loop during testing, using closed-loop versus just removing the delay with removedelay(), using net.performParam.normalization = 'standard'; versus zscore for data normalization, and using mse() versus perform(). Most of the examples are just pasted code and do not explain the reasoning or where the formulas come from.
In plain words, I need help from someone with extensive experience in neural networks, so I can explain the purpose of my network and what I am trying to obtain and stop going in circles.
I would really appreciate it.

Answers (2)

Greg Heath
Greg Heath on 21 Jul 2018
Edited: Greg Heath on 22 Jul 2018
>> I am trying to optimize and characterize a NARX network with 2 inputs and 2 outputs based on the number of neurons, delays, and steps and I am incorporating a 10-fold cross-validation.
1. I do not recommend 10-fold cross-validation for time-series. It does not work well when the order and spacing of the data are fixed. Just search the NEWSGROUP and ANSWERS if you don't believe me.
Search words:
greg narxnet
greg narx
2. Best way to deal with the problem is
a. Determine the significant time delays from peaks in the absolute values of the input-target crosscorrelation function and the target autocorrelation function.
b. Determine the upper limit for number of hidden nodes by not letting the number of unknown weights exceed the number of training equations.
c. Search for the smallest number of hidden nodes that yields satisfactory results (e.g., mean-square-error < 0.01 * average target variance).
d. For each choice of the number of hidden nodes, design 10 separate networks that differ by the set of random initial weights.
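A minimal sketch of steps a-d for a single input series x and target series t (both 1xN doubles). This assumes the Signal Processing Toolbox for xcorr; the 0.21 significance threshold, the lag range, and the 70% training split are illustrative assumptions, not part of the recipe above:

```matlab
% a. Significant delays from correlation peaks
N      = numel(t);
xn     = (x - mean(x)) / std(x);           % standardize before correlating
tn     = (t - mean(t)) / std(t);
maxlag = floor(N/4);                       % heuristic upper lag (assumed)
lags   = -maxlag:maxlag;
xt     = xcorr(xn, tn, maxlag, 'coeff');   % input-target crosscorrelation
tt     = xcorr(tn,     maxlag, 'coeff');   % target autocorrelation
thresh = 0.21;                             % approx. significance level (assumed)
ID = lags(abs(xt) > thresh & lags > 0);    % significant input delays
FD = lags(abs(tt) > thresh & lags > 0);    % significant feedback delays

% b. Upper bound Hub on hidden nodes: number of unknown weights Nw must
%    not exceed the number of training equations Ntrneq = Ntrn*O
I = 1; O = 1;                              % dimensions of x and t here
Ntrneq = floor(0.70*N) * O;                % assumed 70% training split
Ni     = numel(ID)*I + numel(FD)*O;        % effective input dimension
Hub    = floor((Ntrneq - O) / (Ni + O + 1));

% c,d. For H = 1:Hub, train 10 nets with different random initial weights
%      and keep the smallest H with NMSE = mse(t - y) / var(t,1) < 0.01
```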
3. I have written several tutorials on narxnet and other timeseries models. However:
In general, my approach tends to be successful, without overfitting, for the open-loop configuration. However, I have not been successful with some of the open-loop ==> closed-loop conversions.
4. AHA!!! Perhaps, in these cases I should try overfitting!?
5. So, my advice is to check my narx/narxnet posts before starting on your own.
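For reference, the open-loop ==> closed-loop conversion mentioned above typically looks like this. A sketch only: net, X, and T are assumed to come from a prior open-loop narxnet design.

```matlab
% Convert a trained open-loop NARX net to closed loop, then re-train
% starting from the open-loop weights to limit the usual performance drop
netc            = closeloop(net);             % feed outputs back as inputs
[Xc,Xci,Aci,Tc] = preparets(netc, X, {}, T);  % re-prepare data for the new topology
netc            = train(netc, Xc, Tc, Xci, Aci);
Yc              = netc(Xc, Xci, Aci);         % multi-step-ahead predictions
perfc           = perform(netc, Tc, Yc);      % closed-loop performance
```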
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 Comment
ErikaZ
ErikaZ on 25 Jul 2018
Thanks Greg.
I am trying to follow this code
Which seems to have everything I should need from the list you gave me. However, the main issue I am encountering is that my data set is a set of multiple sequences, because all my trials (sequences) are of different lengths and represent different walking conditions.
So I do not know how to format or re-format my data to follow the code from the link.
Here is the code I originally have and I have attached a sample data set. I know you have recommended not to do 10-fold cross-validation but it is required for the gait analysis I will be doing.
- Originally, I have two 1x3 cell arrays; each cell is a condition and contains a 1x(more than 11 trials) cell array of 1x(between 100 and 500) time series.
- These are EMG = 2 inputs; Angle_Moment = 2 targets.
- You will notice that EMG has 3x(more than 11 trials) inside, but I will be using only one row at a time (pair=1; for now).
%%Definition of Constants
%1: Tib/Medial Gastroc, 2=: Tib/Lateral Gastroc, 3: RectusFemoris/Hamstrings
pair=1;
numgn=10;
%%Randomize data selection
%Use 10 trials of each condition to train the network and hold 1 to test the network and get an associated error value.
%Randomization trials
trial=cellfun(@(x) randperm(size(x,2)),EMG,'uni', false);
for i = 1:numel(EMG)
EMG{i} = EMG{i}(:,trial{i});
Angle_Moment{i} = Angle_Moment{i}(:,trial{i});
end
- After randomization, I do the 10-fold. Here, I separate one trial per condition (test_EMG_InputSeries, test_Angle_Moment_TargetSeries --> 3 sequences total) to measure performance. Currently, we test each held-back trial after training because we are interested in the performance of each condition as a whole trial.
%%Extract test data
%Extract one trial from cell array
test_EMG_InputSeries = cellfun(@(x) x(:,cv),EMG,'uni', false);
test_Angle_Moment_TargetSeries = cellfun(@(x) x(:,cv),Angle_Moment,'uni', false);
%Format data for NN and concatenate based on condition
test_EMG_InputSeries= horzcat(test_EMG_InputSeries{:});
test_Angle_Moment_TargetSeries= horzcat(test_Angle_Moment_TargetSeries{:});
- Then, I create my training data sequence using catsamples(). I end up with a {1x484}(2x30), organized as: condition1-trial1, condition2-trial1, condition3-trial1, condition1-trial2, condition2-trial2, condition3-trial2, and so on. This is important because I need to make sure I use 24 complete trials during training, 8 per condition.
%%Extract train data
EMG_InputSeries=cellfun(@(x) x(:,1:11),EMG,'uni', false);
Angle_Moment_TargetSeries=cellfun(@(x) x(:,1:11),Angle_Moment,'uni', false);
%Delete Test column
for i=1:numel(EMG_InputSeries)
EMG_InputSeries{i}(:,cv)=[];
Angle_Moment_TargetSeries{i}(:,cv)=[];
end
%Organize as walk,up,down,walk,up,down, and so on
EMG_InputSeries= reshape(vertcat(EMG_InputSeries{:}), [3 30]);
Angle_Moment_TargetSeries= reshape(vertcat(Angle_Moment_TargetSeries{:}), [1 30]);
% Format as Multiple Sequences
for i=1:size(EMG_InputSeries,1)
train_EMG_InputSeries{i}=catsamples( EMG_InputSeries{i,:},'pad');
end
train_Angle_Moment_TargetSeries=catsamples( Angle_Moment_TargetSeries{:},'pad');
-Then, create the NARXNET. Here is when I define my division as block, and my mode as 'sample' so I can use point 1-24 for training and 25-30 as validation. This way I take trials as a whole and not 80% of the time series. Here is also when I normalize my targets.
%%Create a Nonlinear Autoregressive Network with External Input
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize); % No need to initialize since we want the default random weights and biases
%Set properties of NARX network
net.divideFcn = 'divideblock';
net.divideMode = 'sample';
net.divideParam.trainRatio=0.80; net.divideParam.valRatio=0.20; net.divideParam.testRatio=0;
net.performParam.normalization = 'standard'; % Normalize errors so mse weights both outputs equally instead of favoring the one with the greater value range
%net.trainParam.max_fail = 9;
%net.trainParam.epochs=40;
% Prepare data
[inputs,inputStates,layerStates,targets,EWs,SHIFT] = preparets(net,train_EMG_InputSeries{1},{},train_Angle_Moment_TargetSeries);
- Finally, I train 10 different networks to choose the best using the 3 trials held back.
% Find Best generalized network by testing the held_back test trial
for gn=1:numgn
fprintf('Training %d/%d\n', gn, numgn);
%Train network
[nets{gn},tr{gn}] = train(net,inputs,targets,inputStates,layerStates);
%Switch the network to predictive mode
newnets{gn} = removedelay(nets{gn},delay);
for icondition =1:3
% Single Performance to evaluate generalization
[held_back_inputs,held_back_inputStates,held_back_layerStates,held_back_targets,held_back_EWs,held_back_SHIFT] = preparets(nets{gn},...
test_EMG_InputSeries{pair,icondition},{},test_Angle_Moment_TargetSeries{1,icondition});
perfsgn(icondition,gn) = perform(nets{gn},held_back_targets,nets{gn}(held_back_inputs,held_back_inputStates,held_back_layerStates)); % evaluate the open-loop net consistently
end
end
%%Select best network
mean_error = mean(perfsgn,1);
[~, indx] = min(mean_error);
net=nets{indx}; tr=tr{indx}; newnet=newnets{indx}; % save these networks
allnets{cv}=net; alltrs{cv}=tr;
I have attached a running code and a sample data set to follow the text easier.
Like I said initially, the main problem right now is to re-format my data and/or change the code in the link so I can follow it. I have a complex data set for which I can't find similar examples.
Thank you!



Greg Heath
Greg Heath on 26 Jul 2018
> I am trying to optimize and characterize a NARX network with 2 inputs and 2 outputs based on the number of neurons, delays, and steps and I am incorporating a 10-fold cross-validation.
1. If you use matrix notation, the number of inputs and outputs doesn't change the code very much.
2. I don't recommend 10-fold XVAL for time-series, because the original series is replaced by 10 series with spacings 10 times larger than the original.
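One common alternative that preserves the temporal order and spacing is a contiguous (block) split instead of k-fold. A sketch, with placeholder delays, layer size, and ratios:

```matlab
% A contiguous split keeps the original sample spacing intact
net = narxnet(1:2, 1:2, 10);               % delays/size are placeholders
net.divideFcn              = 'divideblock';
net.divideParam.trainRatio = 0.70;         % first 70% of timesteps
net.divideParam.valRatio   = 0.15;         % next 15%
net.divideParam.testRatio  = 0.15;         % last 15%
```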
> I am overwhelmed with all the codes I find in the ANSWERS.
3. Using GREG as an additional search word will reduce the number of irrelevant search results.
> It all claims to do the same thing, but with subtle differences: using fitnet() or configure(), determining significant lags, doing all the characterization and optimization in open-loop mode versus using closed-loop during testing, using closed-loop versus just removing the delay with removedelay(), using net.performParam.normalization = 'standard'; versus zscore for data normalization, and using mse() versus perform(). Most of the examples are just pasted code and do not explain the reasoning or where the formulas come from.
> In plain words, I need help from someone with extensive experience in neural networks, so I can explain the purpose of my network and what I am trying to obtain and stop going in circles.
4. It's relatively straightforward.
A. Use the commands HELP and DOC on one of
FITNET regression & curve fitting
PATTERNNET classification & pattern recognition
TIMEDELAYNET time series without feedback
NARNET time series with feedback
NARXNET time series with feedback and external input
B. Then search in the NEWSGROUP (comp.soft-sys.matlab) and ANSWERS using
GREG FITNET
ETC
Hope this helps.
Greg
