TCN model to predict continuous variable
Hello there, I am trying to build a TCN machine learning model for regression (to predict a continuous variable), similar to the example here: https://www.mathworks.com/help/deeplearning/ug/sequence-to-sequence-classification-using-1-d-convolutions.html#SeqToSeqClassificationUsing1DConvAndModelFunctionExample-11. I have time series data with 3 input features (accelerometer measurements in the x, y, and z directions), but instead of classifying an activity, I am trying to estimate/predict a continuous variable. My data is stored in a table with the columns Time, Accel_X, Accel_Y, Accel_Z, and ResponseVariable. How would I modify the code here:
numFilters = 64;
filterSize = 5;
dropoutFactor = 0.005;
numBlocks = 4;
numFeatures = 3;    % Accel_X, Accel_Y, Accel_Z

net = dlnetwork;

layer = sequenceInputLayer(numFeatures,Normalization="rescale-symmetric",Name="input");
net = addLayers(net,layer);
outputName = layer.Name;
for i = 1:numBlocks
    dilationFactor = 2^(i-1);

    % Residual block: two causal dilated convolutions with layer normalization,
    % ReLU, and spatial dropout (spatialDropoutLayer is the custom layer defined
    % in the linked example).
    layers = [
        convolution1dLayer(filterSize,numFilters,DilationFactor=dilationFactor,Padding="causal",Name="conv1_"+i)
        layerNormalizationLayer
        spatialDropoutLayer(dropoutFactor)
        convolution1dLayer(filterSize,numFilters,DilationFactor=dilationFactor,Padding="causal")
        layerNormalizationLayer
        reluLayer
        spatialDropoutLayer(dropoutFactor)
        additionLayer(2,Name="add_"+i)];

    % Add and connect layers.
    net = addLayers(net,layers);
    net = connectLayers(net,outputName,"conv1_"+i);

    % Skip connection.
    if i == 1
        % Include convolution in first skip connection.
        layer = convolution1dLayer(1,numFilters,Name="convSkip");
        net = addLayers(net,layer);
        net = connectLayers(net,outputName,"convSkip");
        net = connectLayers(net,"convSkip","add_" + i + "/in2");
    else
        net = connectLayers(net,outputName,"add_" + i + "/in2");
    end

    % Update layer output name.
    outputName = "add_" + i;
end
layers = [
    fullyConnectedLayer(numClasses,Name="fc")
    softmaxLayer];

net = addLayers(net,layers);
net = connectLayers(net,outputName,"fc");
options = trainingOptions("adam", ...
    MaxEpochs=60, ...
    MiniBatchSize=1, ...
    InputDataFormats="CTB", ...
    Plots="training-progress", ...
    Metrics="accuracy", ...
    Verbose=0);

net = trainnet(DataTrain,DataTrain.ResponseVariable,net,"crossentropy",options);
0 Comments
Answers (1)
Ayush Aniket on 13 May 2024
Edited: Ayush Aniket on 13 May 2024
Hi Isabelle,
For a regression task you will need to make the following changes to the architecture and training process of your Temporal Convolutional Network (TCN):
- Since you are performing regression, replace the final fullyConnectedLayer with one whose output size matches your prediction task (usually 1 for a single continuous variable), and remove the softmaxLayer, which is specific to classification tasks.
- For regression, a common loss function is mean squared error (MSE) instead of the cross-entropy loss used in classification. You need to specify this loss in the trainnet function.
- Accuracy is not a relevant metric for this task. Instead, track the loss or a regression-specific metric such as RMSE (root mean squared error) in the trainingOptions function; see the sketch below.
Refer to the documentation links below to read more about the loss functions and various metrics.
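As a rough sketch of those changes (assuming the TCN body is built as in the code above, and using placeholder names XTrain and TTrain for the predictor sequences and continuous targets prepared from your table; adjust these to your own variables):

% Regression head: one output per response, no softmaxLayer.
numResponses = 1;    % single continuous response variable
layer = fullyConnectedLayer(numResponses,Name="fc");
net = addLayers(net,layer);
net = connectLayers(net,outputName,"fc");

options = trainingOptions("adam", ...
    MaxEpochs=60, ...
    MiniBatchSize=1, ...
    InputDataFormats="CTB", ...
    Plots="training-progress", ...
    Metrics="rmse", ...    % regression metric instead of "accuracy"
    Verbose=0);

% XTrain: predictor sequences, e.g. a numFeatures-by-numTimeSteps matrix (or a
% cell array of such matrices) built from the Accel_X, Accel_Y, Accel_Z columns.
% TTrain: the matching ResponseVariable targets. Both names are placeholders.
net = trainnet(XTrain,TTrain,net,"mse",options);    % "mse" loss instead of "crossentropy"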