Error in helperModClassTrainingOptions (line 29) 'CheckpointPath',checkpointPath,...
john karli on 16 Feb 2022
Commented: Joss Knight on 18 Feb 2022
I want to train the model using the following link.
I want to save a checkpoint at every epoch, but when I run the following section:
checkpointPath = pwd;
maxEpochs = 20;
miniBatchSize = 128;
options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
  numel(rxTrainLabels),rxValidFrames,rxValidLabels);
trainedNettime = trainNetwork(rxTrainFrames,rxTrainLabels,lgraph_1 ,options);
save trainedNettime
I get this error:
Unrecognized function or variable 'checkpointPath'.
Error in helperModClassTrainingOptions (line 29)
'CheckpointPath',checkpointPath,...
My helperModClassTrainingOptions function is:
function options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
  trainingSize,rxValidFrames,rxValidLabels)
%helperModClassTrainingOptions Modulation classification training options
%   OPT = helperModClassTrainingOptions(MAXE,MINIBATCH,NTRAIN,Y,YLABEL)
%   returns the training options, OPT, for the modulation classification
%   CNN, where MAXE is the maximum number of epochs, MINIBATCH is the mini
%   batch size, NTRAIN is the number of training frames, Y is the
%   validation frames and YLABEL is the labels.
%
%   This function configures the training options to use an SGDM solver.
%   By default, the 'ExecutionEnvironment' property is set to 'auto', where
%   the trainNetwork function uses a GPU if one is available or uses the
%   CPU, if not. To use the GPU, you must have a Parallel Computing Toolbox
%   license. Set the initial learning rate to 2e-2. Reduce the learning
%   rate by a factor of 10 every 9 epochs. Set 'Plots' to
%   'training-progress' to plot the training progress.
%   
%   See also ModulationClassificationWithDeepLearningExample.
%   Copyright 2019 The MathWorks, Inc.
validationFrequency = floor(trainingSize/miniBatchSize);
options = trainingOptions('sgdm', ...
  'InitialLearnRate',1e-3, ...
  'MaxEpochs',maxEpochs, ...
  'MiniBatchSize',miniBatchSize, ...
  'Shuffle','every-epoch', ...
  'Plots','training-progress', ...
  'CheckpointPath',checkpointPath,...
  'ValidationData',{rxValidFrames,rxValidLabels}, ...
  'ValidationFrequency',validationFrequency, ...
  'Verbose',false, ...
   'LearnRateSchedule', 'piecewise', ...
  'LearnRateDropPeriod', 9, ...
  'LearnRateDropFactor', 0.1);
0 Comments
Accepted Answer
Joss Knight on 16 Feb 2022
You need to pass the checkpointPath variable to your function.
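A minimal sketch of what that fix could look like, assuming you simply extend the helper's signature with a sixth checkpointPath argument (the rest of the function body stays exactly as posted above):

% In helperModClassTrainingOptions.m: accept the checkpoint folder as an input
function options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
  trainingSize,rxValidFrames,rxValidLabels,checkpointPath)
  % ... same body as above; 'CheckpointPath',checkpointPath now refers to
  % the input argument instead of an undefined variable.
end

% In the calling script: pass the folder along with the other arguments
checkpointPath = pwd;
options = helperModClassTrainingOptions(maxEpochs,miniBatchSize,...
  numel(rxTrainLabels),rxValidFrames,rxValidLabels,checkpointPath);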
5 Comments
Joss Knight on 18 Feb 2022
The final validation is computed after a final pass that computes the batch normalization statistics. Some networks are particularly sensitive to the difference between the mini-batch statistics and those of the whole dataset. Make sure your dataset is shuffled and your mini-batch size is as large as possible. To avoid this extra pass (at a small additional performance cost), use moving averages (see the BatchNormalizationStatistics training option).
I can't explain why it's not checkpointing the network every epoch.
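For reference, here is a minimal sketch of the moving-average variant mentioned above, assuming it is enabled by setting the BatchNormalizationStatistics training option to 'moving' inside the same trainingOptions call used in the helper:

% Same options as in the helper, plus moving-average batch norm statistics,
% which avoids the extra final pass (at a small additional performance cost).
options = trainingOptions('sgdm', ...
  'InitialLearnRate',1e-3, ...
  'MaxEpochs',maxEpochs, ...
  'MiniBatchSize',miniBatchSize, ...
  'Shuffle','every-epoch', ...
  'Plots','training-progress', ...
  'BatchNormalizationStatistics','moving', ...
  'CheckpointPath',checkpointPath, ...
  'ValidationData',{rxValidFrames,rxValidLabels}, ...
  'ValidationFrequency',validationFrequency, ...
  'Verbose',false, ...
  'LearnRateSchedule','piecewise', ...
  'LearnRateDropPeriod',9, ...
  'LearnRateDropFactor',0.1);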
More Answers (0)