Input data must be a formatted dlarray.
Shilpa Sonawane on 21 Feb 2023
Commented: Shilpa Sonawane on 24 Feb 2023
I have used the VAE code to generate images. My aim is to find the probability distribution of an MFCC signal. The input is an MFCC matrix of size 40x24. I got the error: "Input data must be a formatted dlarray."
Please provide guidance to resolve this error.
clear all;
close all;
clc;
folder='D:\SAS WORK\CODING\MY_WORK_572021\datastore';
ADS = audioDatastore(folder);
load S1_0_02_tr_mfcc.mat;
load S1_1_02_tr_mfcc.mat;
XTrain=S1_0_02_mfcc;
XTest=S1_1_02_mfcc((1:40),:);
inputSize=[40 24];
numLatentChannels = 16;
imageSize = [40 24 1];
layersE = [
    imageInputLayer(imageSize,Normalization="none")
    convolution2dLayer(3,32,Padding="same",Stride=2)
    reluLayer
    convolution2dLayer(3,64,Padding="same",Stride=2)
    reluLayer
    fullyConnectedLayer(2*numLatentChannels)
    samplingLayer];
projectionSize = [7 7 64];
numInputChannels = size(imageSize,1);
layersD = [
    featureInputLayer(numLatentChannels)
    projectAndReshapeLayer(projectionSize)
    transposedConv2dLayer(3,64,Cropping="same",Stride=2)
    reluLayer
    transposedConv2dLayer(3,32,Cropping="same",Stride=2)
    reluLayer
    transposedConv2dLayer(3,numInputChannels,Cropping="same")
    sigmoidLayer];
netE = dlnetwork(layersE);
netD = dlnetwork(layersD);
numEpochs = 30;
miniBatchSize = 128;
learnRate = 1e-3;
trailingAvgE = [];
trailingAvgSqE = [];
trailingAvgD = [];
trailingAvgSqD = [];
numObservationsTrain = 2;%size(XTrain,4);
numIterationsPerEpoch = ceil(numObservationsTrain / miniBatchSize);
numIterations = numEpochs * numIterationsPerEpoch;
monitor = trainingProgressMonitor( ...
    Metrics="Loss", ...
    Info="Epoch", ...
    XLabel="Iteration");
epoch = 0;
iteration = 0;
% Loop over epochs.
while epoch < numEpochs %&& ~monitor.Stop
    epoch = epoch + 1;
    % Shuffle data.
    %shuffle(mbq);
    % Loop over mini-batches.
    while (iteration <= size(XTrain,1)) %hasdata(mbq) && ~monitor.Stop
        iteration = iteration + 1;
        % Read mini-batch of data.
        X = XTrain(iteration,:) %next(mbq);
        % Evaluate loss and gradients.
        [loss,gradientsE,gradientsD] = dlfeval(@modelLoss,netE,netD,X);
        % Update learnable parameters.
        [netE,trailingAvgE,trailingAvgSqE] = adamupdate(netE, ...
            gradientsE,trailingAvgE,trailingAvgSqE,iteration,learnRate);
        [netD,trailingAvgD,trailingAvgSqD] = adamupdate(netD, ...
            gradientsD,trailingAvgD,trailingAvgSqD,iteration,learnRate);
        % Update the training progress monitor.
        recordMetrics(monitor,iteration,Loss=loss);
        updateInfo(monitor,Epoch=epoch + " of " + numEpochs);
        monitor.Progress = 100*iteration/numIterations;
    end
end
_____________________
Error
Error using dlnetwork/validateForwardInputs
Input data must be a formatted dlarray.
Error in dlnetwork/forward (line 761)
[x, doForwardExampleInputs] = validateForwardInputs(net, x, "forward");
Error in modelLoss (line 4)
[Z,mu,logSigmaSq] = forward(netE,X);
Error in deep.internal.dlfeval (line 17)
[varargout{1:nargout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nargout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SS_19_FEB_2023_datastore (line 77)
[loss,gradientsE,gradientsD] = dlfeval(@modelLoss,netE,netD,X);
0 Comments
Accepted Answer
Matt J on 22 Feb 2023
Edited: Matt J on 22 Feb 2023
You have not provided us the means to run your code (the implementation of modelLoss is missing, as is a sample of the input data). However, my guess is that your modelLoss function tries to evaluate dlgradient, which requires its inputs to be of type dlarray, whereas X is an ordinary MATLAB numeric array.
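For illustration only, a minimal sketch of that kind of conversion, assuming XTrain is the 40x24 numeric MFCC matrix and each observation should enter imageInputLayer([40 24 1]) as one image (the reshape, single cast, and indexing below are assumptions, not code from this thread):
% Hypothetical conversion: wrap the numeric MFCC data in a formatted dlarray
% before dlfeval so that forward(netE,X) and dlgradient can accept it.
X = reshape(single(XTrain),40,24,1,[]); % height-by-width-by-channel-by-batch
X = dlarray(X,"SSCB");                  % label dimensions: Spatial, Spatial, Channel, Batch
[loss,gradientsE,gradientsD] = dlfeval(@modelLoss,netE,netD,X);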
5 Comments
Matt J on 23 Feb 2023
Well, we cannot troubleshoot something we can't see, and we cannot see what you did. I repeat my advice that you add sufficient material to your post to allow us to run your code.
More Answers (1)
Brian Hemmat on 23 Feb 2023
As Matt J said, the mfcc function, and all feature extraction functions provided by Audio Toolbox, do not support dlarrays as of R2023a.
You can find an example implementation of a log mel spectrogram used in a loss function here:
Below is example code of MFCC feature extraction in a loss function. The MFCC extraction is in the supporting functions. I've couched it within a trivial denoising example that minimizes the loss between MFCC extracted from clean speech and MFCC extracted from noisy speech.
% This example was created with R2022b and requires Audio Toolbox(TM) and Deep Learning Toolbox(TM)
%% Ingest the Free Spoken Digit Dataset
loc = matlab.internal.examples.downloadSupportFile("audio","FSDD.zip");
unzip(loc,pwd)
ads = audioDatastore(pwd,IncludeSubfolders=true);
fs = 8e3; % sample rate of all files in dataset
% Split datastore into train and validation sets
train_idx = 1:round(numel(ads.Files)*0.8);
val_idx = round(numel(ads.Files)*0.8)+1:numel(ads.Files);
adsValidation = subset(ads,val_idx);
adsTrain = subset(ads,train_idx);
% Create combined datastores where the target is the clean signal and the
% predictor is the signal with added noise.
xadsTrain = transform(adsTrain,@(x)irescale(x+pinknoise(size(x))));
adsTrain = transform(adsTrain,@(x){irescale(x)});
cdsTrain = combine(xadsTrain,adsTrain);
xadsValidation = transform(adsValidation,@(x)irescale(x+pinknoise(size(x))));
adsValidation = transform(adsValidation,@(x){irescale(x)});
cdsValidation = combine(xadsValidation,adsValidation);
% Create minibatchqueue objects to create mini-batches of data and speed up
% training.
miniBatchSize = 128;
mbq = minibatchqueue(cdsTrain, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@(x,t)preprocessMiniBatch(x,t), ...
    MiniBatchFormat=["TCB","TCB"]);
mbqValidation = minibatchqueue(cdsValidation, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@(x,t)preprocessMiniBatch(x,t), ...
    MiniBatchFormat=["TCB","TCB"], ...
    PartialMiniBatch="discard");
%% Verify dlmfcc implementation
% Use melSpectrogram followed by cepstralCoefficients. This is the same
% implementation that audioFeatureExtractor uses.
x = read(ads);
x = single(x);
S = melSpectrogram(x,fs,WindowNormalization=false);
exp = cepstralCoefficients(S);
% The extra permutes are necessary because dlmfcc expects input that has
% been moved into CBT order.
act = permute(extractdata(dlmfcc(dlarray(permute(x,[2,3,1])),fs)),[3,1,2]);
% Inspect the difference
norm(exp-act)
%% Define network
% Define network. This is basically a hello-world network, randomly made.
layers = [
    sequenceInputLayer(1,MinLength=1e3)
    bilstmLayer(32,OutputMode="sequence")
    convolution1dLayer(5,1,Padding="same")
    ];
net = dlnetwork(layers);
analyzeNetwork(net)
%% Define Training Options
% Define training parameters and initialize variables
maxEpochs = 20;
iteration = 0;
averageGrad = [];
averageSqGrad = [];
learnRate = 0.001;
%% Train Network
% Create a progress monitor to visualize training
monitor = trainingProgressMonitor( ...
    Metrics=["TrainingLoss","ValidationLoss"], ...
    Info="Epoch");
groupSubPlot(monitor,"loss",["TrainingLoss","ValidationLoss"])
% Main training loop
for epoch = 1:maxEpochs
    % Update plot info
    updateInfo(monitor,Epoch=epoch)
    % Shuffle dataset each epoch
    shuffle(mbq)
    while hasdata(mbq)
        iteration = iteration + 1;
        % Get next mini batch
        [X,T] = next(mbq);
        % Pass the predictors through the network and return the loss and
        % gradients.
        [loss,gradients] = dlfeval(@modelLoss,net,X,T);
        % Update the network parameters using the ADAM optimizer.
        [net,averageGrad,averageSqGrad] = adamupdate(net,gradients, ...
            averageGrad,averageSqGrad,iteration,learnRate);
        % Update training progress visualization
        loss = gather(extractdata(loss));
        recordMetrics(monitor,iteration,TrainingLoss=loss)
        if monitor.Stop
            break
        end
    end
    if monitor.Stop
        break
    end
    % Update validation progress visualization
    shuffle(mbqValidation)
    totalLoss = [];
    while hasdata(mbqValidation)
        [X,T] = next(mbqValidation);
        Y = predict(net,X);
        % Compute loss
        Y = stripdims(Y);
        T = stripdims(T);
        Ym = dlmfcc(Y,fs);
        Tm = dlmfcc(T,fs);
        loss = mse(Ym,Tm)./(size(Tm,1)*size(Tm,3));
        totalLoss = [totalLoss;loss]; %#ok<AGROW>
    end
    validationLoss = mean(totalLoss);
    recordMetrics(monitor,iteration,ValidationLoss=validationLoss)
end
%% Supporting Functions
function [loss,gradients] = modelLoss(net,X,T)
% Forward through network
Y = forward(net,X);
% Compute loss
Ym = dlmfcc(Y,8e3);
Tm = dlmfcc(T,8e3);
loss = mse(Ym,Tm)./(size(Tm,1)*size(Tm,3));
% Compute gradients
gradients = dlgradient(loss,net.Learnables);
end
function z = dlmfcc(x,fs,options)
arguments
    x
    fs
    options.Window = hamming(round(0.03*fs),"periodic")
    options.OverlapLength = round(0.02*fs)
    options.NumCoeffs = 13
    options.NumBands = 32
end
x = stripdims(x);
dctmatrix = createDCTmatrix(options.NumCoeffs,options.NumBands);
M = dlmelspectrogram(x,fs, ...
    Window=options.Window, ...
    OverlapLength=options.OverlapLength, ...
    NumBands=options.NumBands);
% Apply log10
M = log(M+eps)/log(10);
y = pagemtimes(dctmatrix,M);
y = reshape(y,size(y,1),size(y,3),size(y,4));
z = dlarray(y,"CBT");
end
function [x,t] = preprocessMiniBatch(xcell,tcell)
x = padsequences(xcell,1,Length="shortest");
t = padsequences(tcell,1,Length="shortest");
end
function y = dlmelspectrogram(x,fs,options)
%dlmelspectrogram Mel spectrogram compatible with dlarray
%   y = dlmelspectrogram(x,fs) computes a mel spectrogram from the audio
%   input.
arguments
    x
    fs
    options.Window = hamming(round(0.03*fs),"periodic")
    options.OverlapLength = round(0.02*fs)
    options.NumBands = 32
    options.SpectrumType {mustBeMember(options.SpectrumType,{'power','magnitude'})} = 'power'
end
filterBank = designAuditoryFilterBank(fs, ...
    FFTLength=numel(options.Window), ...
    NumBands=options.NumBands); % NumBands-by-FFTLength
% Short-time Fourier transform
[yr,yi] = dlstft(x, ...
    DataFormat="CBT", ...
    Window=options.Window, ...
    OverlapLength=options.OverlapLength);
% Power spectrum
y = abs(yr).^2 + abs(yi).^2; % FFTLength-by-1-by-BatchSize-by-NumHops
% Apply filter bank
y = pagemtimes(filterBank,y); % NumBands-by-1-by-BatchSize-by-NumHops
end
function matrix = createDCTmatrix(NumCoeffs,NumFilters)
N = NumCoeffs;
K = NumFilters;
matrix = zeros(N,NumFilters,'single');
A = sqrt(1/K);
B = sqrt(2/K);
C = 2*K;
piCCast = single(2*pi/C);
matrix(1,:) = A;
for k = 1:K
    for n = 2:N
        matrix(n,k) = B*cos(piCCast*(n-1)*(k-0.5));
    end
end
end
function y = irescale(x)
y = x./max(abs(x(:)));
end