I found that minibatchqueue has an 'OutputCast' name-value argument, which defaults to 'single', so I set it to 'double'. But I haven't checked whether the GPU training process itself uses single precision, as mentioned by @Walter Roberson.
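A minimal sketch of that fix (reusing the `dsTrain` datastore and `preprocessMiniBatch` helper from the question below; untested):

```
% Sketch: keep double precision by overriding the default OutputCast.
mbqTest = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=128, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    OutputCast="double", ...       % default is "single"
    PartialMiniBatch="discard");
```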
minibatchqueue or arrayDatastore drops my data precision from double to single
I get XTrain from MNIST using processImagesMNIST and put it on the GPU, so its type is gpuArray dlarray.
Then I use this code to make minibatches:
```
miniBatchSize = 128;
dsTrain = arrayDatastore(XTrain,IterationDimension=4);
% numOutputs = 1;
mbqTest = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    PartialMiniBatch="discard");
% numObservationsTrain = size(XTrain,4);
% numIterationsPerEpoch = ceil(numObservationsTrain / miniBatchSize);
% numIterations = numEpochs * numIterationsPerEpoch;
%% test batch order
i = 0;
while hasdata(mbqTest)
    i = i+1;
    x = next(mbqTest);
    if ~hasdata(mbqTest)
        disp(i)
    end
end
```
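To see where the cast happens, one way is to inspect the underlying numeric type at each stage (a sketch; `underlyingType` accepts both gpuArray and dlarray inputs):

```
% Compare the precision of the raw data and of a fetched minibatch.
underlyingType(XTrain)   % precision of the original data
x = next(mbqTest);
underlyingType(x)        % precision after minibatchqueue's OutputCast
```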
I find that x is a single gpuArray dlarray, while XTrain is a double gpuArray dlarray.
I wonder which part lowers the precision, and how can I avoid it?
Answers (1)
Walter Roberson
on 28 Sep 2022
GPU training does not support double precision. If you look at the available training options, precision cannot be selected.
0 Comments