fminunc : A VERY STRANGE PROBLEM!

10 views (last 30 days)
Emiliano Rosso on 7 Nov 2022
Edited: Bruno Luong on 9 Nov 2022
Hi,
I use fminunc to solve a minimization problem. fminunc makes hundreds of calls to a simple function that I optimized for the GPU to improve performance.
This is what happens when I compare a single call on the CPU and on the GPU (the test is external to fminunc):
TVD=tvd_sim2_mex(x,y, lam, Nit,t); % 0.018 s (GPU MEX)
TVD=tvd_sim2(x,y, lam, Nit,t); % 0.003 s (CPU, 6x faster)
As you can see, the CPU is 6x faster.
The GPU profiler tells me the problem is GPU memory allocation.
And this is what happens when I call the function 1000 times:
for i=1:1000
tic
TVD=tvd_sim2_mex(x,y, lam, Nit,t);
mytime(i)=toc;
end % mean ~0.0005 s per call (GPU MEX, 6x faster)
TVD=tvd_sim2(x,y, lam, Nit,t); % 0.003 s (CPU)
As you can see, now the GPU is 6x faster.
Now... I can't know exactly what happens inside the fminunc function, but I can say without doubt that the only difference between the two situations is the function tvd_sim2; no modification to fminunc has been made.
The GPU always frees its memory at the end of every single call.
The function tvd_sim2 is compiled only once, before fminunc runs.
This is what happens when I compare fminunc using tvd_sim2 against fminunc using tvd_sim2_mex (the function tvd_sim launches fminunc):
tic
[y, cost] = tvd_sim(x, lam, Nit,t); % Run with tvd_sim2
toc
Solver stopped prematurely.
fminunc stopped because it exceeded the iteration limit,
options.MaxIterations = 5.000000e+01.
Elapsed time is 48.020835 seconds.
and:
tic
[y, cost] = tvd_sim(x, lam, Nit,t); % Run with tvd_sim2_mex
toc
Solver stopped prematurely.
fminunc stopped because it exceeded the iteration limit,
options.MaxIterations = 5.000000e+01.
Elapsed time is 179.953791 seconds.
In a few words... why does it go slower even though it goes faster?
I thought: in my "1000 times" for-loop the variable y is always the same, whereas fminunc changes it on every call; that is inherent to the optimization.
But this turned out to be a false lead:
for i=1:1000
y=rand(4096,1);
tic
TVD=tvd_sim2_MEX_mex(x,y, lam, Nit,t);
mytime(i)=toc;
end
disp('mean time:');
disp(mean(mytime));
mean time:
5.5624e-04
The fact is that the GPU reallocates the memory on every function call; it makes no difference whether the input is the same or different!
I attach screenshots of the function's run with and without the GPU. As you can see, almost all of the time is spent in this function.
Which environmental variable (in the broad sense) can so substantially change the performance of a GPU running the same code?
Or, despite appearances, is it not actually the same code?
Thanks!
  23 Comments
Bruno Luong on 9 Nov 2022
Edited: Bruno Luong on 9 Nov 2022
Please read the fminunc documentation, especially the part about the option 'SpecifyObjectiveGradient' set to true.
If you don't provide the gradient, MATLAB calls the objective 4000-8000 times to estimate the gradients by finite differences; if you do provide it, MATLAB does not need those ~4000 extra evaluations and gets the gradient directly from your function. Imagine the time you could save.
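Bruno's suggestion can be sketched roughly as follows (untested; the quadratic is a toy stand-in for tvd_sim2, whose analytic gradient would have to be derived separately):

```matlab
% Objective that also returns its gradient. With the option
% 'SpecifyObjectiveGradient' set to true, fminunc requests the gradient
% via the second output instead of estimating it by finite differences.
function [f, g] = objWithGrad(x)
    f = sum((x - 1).^2);   % toy objective, stand-in for tvd_sim2
    if nargout > 1         % the solver asked for the gradient too
        g = 2*(x - 1);     % analytic gradient of the toy objective
    end
end
```

It would then be called with something like opts = optimoptions('fminunc','SpecifyObjectiveGradient',true); [xmin,fval] = fminunc(@objWithGrad, zeros(4096,1), opts); so each iteration costs one function call instead of roughly one per variable.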
Emiliano Rosso on 9 Nov 2022
REMARKABLE !!!
I must take time...
Thanks!


Accepted Answer

Matt J on 7 Nov 2022
Edited: Matt J on 7 Nov 2022
If I had to guess, the GPU cannot achieve faster speeds because fminunc requires that you pull the results of GPU computation back to the CPU after every call to the objective function. This is because fminunc has to do intermediate computations of its own which must take place on the CPU. It is plausible that the overhead of the CPU-GPU transfers, given the simplicity of your objective, is dominating the computation time.
Why some of your timing experiments do not bear this out is unclear, but as Walter says, it is not clear that your timing methods are valid. tic and toc by themselves are not reliable unless you do something to synchronize the GPU with MATLAB. You should probably be using gputimeit instead, or a synchronization call in your CUDA code (cudaDeviceSynchronize, for example).
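A timing sketch along these lines (hypothetical inputs; lam, Nit and t stand in for whatever values the question uses):

```matlab
% tic/toc can return before queued GPU work has finished; gputimeit
% synchronizes the device around the call and averages several runs.
x = rand(4096,1);  y = rand(4096,1);                   % placeholder inputs
tCPU = timeit(@() tvd_sim2(x, y, lam, Nit, t));        % CPU version
tGPU = gputimeit(@() tvd_sim2_mex(x, y, lam, Nit, t)); % GPU MEX version
fprintf('CPU: %.3g s   GPU: %.3g s\n', tCPU, tGPU);
```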
  10 Comments
Matt J on 9 Nov 2022
Edited: Matt J on 9 Nov 2022
Yes. It would also be good to see the code that invokes fminunc, in particular which optimoptions are used.
Emiliano Rosso on 9 Nov 2022
Here is the code:
function [xden,fval] = tvd_sim(y, lam, Nit,t)
rng default % For reproducibility
[n,m]=size(y);
y0=y;
ObjectiveFunction = @(y) tvd_sim2(y,y0,lam,Nit,t);
options = optimoptions('fminunc','MaxIter',50,'ObjectiveLimit',0,'MaxFunEvals',...
Inf,'TolFun',1e-06,'UseParallel',false);
[xden,fval] = fminunc(ObjectiveFunction,y,options);
end
function [TVD] = tvd_sim2(x,y, lam, Nit,t) %#codegen
coder.gpu.kernelfun
[n,m]=size(y); % x Nx1 columnwise is denoised
diffxx=x(2:n,1)-x(1:n-1,1);
TVD=1/2.*sum(abs(((y-x)./((double(abs(y)-t>0).*y./t)+double(~(double(abs(y)-t>0)...
.*y./t))).^2).^2)) + lam.*sum(abs(diffxx(2:n-1,1)-diffxx(1:n-2,1)));
end
and this is the CPU timing with timeit:
for i=1:1000
tmean(i)=timeit(@()tvd_sim2(x,y, lam, Nit,t)); % CPU version
end
disp(mean(tmean));
1.1071e-04
compared to 0.0012 on the GPU:
ratio gpu/cpu = 0.0012 / 1.1071e-04 = 10.83
so the GPU is ~10x slower than the CPU.
That's what I'm discovering here: this was a big mistake!
So the only problem was tic/toc, which gave me a false impression, and the GPU really is just slower than the CPU here?
So I would have solved the mystery simply by dissolving an illusion?


More Answers (2)

Ram Kokku on 8 Nov 2022
Edited: Walter Roberson on 8 Nov 2022
As my colleague Hariprasad mentioned, GPU Coder is capable of:
  1. Allocating memory once and reusing it for subsequent calls. Use cfg.GpuConfig.EnableMemoryManager = true; to enable this.
  2. Taking a MATLAB gpuArray as input. You are doing this already, but it may not always help: for example, if GPU Coder chooses to keep the first use of a particular input on the CPU (for some reason), it incurs an additional copy.
Further:
  1. You may use gpucoder.profile ( https://www.mathworks.com/help/gpucoder/ref/gpucoder.profile.html ) to find the bottlenecks.
  2. Cell arrays and structures may not play well with GPU Coder with regard to copies; consider breaking the cell array elements into separate variables.
  3. Take a look at the generated code and see whether GPU Coder is able to parallelize the key piece of your code.
  4. If you are open to sharing your code, I can take a quick look.
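Point 1 above can be sketched as a build configuration (untested; the -args example types are assumptions about tvd_sim2's signature, not taken from the question):

```matlab
% Enable GPU Coder's memory manager so device buffers are pooled and
% reused across MEX calls instead of being allocated on every call.
cfg = coder.gpuConfig('mex');
cfg.GpuConfig.EnableMemoryManager = true;
% Regenerate the MEX with this config (example argument types only).
codegen -config cfg tvd_sim2 -args {rand(4096,1), rand(4096,1), 0.1, 100, 0.5}
```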
  5 Comments
Bruno Luong on 8 Nov 2022
Edited: Bruno Luong on 9 Nov 2022
sum(abs((diffyx./ycut.^2).^2))
abs has no effect (the argument is already squared, hence non-negative).
The two explicit casts to double are not necessary either, but they probably don't do any harm.
To summarize, I would rather code it like this (Warning: not tested):
ycut=((abs(y)-t)>0).*y./t; % edit: fixed missing parentheses
ycut=ycut+~ycut;
diffxx=x(2:n,1)-x(1:n-1,1);
diffxx=diffxx(2:n-1,1)-diffxx(1:n-2,1);
TVD=1/2.*sum(((y-x)./ycut.^2).^2) + lam.*sum(abs(diffxx));
Emiliano Rosso on 8 Nov 2022
Edited: Emiliano Rosso on 8 Nov 2022
"abs has no effect."
Yes, it's true, I'll modify it!
I've seen your code now; I'll try it and verify!
Thanks!



Bruno Luong on 8 Nov 2022
Edited: Bruno Luong on 8 Nov 2022
Just shooting in the dark here: did you set the UseParallel option of fminunc to true or false? It could be that the gradient computation is efficient on the CPU but not on the GPU depending on this option.
Also, your objective function is not very differentiable, with all the logicals and abs; fminunc may have a hard time optimizing it, and the runtime may be sensitive to numerical truncation, which differs between the GPU MEX and the CPU MATLAB versions.
BTW, the objective function is simple enough to compute an analytic gradient.
  11 Comments
Emiliano Rosso on 9 Nov 2022
Edited: Emiliano Rosso on 9 Nov 2022
The documentation section "Measure and Improve GPU Performance" suggests using tic/toc this way:
D = gpuDevice;
wait(D)
tic
[L,U] = lu(A);
wait(D)
toc
...if this can help...
Bruno Luong on 9 Nov 2022
"if this can help"
Certainly, I'll remember to call wait the next time I use tic/toc with GPU code.


Release: R2020b
