I was trying to solve an eigenvalue problem for two identical matrices computed in different ways. Surprisingly, one solution is six to ten times faster than the other (though the solutions are identical). Can anybody explain the reason?
3 views (last 30 days)
Show older comments
Avisek Mukherjee
on 27 Nov 2015
Edited: Walter Roberson
on 27 Nov 2015
I was trying to solve an eigenvalue problem for two identical matrices computed in different ways. Surprisingly, one solution is six to ten times faster than the other (though the solutions are identical). A sample problem is as follows:
clear all;
% CASE 1
rng default; A = rand(1000); M = A'*(eye(1000)*1.1)*A;
tic; eig(M); toc;
% CASE 2
clear all;
rng default; A = rand(1000); M = (A'*(eye(1000))*A)*1.1;
tic; eig(M); toc;
It shows the following result:
Elapsed time is 1.952121 seconds.
Elapsed time is 0.255389 seconds.
Theoretically, in both cases the matrix M is identical. I need to implement CASE 1 in one of my algorithms. Is there any way to improve the efficiency?
2 Comments
Torsten
on 27 Nov 2015
If lambda is an eigenvalue of M, then a*lambda is an eigenvalue of a*M ...
Best wishes
Torsten.
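Torsten's observation suggests a workaround for the question as posed: compute the eigenvalues of the fast CASE 2 product and scale them afterwards, since eig(a*M) equals a*eig(M) up to rounding. A sketch, reusing the matrices from the question:

```matlab
rng default;
A = rand(1000);

% CASE 1 spectrum, computed directly from the slow product
e1 = eig(A'*(eye(1000)*1.1)*A);

% CASE 2 product, with the scalar pulled out of eig()
e2 = 1.1 * eig(A'*A);

% the two (sorted) spectra should agree to rounding error
norm(sort(e1) - sort(e2)) / norm(e1)
```

Note that e1 may carry tiny spurious imaginary parts, since the CASE 1 product is not exactly symmetric in floating point; the accepted answer below explains why.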
Accepted Answer
John D'Errico
on 27 Nov 2015
Edited: John D'Errico
on 27 Nov 2015
Eig is probably faster when the matrix is symmetric.
A = rand(1000); M = A'*(eye(1000))*A*1.1;
norm(M - M.')
ans =
0
M = A'*(eye(1000)*1.1)*A;
norm(M - M.')
ans =
1.5208e-12
So, let's prove that by forcing this matrix to be TRULY symmetric.
tic,eig(M);toc
Elapsed time is 0.807391 seconds.
Q = (M + M.')/2;
tic,eig(Q);toc
Elapsed time is 0.115139 seconds.
Yes, I know that mathematically M is symmetric, and should be so in both cases. But you need to learn that floating point arithmetic, as computed on a modern CPU using tools like the BLAS, is NOT truly mathematics. Things that are true in exact mathematics are not always true in floating point.
The parser is probably smart enough to force a product like the first case to be perfectly symmetric, via clever use of the BLAS. In the second case, it was apparently not that smart, so it simply does the matrix multiplies. And when you fool with the order of operations in floating point arithmetic, there is no assurance that two results will be identical.
Welcome to the wonderful, wacky world of floating point arithmetic.
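Applied to the CASE 1 expression from the question, the symmetrization trick above is a one-line fix. A sketch: averaging M with its transpose removes the asymmetric rounding error, so eig can take its fast symmetric path.

```matlab
rng default;
A = rand(1000);
M = A'*(eye(1000)*1.1)*A;   % CASE 1 product; symmetric only up to rounding

Msym = (M + M.')/2;         % force exact symmetry
tic; e = eig(Msym); toc;    % now dispatches to the fast symmetric solver
```

Since M was already symmetric to about 1e-12, the averaging perturbs the eigenvalues by no more than that.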
0 Comments
More Answers (0)