MATLAB and LAPACK implementation of the SVD algorithm - not the same output?

I am using the LAPACK C library's singular value decomposition function
LAPACKE_cgesvd()
and trying to get the same result as MATLAB's svd() function for complex input.
But the outputs are not the same.
Here is one example:
in = [1+2i 2+4i 3+6i 4+8i;
2+3i 4+6i 6+9i 8+12i;
3+4i 6+8i 9+12i 12+16i;
4+5i 8+10i 12+15i 16+20i];
[U,S,V] = svd(in);
This gives
U =
-0.109108945117997 - 0.218217890235993i 0.909365643461770 + 0.318130360459734i 0.036386779676407 - 0.084386315028456i 0.052751601014692 + 0.033100021335301i
-0.218217890235992 - 0.327326835353989i -0.037388878899514 - 0.059582067281587i -0.116960851562535 + 0.558535127042627i -0.249442499023482 + 0.672627129227888i
-0.327326835353989 - 0.436435780471985i -0.117020313474447 + 0.048160693713279i -0.345516960640487 + 0.358878427669969i 0.391668855331377 - 0.533654905367364i
-0.436435780471985 - 0.545544725589981i -0.219753961051779 - 0.050933678992021i 0.293035065226691 - 0.576080183457497i -0.205991407911317 + 0.029126130759720i
S =
50.199601592044544 0 0 0
0 0.000000000000004 0 0
0 0 0.000000000000000 0
0 0 0 0.000000000000000
V =
-0.182574185835055 + 0.000000000000000i -0.749839996855303 + 0.000000000000000i 0.635929749093960 + 0.000000000000000i 0.000000000000000 + 0.000000000000000i
-0.365148371670110 + 0.000000000000000i -0.099644835021179 - 0.055294049648946i -0.222326993896782 - 0.065198538162360i -0.274861567584794 - 0.851146943050864i
-0.547722557505166 + 0.000000000000000i 0.582096115689033 + 0.184313498829819i 0.529113396624621 + 0.217328460541199i -0.000000000000000 + 0.000000000000001i
-0.730296743340221 + 0.000000000000000i -0.199289670042359 - 0.110588099297891i -0.444653987793565 - 0.130397076324720i 0.137430783792397 + 0.425573471525431i
When I run the LAPACK C code, I get:
U
[0][0] = -0.109108955 -0.218217865i [0][1] = +0.719114125 -0.452113479i [0][2] = +0.022277016 +0.064515218i [0][3] = -0.409647375 +0.215580061i
[1][0] = -0.218217909 -0.327326864i [1][1] = +0.084657431 -0.228100479i [1][2] = +0.500972092 -0.310676992i [1][3] = +0.300640881 -0.590053499i
[2][0] = -0.327326834 -0.436435789i [2][1] = +0.109215073 +0.273434073i [2][2] = -0.344847411 -0.481100261i [2][3] = +0.325270653 +0.399385184i
[3][0] = -0.436435819 -0.545544684i [3][1] = -0.340743810 +0.128338531i [3][2] = +0.002677787 +0.545402348i [3][3] = -0.279374570 -0.061697662i
S
[0] = 50.1995964 [1] = 1.69897112e-06 [2] = 1.54533879e-07 [3] = 3.96336745e-14
Vtransposed
[0][0] = -0.182574213 -0.000000000i [0][1] = -0.365148425 -0.000000000i [0][2] = -0.547722638 +0.000000028i [0][3] = -0.730296850 -0.000000000i
[1][0] = -0.219238073 -0.000000000i [1][1] = +0.210008949 +0.144917369i [1][2] = -0.626949847 -0.483057886i [1][3] = +0.420017421 +0.289834768i
[2][0] = -0.958436847 -0.000000000i [2][1] = +0.021519713 -0.033149216i [2][2] = +0.247748464 +0.110497266i [2][3] = +0.043038044 -0.066298351i
[3][0] = -0.000000639 -0.000000000i [3][1] = -0.892759502 +0.054592617i [3][2] = +0.000000003 -0.000000026i [3][3] = +0.446379900 -0.027296286i
U: Seems like the first column of U is good! But the rest is totally different.
S: All good, if we treat the small values in the C output as zero.
Vt: Only the first row of Vt (i.e. the first column of V) seems to match; the rest doesn't match MATLAB's output.
What is the reason for this?
Am I doing something wrong?
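One way to check whether either result is actually wrong is to verify the defining properties of an SVD directly. The sketch below is only an illustration; Ul, Sl and Vlt are hypothetical names for the LAPACK output typed into MATLAB by hand, not variables from either library.
% Sanity checks for an SVD: U and V should be unitary and U*S*V'
% should reproduce the input (up to round-off).
in = [1+2i 2+4i 3+6i 4+8i;
      2+3i 4+6i 6+9i 8+12i;
      3+4i 6+8i 9+12i 12+16i;
      4+5i 8+10i 12+15i 16+20i];
[U,S,V] = svd(in);
norm(U'*U - eye(4))     % ~0: MATLAB's U is unitary
norm(V'*V - eye(4))     % ~0: MATLAB's V is unitary
norm(U*S*V' - in)       % ~0: MATLAB's factors reproduce 'in'
% Hypothetical variables for the LAPACK result, entered by hand:
%   Ul  - left singular vectors from LAPACKE_cgesvd
%   Sl  - diag() of the returned singular values
%   Vlt - the returned V^H (cgesvd returns V conjugate-transposed)
% norm(Ul*Sl*Vlt - in)  % should also be ~0 (at single precision)
If both reconstructions come back essentially zero, neither library is wrong; the two factorizations simply use different, equally valid singular vectors. (Note that S above has only one singular value that is not numerically zero.)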
8 Comments
Danijel Domazet on 22 Feb 2022
Edited: Danijel Domazet on 22 Feb 2022
Let's put it this way: is there an input for which LAPACK and MATLAB would give the same result?
Or, what's the best try?
Danijel Domazet on 22 Feb 2022
Edited: Danijel Domazet on 22 Feb 2022
OK, I used rand(4,4) + i*rand(4,4), and the result is much better:
in = [0.814723686393179 + 0.421761282626275i, 0.63235924622541 + 0.655740699156587i, 0.957506835434298 + 0.678735154857773i, 0.957166948242946 + 0.655477890177557i;
      0.905791937075619 + 0.915735525189067i, 0.0975404049994095 + 0.0357116785741896i, 0.964888535199277 + 0.757740130578333i, 0.485375648722841 + 0.171186687811562i;
      0.126986816293506 + 0.792207329559554i, 0.278498218867048 + 0.849129305868777i, 0.157613081677548 + 0.743132468124916i, 0.8002804688888 + 0.706046088019609i;
      0.913375856139019 + 0.959492426392903i, 0.546881519204984 + 0.933993247757551i, 0.970592781760616 + 0.392227019534168i, 0.141886338627215 + 0.0318328463774207i];
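(For reproducibility: these look like exactly the values rand returns in a fresh MATLAB session, so something like the following should regenerate the same test matrix; the rng('default') call is my assumption, not something stated above.)
rng('default');                   % assumed generator state for the values above
in = rand(4,4) + 1i*rand(4,4);
[U,S,V] = svd(in);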
Result is:
U =
  -0.42312740984401409 - 0.35959265421087588i   0.13117268904541174 + 0.44787859162628668i   0.11250427785751983 - 0.24706940551578091i  -0.38438224614546779 - 0.50239884161662351i
  -0.33759289967935291 - 0.32429926333442366i  -0.06936700508106322 - 0.47809332716460873i   0.62193638822998987 - 0.24622726102506251i   0.02217454979080485 + 0.31551793178629584i
  -0.12989079970942036 - 0.439785223293499i     0.30358385553377837 + 0.4177223468250193i   -0.25124960829083876 - 0.062014277681078311i  0.42385988499316413 + 0.52576884915022781i
  -0.38050005142283305 - 0.34271619213843124i  -0.47813520137109283 - 0.23139813043816407i  -0.46987090655301544 + 0.43716810967614084i  -0.19832860352154408 + 0.066167190021447483i
The LAPACK gives:
[0][0] = -0.423127532 -0.359592706i
[0][1] = +0.131172717 +0.447878778i
[0][2] = -0.112504244 +0.247069359i
[0][3] = +0.384382606 +0.502398610i
[1][0] = -0.337592900 -0.324299276i
[1][1] = -0.069367193 -0.478093356i
[1][2] = -0.621936560 +0.246227175i
[1][3] = -0.022174668 -0.315517843i
[2][0] = -0.129890800 -0.439785272i
[2][1] = +0.303584039 +0.417722374i
[2][2] = +0.251249373 +0.062014218i
[2][3] = -0.423860222 -0.525768697i
[3][0] = -0.380500108 -0.342716217i
[3][1] = -0.478135169 -0.231398359i
[3][2] = +0.469871104 -0.437167943i
[3][3] = +0.198328614 -0.066167258i
The results are OK, or very close; it's only the signs of the last two rows that are different. Why would the signs be different? I would hope it will not matter in my further calculations...


Answers (1)

David Goodmanson on 23 Feb 2022
Edited: David Goodmanson on 23 Feb 2022
Hi Danijel,
actually it's the columns that have different signs. That's for the following reason: Suppose the svd of the matrix 'in' is
in = USV'
and let
A =
[1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 -1]
Then, since A*A = I (4x4 identity)
in = U(AA)S(AA)V' = (UA)(ASA)(AV').
Since S is diagonal, ASA = S, so
in = (UA)S(AV').
So if U is a solution, so is UA. But UA just multiplies the 4th column of U by a minus sign. AV' is the accompanying solution, and it multiplies the 4th row of V' by a minus sign, which means the 4th column of V gets a minus sign. You can obviously do the same for any column of both U and V, meaning that the signs of the columns are arbitrary. In your case the 3rd and 4th columns of U have a minus sign compared to LAPACK, so if you take a look at V, its 3rd and 4th columns should also have a minus sign compared to LAPACK. Once the signs are set, you just have to stay consistent for the rest of the calculation.
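A minimal numerical sketch of this argument (my own illustration, not part of the original answer): flipping the signs of the same columns in both U and V leaves the product U*S*V' unchanged.
in = rand(4,4) + 1i*rand(4,4);   % any complex test matrix
[U,S,V] = svd(in);
A = diag([1 1 -1 -1]);           % flips the signs of columns 3 and 4
U2 = U*A;                        % U with columns 3 and 4 negated
V2 = V*A;                        % V with columns 3 and 4 negated, so V2' = A*V'
norm(U*S*V'   - in)              % ~1e-15
norm(U2*S*V2' - in)              % ~1e-15: (UA)S(AV') is an equally valid SVD
norm(U2'*U2 - eye(4))            % ~1e-15: UA is still unitary
So MATLAB and LAPACK are both correct here; they just made different, equally valid sign choices for those columns.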
