# Obtain eigs from matrix and partially known eigenvector

Jiali on 5 Jul 2023
Commented: Jiali on 7 Jul 2023
The issue is that I have an input square matrix P. Some diagonal elements of P are very large imaginary numbers, which correspond to the zero positions in the eigenvectors. So I remove those rows and columns of P, since no other elements are related to the pre-defined value (0). Then I solve for the eigenvalues and eigenvectors of the reduced P. However, the eigenvectors of the original P and the reduced P are different (V2 ~= V). Why does this happen? I am confused; please give me some suggestions.
Thank you for your help, Jennifer.
```matlab
N = length(P);
ind = 10:51;                % rows/columns with the huge imaginary diagonal entries
P2 = P;
P2(ind,:) = [];             % delete those rows ...
P2(:,ind) = [];             % ... and columns
[V, D]     = eigs(P, 20);   % eigenpairs of the original matrix
[Vtmp, D2] = eigs(P2, 20);  % eigenpairs of the reduced matrix
jj = [1:9, 52:N];           % positions kept in the reduced matrix
V2 = zeros(N, 20);
V2(jj,:) = Vtmp;            % re-embed reduced eigenvectors at full size
```

### Accepted Answer

Christine Tobler on 5 Jul 2023
Hi Jennifer,
You are right that matrix P here is a block-diagonal matrix with three blocks:
```matlab
[A 0 0;
 0 B 0;
 0 0 C]
```
And for such a matrix, the eigenvalues of the whole matrix are the union of the eigenvalues of matrices A, B and C. The eigenvectors of the whole matrix can also be computed from the eigenvectors of the submatrices, like you do in the code above.
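As a minimal sketch of this union property (the blocks A, B, C below are tiny toy matrices invented for illustration, not the blocks of your P):

```matlab
% The spectrum of a block-diagonal matrix is the union of the blocks' spectra.
A = [2 1; 0 3];           % upper triangular: eigenvalues 2 and 3
B = [5 0; 0 7];           % eigenvalues 5 and 7
C = -1;                   % eigenvalue -1
P = blkdiag(A, B, C);     % same zero pattern as [A 0 0; 0 B 0; 0 0 C]
sort(eig(P))              % -1, 2, 3, 5, 7: union of eig(A), eig(B), eig(C)
```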
Two issues to keep in mind:
1) The eigs call returns the 20 eigenvalues with largest absolute value. These are likely the ones in matrix B, since they have a very large imaginary part. That would mean that D and D2 are not the same, and you would have to call eigs with an option that returns the eigenvalues in A and C instead.
2) The eigenvectors aren't uniquely defined: each eigenvector can be multiplied by any complex number of absolute value 1. The easiest way to check a matrix of eigenvectors is to compute norm(A*V - V*D), to see if they satisfy their definition to sufficient accuracy.
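Both points can be checked along these lines, with P as in the question. The 'smallestabs' sigma is just one possible choice, assuming the eigenvalues you want from A and C are the small-magnitude ones:

```matlab
% Target the eigenvalues of smallest magnitude instead of the default
% 'largestabs', so the huge imaginary eigenvalues in block B are skipped.
[V, D] = eigs(P, 20, 'smallestabs');

% Eigenvectors are only defined up to a unit-modulus scalar, so compare the
% residual of the eigenvalue equation rather than the vectors themselves:
res = norm(P*V - V*D);   % should be small relative to norm(P)
```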
##### 3 Comments
Christine Tobler on 6 Jul 2023
Hi Jennifer,
The reason is likely that the matrices A and C both have the same eigenvalues. Two eigenvectors v1 and v2 belonging to different eigenvalues can't be recombined to form an eigenvector for either eigenvalue. But if the eigenvalues are equal, any linear combination of v1 and v2 is just as valid a result for eigs to return.
Here's an example:
```matlab
n = 4;
Adifferent = diag(repelem([-2 10 -2.2], [n 4*n n]));
Arepeat    = diag(repelem([-2 10 -2],   [n 4*n n]));
[Ud, Dd] = eigs(Adifferent, 2*n, -3);
[Ur, Dr] = eigs(Arepeat,    2*n, -3);
diag(Dd)'
% ans = 1×8
%   -2.2000  -2.2000  -2.2000  -2.2000  -2.0000  -2.0000  -2.0000  -2.0000
diag(Dr)'
% ans = 1×8
%   -2.0000  -2.0000  -2.0000  -2.0000  -2.0000  -2.0000  -2.0000  -2.0000
tiledlayout(1, 2)
nexttile
spy(abs(Ud) > 1e-8)
nexttile
spy(abs(Ur) > 1e-8)
```
So getting separate blocks of eigenvectors for a matrix whose blocks have the same eigenvalues is a valid solution, but not the unique one; eigs will simply converge to some solution, without taking the block structure into account.
One thing that might contribute to the confusion here is that the eig function for dense matrices will notice a block diagonal structure and is likely to return eigenvectors in blocks for that case (although there is no guarantee of this). The eigs function is more generic, taking just a function handle that applies the matrix to a vector, which is the reason it acts differently.
A last source of confusion is that when eigs is called asking for a relatively large proportion of the eigenvalues of the matrix, or on a very small matrix, it falls back to calling eig, since that will be more efficient in those cases. You can call eigs with option Display=true to check on this:
```matlab
eigs(speye(1000), 500, 'largestabs', Display=true);
```
```
=== Simple eigenvalue problem A*x = lambda*x ===
Computing 500 eigenvalues of type 'largestabs'.
Parameters passed to Krylov-Schur method:
  Maximum number of iterations: 300
  Tolerance: 1e-14
  Subspace Dimension: 1000
Compute EIGS by calling EIG, because subspace dimension is equal to problem size.
```
Jiali on 7 Jul 2023
Hi Christine,
Awesome! Perfect explanations to clear my confusion. Thanks a lot.
Regards,
Jiali


### More Answers (1)

Animesh on 5 Jul 2023
Hey @Jiali
The issue you're experiencing is likely due to the removal of diagonal elements from the matrix P. When you remove the values in the diagonal, you are essentially modifying the matrix P by deleting certain rows and columns. This modification can affect the eigenvectors and eigenvalues of the matrix.
Eigenvectors are determined by the relationships between the elements of the matrix. When you remove diagonal elements, you are altering these relationships and, consequently, the eigenvectors can change. Even though the deleted positions correspond to zero values, other elements in the matrix may still have an influence on the eigenvectors.
To address this issue, you can consider a different approach: instead of removing the diagonal elements entirely, set them to a small non-zero value, such as a small real or imaginary number. By doing so, you preserve the size and structure of the matrix while minimizing the impact on the eigenvectors.
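A minimal sketch of that idea, with P and ind as in the question (the placeholder value 1e-8 is an arbitrary choice, not a recommendation):

```matlab
% Instead of deleting rows/columns, overwrite the huge diagonal entries with a
% small placeholder so the matrix size and index positions are preserved.
P3  = P;
idx = sub2ind(size(P), ind, ind);   % linear indices of the affected diagonal
P3(idx) = 1e-8;                     % small value instead of removal
[V3, D3] = eigs(P3, 20);
```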
