Using ARPACK to solve an eigenvalue problem, but getting inconsistent results with MATLAB

Submitted by 一曲冷凌霜 on 2019-12-11 07:43:23

Question


I'm new to ARPACK, I downloaded a script like the following

import time
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.linalg import eigs
np.set_printoptions(suppress=True)

n=30
rstart=0
rend=n

A=np.zeros(shape=(n,n))

# first row
if rstart == 0:
    A[0, :2] = [2, -1]
    rstart += 1
# last row
if rend == n:
    A[n-1, -2:] = [-1, 2]
    rend -= 1
# other rows
for i in range(rstart, rend):
    A[i, i-1:i+2] = [-1, 2, -1]

A[0,8]=30

start_time = time.time()

evals_large, evecs_large = eigs(A, 3, sigma=3.6766133, which='LM')
print(evals_large)

end_time=time.time()-start_time
print(" Elapsed time: %12f seconds " % end_time)

It solves a very simple eigenvalue problem (the matrix A here is not symmetric, since I set A[0,8] to 30). According to the ARPACK results, the 3 eigenvalues closest to 3.6766133 (sigma=3.6766133 in the call) are

[ 3.68402411+0.j  3.82005897+0.j  3.51120293+0.j]

Then I go to MATLAB and solve the same eigenvalue problem; the results are

4.144524409923138 + 0.000000000000000i 
3.642801014184622 + 0.497479798520641i
3.642801014184622 - 0.497479798520641i
2.372392770347609 + 0.762183281789166i
2.372392770347609 - 0.762183281789166i
3.979221766266502 + 0.000000000000000i
3.918541441830947 + 0.000000000000000i
3.820058967057387 + 0.000000000000000i 
3.684024113506185 + 0.000000000000000i
3.511202932803536 + 0.000000000000000i
3.307439963195127 + 0.000000000000000i
3.080265978640102 + 0.000000000000000i
2.832849552917550 + 0.000000000000000i
2.565972630556613 + 0.000000000000000i
2.283744793210587 + 0.000000000000000i
1.996972474451519 + 0.000000000000000i
0.927737801889518 + 0.670252740725955i
0.927737801889518 - 0.670252740725955i
1.714561796881689 + 0.000000000000000i
-0.015193770830045 + 0.264703483268519i
-0.015193770830045 - 0.264703483268519i
1.438919271663752 + 0.000000000000000i
0.019951101383019 + 0.000000000000000i
0.080534338862828 + 0.000000000000000i 
0.181591307101504 + 0.000000000000000i
0.318955140475174 + 0.000000000000000i
0.488231021129767 + 0.000000000000000i
0.688030188040126 + 0.000000000000000i
1.171318650526539 + 0.000000000000000i
0.917612528393044 + 0.000000000000000i

Apparently, the second mode 3.642801014184622 + 0.497479798520641i is closer to sigma=3.6766133, but ARPACK didn't pick it out.

What could be the problem? Could you help me figure this out? Thanks a lot.


Answer 1:


A few things first about the MATLAB functions:

  • Eigenvalues returned by eig are NOT sorted. In [V,D] = eig(A) we are only guaranteed that the i-th column of V is a right eigenvector corresponding to the eigenvalue D(i,i). By contrast, svd returns singular values sorted in decreasing order.

  • d = eigs(A,k) returns the k largest-magnitude eigenvalues. However, it is intended for large sparse matrices and is generally not a substitute for:

    d = eig(full(A));
    d = sort(d, 'descend');
    d = d(1:k);
    

    (eigs is based on ARPACK, while eig uses LAPACK routines).

  • There is no natural ordering of complex numbers. The convention is that the sort function orders complex elements first by magnitude (i.e. abs(x)), then by phase angle on the interval [-pi,pi] (i.e. angle(x)) when magnitudes are equal.
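As a side note, NumPy's np.sort uses a different convention for complex numbers (lexicographic by real part, then imaginary part). A minimal sketch of MATLAB's magnitude-then-angle ordering in NumPy, using np.lexsort:

```python
import numpy as np

# MATLAB's sort() orders complex values by magnitude, then by phase
# angle when magnitudes tie; np.lexsort takes the primary key last.
z = np.array([3 - 4j, 5 + 0j, 3 + 4j, -5 + 0j, 1 + 1j])
order = np.lexsort((np.angle(z), np.abs(z)))
z_matlab_sorted = z[order]
print(z_matlab_sorted)  # [1+1j, 3-4j, 5+0j, 3+4j, -5+0j]
```

Here the four values of magnitude 5 are tie-broken by their phase angle, which reproduces what MATLAB's sort would return.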


MATLAB

With that in mind, consider the following MATLAB code:

% create the same banded matrix you're using
n = 30;
A = spdiags(ones(n,1)*[-1,2,-1], [-1 0 1], n, n);
A(1,9) = 30;
%A = full(A);

% k eigenvalues closest to sigma
k = 10; sigma = 3.6766133;
D = eigs(A, k, sigma);

% lets check they are indeed sorted by distance to sigma
dist = abs(D-sigma);
issorted(dist)

I get:

>> D
D =
  3.684024113506185 + 0.000000000000000i
  3.820058967057386 + 0.000000000000000i
  3.511202932803535 + 0.000000000000000i
  3.918541441830945 + 0.000000000000000i
  3.979221766266508 + 0.000000000000000i
  3.307439963195125 + 0.000000000000000i
  4.144524409923134 + 0.000000000000000i
  3.642801014184618 + 0.497479798520640i
  3.642801014184618 - 0.497479798520640i
  3.080265978640096 + 0.000000000000000i

>> dist
dist =
   0.007410813506185
   0.143445667057386
   0.165410367196465
   0.241928141830945
   0.302608466266508
   0.369173336804875
   0.467911109923134
   0.498627536953383
   0.498627536953383
   0.596347321359904

You can try to get similar results using dense eig:

% closest k eigenvalues to sigma
ev = eig(full(A));
[~,idx] = sort(abs(ev - sigma));  % sort by distance to sigma
ev = ev(idx(1:k))

% compare against eigs
norm(D - ev)

The difference is acceptably small (close to machine epsilon):

>> norm(ev-D)
ans =
     1.257079405021441e-14

Python

Similarly in Python:

import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import eigs

# create banded matrix
n = 30
A = spdiags((np.ones((n,1))*[-1,2,-1]).T, [-1,0,1], n, n).toarray()
A[0,8] = 30

# EIGS: k closest eigenvalues to sigma
k = 10
sigma = 3.6766133
D = eigs(A, k, sigma=sigma, which='LM', return_eigenvectors=False)
D = D[::-1]
for x in D:
    print('{:.16f}'.format(x))

# EIG
ev,_ = np.linalg.eig(A)
idx = np.argsort(np.abs(ev - sigma))
ev = ev[idx[:k]]
for x in ev:
    print('{:.16f}'.format(x))

with similar results:

# EIGS
3.6840241135061853+0.0000000000000000j
3.8200589670573866+0.0000000000000000j
3.5112029328035343+0.0000000000000000j
3.9185414418309441+0.0000000000000000j
3.9792217662665070+0.0000000000000000j
3.3074399631951246+0.0000000000000000j
4.1445244099231351+0.0000000000000000j
3.6428010141846170+0.4974797985206380j
3.6428010141846170-0.4974797985206380j
3.0802659786400950+0.0000000000000000j

# EIG
3.6840241135061880+0.0000000000000000j
3.8200589670573906+0.0000000000000000j
3.5112029328035339+0.0000000000000000j
3.9185414418309468+0.0000000000000000j
3.9792217662665008+0.0000000000000000j
3.3074399631951201+0.0000000000000000j
4.1445244099231271+0.0000000000000000j
3.6428010141846201+0.4974797985206384j
3.6428010141846201-0.4974797985206384j
3.0802659786400906+0.0000000000000000j
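The same consistency check done above for MATLAB (norm(D - ev)) can be repeated in SciPy. Since the ordering of the conjugate pair may differ between the two routines, the sketch below compares the sorted distances to sigma instead of the raw vectors:

```python
import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import eigs

# rebuild the banded matrix from the answer above
n = 30
A = spdiags((np.ones((n, 1)) * [-1, 2, -1]).T, [-1, 0, 1], n, n).toarray()
A[0, 8] = 30

k, sigma = 10, 3.6766133

# shift-invert ARPACK result, reversed so the closest comes first
D = eigs(A, k, sigma=sigma, which='LM', return_eigenvectors=False)[::-1]

# dense LAPACK result, sorted by distance to sigma
ev = np.linalg.eig(A)[0]
ev = ev[np.argsort(np.abs(ev - sigma))][:k]

# compare the sorted distances; this is order-agnostic for the pair
print(np.linalg.norm(np.sort(np.abs(D - sigma)) - np.sort(np.abs(ev - sigma))))
```

As in MATLAB, the discrepancy should be near machine epsilon.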



Answer 2:


The results are consistent between SciPy and MATLAB if you use the eigs function:

>> format long
>> A = diag(-ones(n-1,1),-1) + diag(2*ones(n,1)) + diag(-ones(n-1,1),+1);A(1,9)=30;
>> eigs(A,3,3.6766133)'
ans =
   3.684024113506185   3.820058967057386   3.511202932803534

As for why the true closest eigenvalue isn't selected, I think that has to do with convergence to complex eigenvalues of real matrices under a real shift. I don't know exactly how ARPACK computes its iterates, but I remember being told that for a real A with a real shift σ, plain inverse power iteration cannot converge to a complex conjugate pair, since the two members of the pair have the same value of 1/|λ - σ|, i.e. their ratio in absolute value is 1. Since ARPACK produces the complex pair as the 8th and 9th eigenvalues (the +/- ordering is arbitrary), I'm guessing there is some fix for this that I've forgotten about or never knew:

>> ev = eigs(A,9,3.6766133);ev(8:9)
ans =
  3.642801014184617 - 0.497479798520639i
  3.642801014184617 + 0.497479798520639i
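To see why a real shift cannot separate a conjugate pair, note that |λ - σ| = |conj(λ) - σ| whenever σ is real, so the shift-inverted magnitudes 1/|λ - σ| tie exactly. A quick numeric check using the pair computed above:

```python
import numpy as np

# the real shift and one member of the conjugate pair from above
sigma = 3.6766133
lam = 3.642801014184622 + 0.497479798520641j

# shift-inverted magnitudes for the pair are identical, so plain
# inverse power iteration has no dominant eigenvalue to converge to
r1 = 1.0 / abs(lam - sigma)
r2 = 1.0 / abs(np.conj(lam) - sigma)
print(r1, r2)  # equal: the ratio in absolute value is exactly 1
```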

I'm not sure if there is a general workaround for this other than guessing a complex part for the shift, or just grabbing extra eigenvalues until the conjugate pair falls into the ball of convergence of the ARPACK method.
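One way to sketch the complex-shift idea in SciPy: the imaginary part 0.5j below is just a guess placed near the conjugate pair, and the matrix is cast to complex so ARPACK runs in ordinary complex shift-invert mode (avoiding SciPy's special real-A/complex-sigma modes):

```python
import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import eigs

# rebuild the matrix and cast to complex
n = 30
A = spdiags((np.ones((n, 1)) * [-1, 2, -1]).T, [-1, 0, 1], n, n).toarray()
A[0, 8] = 30
A = A.astype(complex)

# a complex shift near the pair 3.6428 +/- 0.4975i
sigma = 3.6766133 + 0.5j
D = eigs(A, 3, sigma=sigma, which='LM', return_eigenvectors=False)

# with this shift, the complex eigenvalue is the nearest one
closest = D[np.argmin(np.abs(D - sigma))]
print(closest)  # 3.6428... + 0.4975...j
```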



Source: https://stackoverflow.com/questions/26520447/using-arpack-solving-eigenvalueproblem-but-getting-inconsistent-results-with-ma
