All about Eigenvalues and Eigenvectors Using the Power Method

Recently I used the power method to find the largest eigenvalue and the corresponding eigenvector, in both Matlab and Python.

Matlab is obviously the easier choice for matrix operations. Python represents a matrix as a list of lists, so it gets a little complicated if you write your own matrix-multiplication function, and much slower too. Still, it is a good way to practice your thinking and to make sure you have all the i, j, k indices right.

Here is what I wrote in Matlab. The convergence rate of the power method depends on |λ2/λ1| (where λ2 is the second-largest eigenvalue and λ1 the largest), so it can take quite a few iterations:
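To see that rate concretely, here is a small toy demonstration (my own, not part of the scripts in this post): on a diagonal matrix with eigenvalues 4 and 2, the error in the dominant eigenvector shrinks by a factor of about λ2/λ1 = 1/2 per iteration:

```python
# Toy example: power-method error shrinks roughly like |lambda2/lambda1|^k.
import numpy as np

A = np.diag([4.0, 2.0])        # eigenvalues 4 and 2, so lambda2/lambda1 = 0.5
z = np.array([1.0, 1.0])       # start with a component in both eigendirections

errs = []
for k in range(20):
    z = A @ z
    z = z / np.linalg.norm(z)
    errs.append(abs(z[1]))     # distance from the dominant eigenvector (1, 0)

ratios = [errs[k + 1] / errs[k] for k in range(5, 15)]
print(ratios)                  # each ratio is close to lambda2/lambda1 = 0.5
```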

The example is from classical oscillation: we can find the eigenfrequency and visualize the eigenvector as the shape of the vibration. Here is the code:

To make it converge faster, I added Rayleigh quotient iteration after a number of plain power-method steps; you should adjust that number to your needs. When the matrix is not large and the required accuracy is not high, the power method generally gives stable, good results. Integrating the Rayleigh quotient into the algorithm can save iterations, but it may become unstable and require multiple trials, or a relatively long run of plain power iterations first. Computing inv(A − λI) is relatively expensive, and when high accuracy of the eigenvalue is required, (A − λI) becomes nearly singular (since Ax = λx), which can also cause trouble. But in general, it works great!

n=input('please enter n: ');
h=1/n;
a=2*ones(1,n);
a(n)=1;
b=ones(1,n-1);
K=diag(a)-diag(b,1)-diag(b,-1);   % stiffness matrix
M=2*diag(a)+diag(b,1)+diag(b,-1); % mass matrix

K=(1/h)*K;
M=(h/6)*M;
A=M\K;          % backslash is cheaper and more accurate than inv(M)*K
z=ones(n,1);

i=0;
r=1;
I=eye(n);
err=1;          % 'error' would shadow Matlab's built-in error function

% plain power iterations first
for j=1:5000
    z=A*z;
    z=z/norm(z);
end

% then Rayleigh quotient iteration to refine
while abs(err)>1e-4
    r=(z'*A*z)/(z'*z);
    z=(A-r*I)\z;    % solve the system instead of forming inv(A-r*I)
    z=z/norm(z);
    i=i+1;
    newr=(z'*A*z)/(z'*z);
    err=newr-r;
end
r
disp([num2str(i),' iterations'])
abc=eig(A);
max(abc)        % eig does not sort, so compare against the maximum
plot(z)
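The same refine-with-Rayleigh-quotient idea can be sketched in Python as well (my own sketch, not a line-by-line port of the Matlab above; the `rayleigh_iteration` name and the 2×2 test matrix are made up for illustration). `np.linalg.solve` replaces the explicit inverse, which is cheaper and less fragile when A − rI gets nearly singular:

```python
# Sketch: a few plain power steps, then Rayleigh quotient iteration to refine.
import numpy as np

def rayleigh_iteration(A, power_steps=5, tol=1e-8, max_steps=100):
    n = A.shape[0]
    z = np.ones(n)
    for _ in range(power_steps):              # plain power iterations first
        z = A @ z
        z = z / np.linalg.norm(z)
    r = (z @ A @ z) / (z @ z)                 # Rayleigh quotient
    for _ in range(max_steps):
        # shifted inverse step: solve (A - r*I) z_new = z instead of inverting
        z = np.linalg.solve(A - r * np.eye(n), z)
        z = z / np.linalg.norm(z)
        new_r = (z @ A @ z) / (z @ z)
        if abs(new_r - r) < tol:
            break
        r = new_r
    return new_r, z

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = rayleigh_iteration(A)
print(lam)   # largest eigenvalue of A, (5 + sqrt(5))/2, approx. 3.618
```

Note the `max_steps` cap: as the post says, once r is very close to an eigenvalue the shifted matrix is nearly singular, so the loop should always have an exit.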
In Python it requires more coding and more structure. On paper it is simple: a few lines and the power method is there, but telling the computer to get the right thing done costs a bit more. I wrote two versions: one basically uses my own functions to compute the matrix multiplication and the vector 2-norm; the other calls some useful numpy routines:
numpy.linalg.norm
numpy.linalg.eig
numpy.dot
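As a quick sanity check (my own toy example, not part of the script below), the three routines behave like this on a small diagonal matrix:

```python
# Exercising np.linalg.norm, np.dot and np.linalg.eig on a hand-checkable case.
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])
v = np.array([1.0, 1.0])

print(np.linalg.norm(v))   # 2-norm of v: sqrt(2), approx. 1.41421
print(np.dot(A, v))        # matrix-vector product: [2. 3.]
w, V = np.linalg.eig(A)    # eigenvalues (unsorted) and eigenvectors
print(np.sort(w))          # [2. 3.]
```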
Below is an example of doing it in Python. It might not be the best way, but it is structured in detail and a lot of basic commands are included.
#compute the largest eigenvalue and corresponding eigenvector of matrix A using the power method
#written by Peili Guo (contact@guopeili.com)


import numpy as np
import timeit # for timing code snippets
import math # for math functions

#------------------------------------------function below-------------------------------------------------
def mm_multi(A, B):
    '''Compute the matrix product A*B.
       Example: A=[[1,2],[3,4]] and B=[[1],[1]] returns [[3],[7]].
       Input: A matrix, B matrix (a column vector is an n-by-1 matrix).'''
    if len(A[0]) != len(B):
        raise ValueError('inner dimensions do not match')
    C = [[0 for x in range(len(B[0]))] for y in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] += A[i][k] * B[k][j]
    return C
#------------------------------------------function above------------------------------------------------- 	   
#-----------------------------------i am powerful dash line-----------------------------------------------
#------------------------------------------function below-------------------------------------------------    
def twonorm(a):
    '''Compute the 2-norm of a given column vector.
       Example: a=[[1],[1]] returns 1.414...'''
    total = 0.0   # avoid shadowing the built-in sum
    for i in range(len(a)):
        for j in range(len(a[0])):
            total += a[i][j] * a[i][j]
    return math.sqrt(total)
#------------------------------------------function above------------------------------------------------- 		
#-----------------------------------i am powerful dash line-----------------------------------------------
#------------------------------------------function below-------------------------------------------------
def findev(M, tol, totalrun):
    '''Find the eigenvector corresponding to the largest eigenvalue of a
       given matrix with the power method: iterate again and again until
       the error margin is satisfied.
       Input: M is a matrix, tol is the error margin, totalrun is the
       maximum number of iterations inside the while loop.
       Uses the hand-written mm_multi and twonorm functions.'''
    i = 0
    N = len(M[0])
    zold = np.random.random((N, 1))
    err = 10
    while err > tol:
        znew = np.array(mm_multi(M, zold))  # back to an array so we can divide
        znew = znew / np.max(znew)          # normalize by the largest entry
        err = twonorm(znew - zold)
        zold = znew
        i = i + 1
        if i > totalrun:
            raise RuntimeError('no convergence after %d iterations' % totalrun)
    return znew

 	 	
#------------------------------------------function above------------------------------------------------- 	
#-----------------------------------i am powerful dash line----------------------------------------------- 	 	
#------------------------------------------function below-------------------------------------------------
def findeveasy(M, tol, totalrun):
    '''Same as findev, but using numpy routines for the multiplication
       and the norm.'''
    i = 0
    N = len(M[0])
    zold = np.random.random((N, 1))
    err = 10
    while err > tol:
        znew = np.dot(M, zold)
        znew = znew / np.max(znew)          # normalize by the largest entry
        err = np.linalg.norm(znew - zold)
        zold = znew
        i = i + 1
        if i > totalrun:
            raise RuntimeError('no convergence after %d iterations' % totalrun)
    return znew

 	 	
#------------------------------------------function above------------------------------------------------- 	

#here we start by initial yourmatrix and z0 for iteration

#define size 
totalrun=1000 #you can change this totalrun number as you like, but a large N can make it slow
N = int(input('what size is your matrix? '))

#create a random matrix; you can also use your own by setting yourmatrix = the matrix you want to compute
yourmatrix=np.random.random((N,N))

#error tolerance
errtol = 10**(-6)

####################use method 1###########################################################################
#we call method number 1 to find largest eigenvector and eigenvalue and time the whole process
#in this method 1, a lot of computing is using function written above 

start_time = timeit.default_timer()

# call the function findev
 	
ourev=findev(yourmatrix,errtol,totalrun)
#get the eigenvector now compute eigen value
#we compute lamda, using basic multiplication that lamda=transpose(y)*A*y/transpose(y)*y
znewtran=np.zeros((1,N))

for i in range(len(ourev)):
	znewtran[0][i]=ourev[i][0]
			
Ay=mm_multi(znewtran,yourmatrix)
ytAy=mm_multi(Ay,ourev)
yty=mm_multi(znewtran,ourev)

l1=ytAy[0][0]/yty[0][0]

time1 = timeit.default_timer() - start_time
#print('elapsed time in compute_reciprocals(): %1.5f' % time1)

####################use method 2###########################################################################
#we call method number 2 to find largest eigenvector and eigenvalue and time the whole process
#in this method, most numpy package is used for computing and we will compare the speed!

start_time = timeit.default_timer()

# call the function findeveasy

ev2=findeveasy(yourmatrix,errtol,totalrun)
#get the eigenvector now compute eigen value
#we compute lamda, using basic multiplication that lamda=transpose(y)*A*y/transpose(y)*y
ev2t=np.transpose(ev2)

l2=np.dot(np.dot(ev2t,yourmatrix),ev2)
l2=l2/np.dot(ev2t,ev2)
time2 = timeit.default_timer() - start_time
#print('elapsed time in compute_reciprocals(): %1.5f' % time1)	


####################finish computing  below are output######################################################

print('eigenvector by method 1',ourev)
print('eigenvalue by method 1',"%.6f" % l1) 
print('computing time for method 1: %.6f' % time1)
#format with 6 decimal output 

print('eigenvector by method 2',ev2)
print('eigenvalue by method 2',"%.6f" % l2[0][0])  #l2 is a 1x1 array, so take its entry
print('computing time for method 2: %.6f' % time2)
#format with 6 decimal output 

evals,evecs=np.linalg.eig(yourmatrix)
ltrue=max(evals.real)

print('eigenvalue by np.linalg.eig ',"%.6f" % ltrue ) 
#format with 6 decimal output 

You can contact me at contact@guopeili.com if you have any suggestions or comments.