What Is an Eigenvalue? – Nick Higham


An eigenvalue of a square matrix $A$ is a scalar $\lambda$ such that $Ax = \lambda x$ for some nonzero vector $x$. The vector $x$ is an eigenvector of $A$ and it has the distinction of being a direction that is not changed on multiplication by $A$.
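
This relation is easy to check in MATLAB with the built-in eig function. A minimal sketch, using an arbitrary example matrix of our own choosing (not one from the post):

    % Compute eigenvalues and eigenvectors and verify A*x = lambda*x.
    A = [2 1; 1 2];
    [X, D] = eig(A);       % columns of X are eigenvectors, D is diagonal
    lambda = D(1,1);
    x = X(:,1);
    norm(A*x - lambda*x)   % residual is at rounding-error level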

An $n\times n$ matrix has $n$ eigenvalues. This can be seen by noting that $Ax = \lambda x$ is equivalent to $(\lambda I - A)x = 0$, which means that $\lambda I - A$ is singular, since $x \ne 0$. Hence $\det(\lambda I - A) = 0$. But

\[ \det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1 \lambda + a_0 \]

is a scalar polynomial of degree $n$ (the characteristic polynomial of $A$) with nonzero leading coefficient and so has $n$ roots, which are the eigenvalues of $A$. Since $\det(\lambda I - A) = \det\bigl((\lambda I - A)^T\bigr) = \det(\lambda I - A^T)$, the eigenvalues of $A^T$ are the same as those of $A$.
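
This equivalence can be illustrated in MATLAB, where poly returns the coefficients of the characteristic polynomial of a numeric matrix. This is a sketch for illustration only: computing eigenvalues via polynomial roots is poor numerical practice, and eig should be used instead.

    % Characteristic polynomial versus eigenvalues, for an arbitrary example.
    A = [2 1; 1 2];
    p = poly(A)            % [1 -4 3], i.e., lambda^2 - 4*lambda + 3
    sort(roots(p))         % roots of the characteristic polynomial: 1, 3
    sort(eig(A))           % the same values from eig
    sort(eig(A.'))         % A and its transpose have the same eigenvalues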

A real matrix may have complex eigenvalues, but they appear in complex conjugate pairs. Indeed $Ax = \lambda x$ implies $\overline{A}\overline{x} = \overline{\lambda}\overline{x}$, so if $A$ is real then $\overline{\lambda}$ is an eigenvalue of $A$ with eigenvector $\overline{x}$.

Here are some $2\times 2$ matrices and their eigenvalues.

\[ \begin{aligned}
A_1 &= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \lambda = 1, 1,\\
A_2 &= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad \lambda = 0, 0,\\
A_3 &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad \lambda = \mathrm{i}, -\mathrm{i}.
\end{aligned} \]

Note that $A_1$ and $A_2$ are upper triangular, that is, $a_{ij} = 0$ for $i > j$. For such a matrix the eigenvalues are the diagonal elements.
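
A quick numerical confirmation, for an arbitrary triangular example:

    % The eigenvalues of a triangular matrix are its diagonal entries.
    T = [4 2 1; 0 5 3; 0 0 6];   % upper triangular
    eig(T)                       % returns 4, 5, 6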

A symmetric matrix ($A^T = A$) or Hermitian matrix ($A^* = A$, where $A^* = \overline{A}^T$) has real eigenvalues. A proof is: $Ax = \lambda x \Rightarrow x^*A^* = \overline{\lambda} x^*$, so premultiplying the first equation by $x^*$ and postmultiplying the second by $x$ gives $x^*Ax = \lambda x^*x$ and $x^*Ax = \overline{\lambda} x^*x$, which means that $(\lambda - \overline{\lambda})x^*x = 0$, or $\lambda = \overline{\lambda}$ since $x^*x \ne 0$. The matrix $A_1$ above is symmetric.
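
A one-line numerical check, sketched here with a random symmetric matrix:

    % Symmetrize a random matrix and confirm its eigenvalues are real.
    B = randn(5);
    S = (B + B')/2;    % S is symmetric by construction
    isreal(eig(S))     % true: eig exploits symmetry and returns real values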

A skew-symmetric matrix ($A^T = -A$) or skew-Hermitian complex matrix ($A^* = -A$) has pure imaginary eigenvalues. A proof is similar to the Hermitian case: $Ax = \lambda x \Rightarrow -x^*A = x^*A^* = \overline{\lambda} x^*$ and so $x^*Ax$ is equal to both $\lambda x^*x$ and $-\overline{\lambda} x^*x$, so $\lambda = -\overline{\lambda}$. The matrix $A_3$ above is skew-symmetric.
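
Similarly, a sketch for the skew-symmetric case (in floating point the real parts come out at rounding-error level rather than exactly zero):

    % Skew-symmetrize a random matrix; eigenvalues should be pure imaginary.
    B = randn(5);
    K = (B - B')/2;          % K' = -K
    max(abs(real(eig(K))))   % of order machine epsilon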

In general, the eigenvalues of a matrix $A$ can lie anywhere in the complex plane, subject to restrictions based on matrix structure such as symmetry or skew-symmetry, but they are restricted to the disc centered at the origin with radius $\|A\|$, because for any matrix norm $\|\cdot\|$ it can be shown that every eigenvalue satisfies $|\lambda| \le \|A\|$.
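
This bound is easy to check numerically for the standard norms available through MATLAB's norm function; a sketch with a random matrix:

    % Spectral radius versus some matrix norms: rho <= each norm.
    A = randn(8);
    rho = max(abs(eig(A)))                % spectral radius
    [norm(A,1) norm(A,2) norm(A,inf)]     % each is at least rho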

Here are some example eigenvalue distributions, computed in MATLAB. (The eigenvalues are computed at high precision using the Advanpix Multiprecision Computing Toolbox in order to ensure that rounding errors do not affect the plots.) The second and third matrices are real, so their eigenvalues are symmetrically distributed about the real axis. (The first matrix is complex.)

[Three plots of eigenvalues in the complex plane, for the gallery test matrices smoke and dramadah and for the inverse of toeppen.]
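
A minimal double-precision sketch of how such a plot can be produced (the post's plots were computed at high precision; here we simply call eig on one of the MATLAB gallery test matrices):

    % Plot the eigenvalues of the smoke test matrix in the complex plane.
    A = gallery('smoke', 64);
    e = eig(A);
    plot(real(e), imag(e), '.')
    axis equal, xlabel('Re \lambda'), ylabel('Im \lambda')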

Although this article is about eigenvalues we need to say a little more about eigenvectors. An $n\times n$ matrix $A$ with distinct eigenvalues has $n$ linearly independent eigenvectors. Indeed it is diagonalizable: $A = XDX^{-1}$ for some nonsingular matrix $X$ with $D = \mathrm{diag}(\lambda_i)$ the matrix of eigenvalues. If we write $X$ in terms of its columns as $X = [x_1, x_2, \dots, x_n]$ then $AX = XD$ is equivalent to $Ax_i = \lambda_i x_i$, $i = 1\colon n$, so the $x_i$ are eigenvectors of $A$. The matrices $A_1$ and $A_3$ above both have two linearly independent eigenvectors.
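
A sketch of the diagonalization in MATLAB, using a gallery matrix with distinct eigenvalues as an arbitrary example:

    % Diagonalize A and reconstruct it from its eigendecomposition.
    A = gallery('frank', 5);     % has distinct eigenvalues
    [X, D] = eig(A);
    norm(A - X*D/X)/norm(A)      % small: A = X*D*inv(X) up to rounding errors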

If there are repeated eigenvalues there can be fewer than $n$ linearly independent eigenvectors. The matrix $A_2$ above has only one eigenvector: the vector $\left[\begin{smallmatrix}1 \\ 0\end{smallmatrix}\right]$ (or any nonzero scalar multiple of it). This matrix is a Jordan block. The matrix $A_1$ shows that a matrix with repeated eigenvalues can have linearly independent eigenvectors.
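
We can see this numerically: for the Jordan block $A_2$, eig returns two eigenvector columns that are (numerically) parallel. A sketch:

    % The 2-by-2 Jordan block has only one independent eigenvector.
    A2 = [0 1; 0 0];
    [X, D] = eig(A2)   % both columns of X are multiples of [1; 0]
    rank(X)            % 1: the computed eigenvectors are linearly dependent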

Here are some questions about eigenvalues.

  • What matrix decompositions reveal eigenvalues? The answer is the Jordan canonical form and the Schur decomposition. The Jordan canonical form shows how many linearly independent eigenvectors are associated with each eigenvalue.
  • Can we obtain better bounds on where eigenvalues lie in the complex plane? Many results are available, of which the most famous is Gershgorin's theorem (illustrated in the sketch after this list).
  • How do we compute eigenvalues? Various methods are available. The QR algorithm is widely used and is applicable to all types of eigenvalue problem.
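
As an illustration of the second question, here is a minimal MATLAB sketch of Gershgorin's theorem: every eigenvalue of $A$ lies in at least one disc centered at a diagonal entry $a_{ii}$ with radius equal to the sum of the moduli of the off-diagonal entries in row $i$. The gallery matrix used is an arbitrary choice.

    % Check Gershgorin's theorem: each eigenvalue lies in some Gershgorin disc.
    A = gallery('lesp', 6);          % nonsymmetric test matrix
    c = diag(A);                     % disc centers
    r = sum(abs(A), 2) - abs(c);     % disc radii: off-diagonal row sums
    e = eig(A);
    in_disc = abs(e.' - c) <= r;     % in_disc(i,j): is eigenvalue j in disc i?
    all(any(in_disc, 1))             % true: every eigenvalue is in some disc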

Finally, we note that the concept of eigenvalue is more general than just for matrices: it extends to nonlinear operators on finite or infinite dimensional spaces.

References

Many books include treatments of the eigenvalues of matrices. We give just three examples.

  • Gene Golub and Charles F. Van Loan, Matrix Computations, Fourth Edition, Johns Hopkins University Press, Baltimore, MD, USA, 2013.
  • Roger A. Horn and Charles R. Johnson, Matrix Analysis, Second Edition, Cambridge University Press, 2013. My review of the second edition.
  • Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.