
Eigenvalues and Eigenvectors: A Beginner-Friendly Introduction

What eigenvalues and eigenvectors mean geometrically, how to compute them via the characteristic polynomial, and why they power PCA, Google PageRank, and quantum mechanics.


Author: AI-Math Editorial Team

Published 2026-05-01

Eigenvalues and eigenvectors look mysterious the first time you see them, but the underlying idea is intuitive: when a matrix transforms a vector, most vectors get rotated and stretched. Eigenvectors are the special directions that only get stretched, never rotated. That stretch factor is the eigenvalue.

The definition

Given an $n \times n$ matrix $A$, a non-zero vector $\mathbf{v}$ is an eigenvector with eigenvalue $\lambda$ when:

$$A \mathbf{v} = \lambda \mathbf{v}$$

Geometrically: $A$ acting on $\mathbf{v}$ produces $\lambda$ times $\mathbf{v}$: same direction, just scaled.
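The definition is easy to check numerically. A minimal NumPy sketch (NumPy is used here purely for illustration; a diagonal matrix is the simplest case, since its eigenvectors are just the coordinate axes):

```python
import numpy as np

# A diagonal matrix scales each axis independently,
# so the coordinate axes are its eigenvectors.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])  # candidate eigenvector (the x-axis)
lam = 2.0                 # candidate eigenvalue

# The definition A v = lam v, verified directly:
print(np.allclose(A @ v, lam * v))  # True
```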

How to find them — characteristic polynomial

Rearranging gives $(A - \lambda I)\mathbf{v} = \mathbf{0}$. For a non-trivial $\mathbf{v}$ to exist, the matrix $A - \lambda I$ must be singular, i.e.:

$$\det(A - \lambda I) = 0$$

This expands into a degree-$n$ polynomial in $\lambda$ called the characteristic polynomial. Its roots are the eigenvalues.
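As a quick sanity check, NumPy can produce the characteristic-polynomial coefficients and their roots directly. This is a sketch of one possible check, not part of the hand derivation; `np.poly` returns the coefficients of $\det(\lambda I - A)$, highest degree first:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Coefficients of the characteristic polynomial, highest degree first:
# here lambda^2 - 7 lambda + 10 -> approximately [1, -7, 10].
coeffs = np.poly(A)
print(coeffs)

# The roots of that polynomial are the eigenvalues.
print(sorted(np.roots(coeffs)))  # approximately [2.0, 5.0]
```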

Worked $2 \times 2$ example

$$A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}$$

  1. $A - \lambda I = \begin{pmatrix} 4 - \lambda & 1 \\ 2 & 3 - \lambda \end{pmatrix}$.
  2. $\det(A - \lambda I) = (4-\lambda)(3-\lambda) - 2 = \lambda^2 - 7\lambda + 10$.
  3. Solve $\lambda^2 - 7\lambda + 10 = 0$: $\lambda = 5$ or $\lambda = 2$.

For $\lambda = 5$: solve $(A - 5I)\mathbf{v} = \mathbf{0}$, i.e. $\begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix}\mathbf{v} = \mathbf{0}$, giving eigenvector $\mathbf{v}_1 = (1, 1)$.

For $\lambda = 2$: the same process gives $\mathbf{v}_2 = (1, -2)$.
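The hand computation above can be confirmed with `np.linalg.eig` (a sketch using NumPy; note that the returned eigenvectors are normalised to unit length, so they are scalar multiples of $(1, 1)$ and $(1, -2)$ rather than those exact vectors):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and, as columns, unit-length eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # 5 and 2, in some order

# Each (eigenvalue, eigenvector) pair satisfies A v = lam v.
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))  # True for each pair
```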

Why eigenvectors matter

  • Principal Component Analysis (PCA): eigenvectors of the covariance matrix are the principal directions of variation in your data.
  • Google PageRank: the rank vector is the dominant eigenvector of the web's link matrix.
  • Quantum mechanics: observables are operators; their eigenvalues are the only outcomes you can measure.
  • Differential equations: eigenvalues of the system matrix tell you whether solutions decay or blow up.
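The PCA bullet is easy to see on synthetic data. A minimal sketch (the data, seed, and shapes here are made up for illustration): generate points stretched along the $(1, 1)$ direction, then recover that direction as the top eigenvector of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D data: large variance along one axis, small along the other,
# then rotated 45 degrees so the dominant direction is (1, 1)/sqrt(2).
base = rng.normal(size=(500, 2)) @ np.diag([3.0, 0.5])
R = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)  # 45-degree rotation
X = base @ R

# Covariance matrix is symmetric, so eigh is the right solver.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# The eigenvector with the largest eigenvalue is the first principal component.
pc1 = eigvecs[:, np.argmax(eigvals)]
print(pc1)  # roughly +/-(0.71, 0.71)
```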

Geometric meaning recap

For a 2D matrix, eigenvectors are special axes. If you align the coordinate system with them, $A$ becomes diagonal: pure scaling along each axis with no rotation. That is diagonalisation, and it is the foundation of dozens of algorithms.

Common mistakes

  • Forgetting eigenvectors are defined up to scaling — any non-zero multiple of an eigenvector is also an eigenvector.
  • Skipping the characteristic equation and trying to guess.
  • Treating $\det(A - \lambda I)$ as $\det(A) - \lambda$; it isn't.

Try with the AI Matrix Solver

Drop your matrix into the Matrix Calculator and request eigenvalues — every step shown.



A small team of engineers, mathematicians, and educators behind AI-Math, focused on making step-by-step math help accessible to every student.