Matrix multiplication is the operation that drives linear algebra, computer graphics, machine learning, and physics simulations. Yet most students learn it as a mechanical recipe and never see why it is defined the way it is. This guide gives you both the recipe and the intuition.
## Dimension rule first

Before you compute anything, check dimensions. To multiply $AB$:

- $A$ must have shape $m \times n$
- $B$ must have shape $n \times p$
- The result $AB$ has shape $m \times p$

The inner dimensions must match (both are $n$); the outer dimensions become the result shape.
If you ever try to multiply a $2 \times 3$ by a $2 \times 3$, the operation is undefined: no amount of arithmetic will save you.
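As a sketch, the shape rule translates into a few lines of Python (the function name `matmul_shape` is illustrative, not from any library):

```python
def matmul_shape(a_shape, b_shape):
    """Return the shape of AB, or raise if the inner dimensions differ."""
    m, n = a_shape
    n2, p = b_shape
    if n != n2:
        raise ValueError(f"cannot multiply {m}x{n} by {n2}x{p}")
    return (m, p)

matmul_shape((2, 3), (3, 4))  # (2, 4)
```

Running the check before any arithmetic mirrors the advice above: an incompatible pair fails immediately instead of producing garbage.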
## The row-times-column recipe

The $(i, j)$ entry of $AB$ is the dot product of row $i$ of $A$ with column $j$ of $B$:

$$(AB)_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj}$$
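The recipe can be sketched directly in plain Python (matrices as lists of rows, no libraries assumed):

```python
def matmul(A, B):
    """Row-times-column product of matrices stored as lists of rows."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inner dimensions must match"
    # Each entry is the dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

The triple structure (loop over i, loop over j, sum over k) is exactly the formula above written as code.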
## Worked example

Compute $AB$ for (illustrative values)

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}$$

Entry by entry:

$$(AB)_{11} = 1 \cdot 5 + 2 \cdot 7 = 19, \qquad (AB)_{12} = 1 \cdot 6 + 2 \cdot 8 = 22$$
$$(AB)_{21} = 3 \cdot 5 + 4 \cdot 7 = 43, \qquad (AB)_{22} = 3 \cdot 6 + 4 \cdot 8 = 50$$

So $AB = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}$.
## Why is multiplication defined this way?

Matrices represent linear maps between vector spaces. If $A$ maps $\mathbb{R}^n$ to $\mathbb{R}^m$, and $B$ maps $\mathbb{R}^p$ to $\mathbb{R}^n$, then $AB$ should be the composition of those maps: $(AB)x = A(Bx)$ for every vector $x$. The row-times-column rule is precisely what produces composition. The recipe is not arbitrary: it falls out of the requirement that $AB$ encode "first apply $B$, then apply $A$".
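A quick numerical check of the composition claim, as a sketch in Python (small hard-coded matrices chosen for illustration):

```python
def matvec(M, v):
    """Apply matrix M (list of rows) to vector v."""
    return [sum(row[k] * v[k] for k in range(len(v))) for row in M]

def matmul(A, B):
    """Row-times-column product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # swaps the two coordinates
v = [5, 6]

# "First apply B, then apply A" equals one application of AB.
assert matvec(A, matvec(B, v)) == matvec(matmul(A, B), v)
```

The assertion holds for any compatible choice of `A`, `B`, and `v`; the particular values here are arbitrary.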
## Properties (and non-properties!)

| Property | Holds? |
|---|---|
| Associative: $(AB)C = A(BC)$ | Yes |
| Distributive: $A(B + C) = AB + AC$ | Yes |
| Commutative: $AB = BA$ | No, in general |
| Zero product: $AB = 0$ implies $A = 0$ or $B = 0$ | No |
The non-commutativity is the single biggest mental adjustment from scalar arithmetic.
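Non-commutativity is easy to witness with a concrete sketch, assuming two shear matrices chosen for illustration:

```python
A = [[1, 1], [0, 1]]  # shear along x
B = [[1, 0], [1, 1]]  # shear along y

# Inline row-times-column product for 2x2 matrices.
prod = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(2))
                      for j in range(2)] for i in range(2)]

assert prod(A, B) == [[2, 1], [1, 1]]
assert prod(B, A) == [[1, 1], [1, 2]]  # AB != BA: order matters
```

Geometrically, shearing along x and then along y lands somewhere different than doing the shears in the opposite order.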
## Common mistakes

- Adding instead of multiplying the row-column products (you do both: multiply pairwise, then sum).
- Switching the dimension check order: it must be $(m \times n)(n \times p)$, not $(m \times n)(p \times n)$.
- Assuming commutativity: $BA$ may not even be defined when $AB$ is.
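The last point can be seen purely from shapes (a minimal sketch; `inner_matches` is an illustrative helper, not a library function):

```python
def inner_matches(a_shape, b_shape):
    """AB is defined exactly when A's column count equals B's row count."""
    return a_shape[1] == b_shape[0]

a, b = (2, 3), (3, 4)
assert inner_matches(a, b)       # AB is defined (result is 2x4)
assert not inner_matches(b, a)   # BA is not defined at all
```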
## Try it with the AI Matrix Solver

Type any pair of matrices into the Matrix Calculator to see the full row-by-row work.
Related references:
- Determinant Calculator — pairs naturally with products
- Inverse Calculator — uses $AA^{-1} = I$ as the defining relation
- Vector Calculator — dot product underlies every entry