statistics

Z-score (Standard Score)

A z-score measures how many standard deviations a value lies above or below the mean: z = (x − μ) / σ. It is used for comparing values across different distributions and for probability lookups.

A z-score (standard score) is a value's distance from the mean expressed in units of standard deviations:

z = \frac{x - \mu}{\sigma}

(use x̄ and s for sample data).

A z-score of +2 means "two standard deviations above the mean"; −1.5 means "1.5 below."
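The formula above can be sketched directly in Python; the scores and sample data here are hypothetical, and `stdev` computes the sample standard deviation s (with the n − 1 denominator):

```python
from statistics import mean, stdev

def z_score(x, mu, sigma):
    """Standard score: distance from the mean in units of standard deviations."""
    return (x - mu) / sigma

# Population parameters known: z = (x − μ) / σ
print(z_score(80, 70, 5))  # → 2.0

# Only a sample available: use x̄ and s instead of μ and σ
data = [62, 68, 70, 71, 74, 75, 80]   # hypothetical sample
x_bar, s = mean(data), stdev(data)
print(z_score(80, x_bar, s))
```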

Z-scores let you:

  • Compare values from different distributions — a kid scoring 80 on Test A (μ = 70, σ = 5) is more impressive (z = 2) than 80 on Test B (μ = 75, σ = 10, z = 0.5).
  • Look up probabilities in a standard normal table — P(Z < 1.96) ≈ 0.975, the basis for the 95% confidence interval.
  • Identify outliers — by convention |z| > 3 flags an unusual observation in roughly normal data.
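All three uses fit in a few lines of Python; `NormalDist` from the standard library replaces the z-table, and the numbers reuse the test-score example above:

```python
from statistics import NormalDist

# 1. Compare values from different distributions
z_a = (80 - 70) / 5    # Test A: z = 2.0
z_b = (80 - 75) / 10   # Test B: z = 0.5  -> Test A score is more impressive

# 2. Probability lookup: P(Z < 1.96) via the standard normal CDF
p = NormalDist().cdf(1.96)   # ≈ 0.975

# 3. Outlier flag using the |z| > 3 convention
def is_outlier(x, mu, sigma, threshold=3.0):
    return abs((x - mu) / sigma) > threshold

print(z_a, z_b, round(p, 4), is_outlier(120, 70, 5))
```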

Standardisation (z-scoring) is also a fundamental machine-learning preprocessing step: scaling inputs to mean 0 and standard deviation 1 helps gradient descent converge and prevents features with larger units (e.g. income in dollars vs age in years) from dominating distance-based models.
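A minimal sketch of this preprocessing step, using only the standard library (the feature values are made up; in practice a library such as scikit-learn's StandardScaler does the same thing column by column):

```python
from statistics import mean, stdev

def standardize(column):
    """Z-score a feature: scale it to mean 0, standard deviation 1."""
    mu, s = mean(column), stdev(column)
    return [(x - mu) / s for x in column]

# Hypothetical features with very different units/scales
income = [30000, 45000, 52000, 61000, 75000]  # dollars
age = [22, 31, 40, 47, 58]                    # years

income_z = standardize(income)
age_z = standardize(age)
# Both columns now share a common scale, so neither dominates
# Euclidean distances or gradient updates.
```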