Flip a fair coin 10 times, and getting 7 heads wouldn't surprise you. Flip it 10,000 times, and getting 7,000 heads would be extraordinary. Both runs are 70% heads, yet only the second is astonishing. This intuition captures the Law of Large Numbers: one of probability theory's foundational theorems, and the mathematical reason casinos profit reliably, insurance companies price accurately, and scientific experiments yield trustworthy results.

Weak and Strong Laws

There are two versions. The Weak Law: for any ε > 0, the probability that the sample mean deviates from the true mean by more than ε approaches zero as n approaches infinity. The Strong Law: the sample mean converges to the true mean with probability 1—meaning in almost every infinite sequence of trials, the average eventually gets and stays arbitrarily close to the true mean. The strong law makes an almost-certain statement about what happens in the limit; the weak law makes a probabilistic statement for each finite n. Both say: average enough, and you get the truth.

Weak Law: P(|X̄_n − μ| > ε) → 0 as n → ∞ for any ε > 0
Strong Law: P(X̄_n → μ as n → ∞) = 1
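The weak law can be watched in action with a short simulation, a sketch not taken from any particular textbook: estimate P(|X̄_n − μ| > ε) empirically for coin flips (μ = 0.5) and observe it shrink as n grows. The function name and the choice ε = 0.05 are illustrative.

```python
import random

def sample_mean_deviation(n, trials=200, p=0.5, eps=0.05, seed=0):
    """Empirically estimate P(|X̄_n − p| > eps) over many repeated runs."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        heads = sum(rng.random() < p for _ in range(n))  # one run of n flips
        if abs(heads / n - p) > eps:
            exceed += 1
    return exceed / trials

# The exceedance probability shrinks as n grows, as the Weak Law predicts:
for n in (10, 100, 1000):
    print(n, sample_mean_deviation(n))
```

With n = 10 the sample mean misses 0.5 by more than 0.05 most of the time; by n = 1000 it almost never does.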

Why Casinos Win

American roulette has 38 slots: 18 red, 18 black, 2 green. A $1 bet on red pays $1 if red comes up and loses $1 otherwise. Expected value per bet: (18/38)(+1) + (20/38)(−1) ≈ −$0.053. You lose 5.3 cents per dollar bet on average. For a single bet, this is nearly imperceptible. But a casino takes millions of bets per day, and by the Law of Large Numbers the players' average return per bet converges reliably to −5.3 cents. The house edge isn't a rigged game; it's a mathematical certainty at scale that no individual winning streak can overcome in the long run.

E[bet] = (18/38)(+1) + (20/38)(−1) ≈ −$0.053 per dollar wagered
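This convergence is easy to verify in a quick simulation, offered here as an illustrative sketch: simulate a large number of $1 bets on red and compare the average return per bet to the theoretical house edge of −2/38 ≈ −$0.0526.

```python
import random

def roulette_average_return(n_bets, seed=42):
    """Average return per $1 bet on red in American roulette (18 red of 38 slots)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_bets):
        # Slots 0..17 represent red (win $1); 18..37 lose $1.
        total += 1 if rng.randrange(38) < 18 else -1
    return total / n_bets

# With a million bets, the average lands very close to -2/38 ≈ -0.0526:
print(roulette_average_return(1_000_000))
```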

Insurance and Risk Pooling

Insurance companies profit from the same principle. Whether any individual policyholder suffers a house fire is highly uncertain: the event either happens or it doesn't. But across 100,000 policyholders, the fraction experiencing fires converges reliably to the underlying probability. Knowing the average loss rate, an insurer can set premiums accurately, transforming individual unpredictability into collective predictability. This is risk pooling: the LLN converts uncertain individual outcomes into reliable aggregate statistics, making the business of insurance mathematically sound.
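A small simulation illustrates risk pooling; the annual fire probability of 0.002 is an assumed figure for illustration, not an actuarial estimate.

```python
import random

def simulate_fire_fraction(policyholders, p_fire=0.002, seed=7):
    """Fraction of policyholders filing a fire claim in one simulated year.

    p_fire = 0.002 is an illustrative assumed annual fire probability.
    """
    rng = random.Random(seed)
    claims = sum(rng.random() < p_fire for _ in range(policyholders))
    return claims / policyholders

# A small pool is volatile; a large pool tracks the true rate closely:
for n in (100, 10_000, 1_000_000):
    print(n, simulate_fire_fraction(n))
```

With 100 policyholders the observed claim rate can easily be 0% or several times the true rate; with a million, it pins down 0.002 tightly enough to price premiums against.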

The Gambler's Fallacy

The LLN is frequently misunderstood as implying a 'correction mechanism'—that after a run of heads, tails become more likely to 'balance out.' This is the Gambler's Fallacy. Coins have no memory: after 100 consecutive heads, the next flip is still exactly 50% heads. The LLN says long-run averages converge, not that short-run deviations get corrected through compensating outcomes. Convergence happens through dilution, not a balancing force: start with 100 straight heads, flip a million more times, and the expected proportion of heads is (100 + 500,000)/1,000,100 ≈ 50.005%. The surplus of 100 heads is never repaid; it is simply swamped by ordinary flips.
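Dilution, as opposed to correction, can be demonstrated directly. In this sketch (function name and parameters are illustrative) we start from an all-heads streak and track both the proportion of heads and the absolute head surplus: the proportion converges to 0.5 while the surplus is never systematically paid back.

```python
import random

def dilute_streak(streak_heads=100, extra_flips=1_000_000, seed=1):
    """Start with an all-heads streak, then flip a fair coin extra_flips times.

    Returns (overall proportion of heads, heads-minus-tails surplus).
    """
    rng = random.Random(seed)
    heads = streak_heads
    total = streak_heads
    for _ in range(extra_flips):
        heads += rng.random() < 0.5  # fair flip: no memory of the streak
        total += 1
    surplus = heads - (total - heads)
    return heads / total, surplus

proportion, surplus = dilute_streak()
print(proportion, surplus)
```

The proportion lands very near 0.5, yet the surplus typically remains on the order of the initial streak (plus ordinary random fluctuation) rather than shrinking toward zero.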

Scientific Applications

The LLN justifies the entire enterprise of empirical science. We can't measure every particle, survey every person, or run every possible experiment. Instead, we sample—and the LLN guarantees that sample averages converge to population parameters with enough data. Clinical trials with sufficiently large samples yield reliable estimates of treatment effects. Particle physics experiments run millions of collisions to measure properties with high precision. The LLN is the mathematical foundation of the idea that nature is knowable through measurement and repetition.
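The sampling logic above can be sketched in a few lines; the population here is synthetic (a hypothetical measurement with known mean), chosen only so the estimate's error can be checked against the truth.

```python
import random

def estimate_population_mean(population, sample_size, seed=3):
    """Estimate a population mean from a simple random sample."""
    rng = random.Random(seed)
    sample = rng.sample(population, sample_size)
    return sum(sample) / sample_size

# Hypothetical population of 100,000 measurements with mean ~10:
rng = random.Random(0)
population = [rng.gauss(10.0, 3.0) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# The estimation error tightens as the sample grows:
for n in (10, 1000, 50_000):
    est = estimate_population_mean(population, n)
    print(n, round(abs(est - true_mean), 4))
```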

Conclusion

The Law of Large Numbers bridges probability theory and observed reality. It explains why randomness doesn't preclude reliability—why casinos profit consistently, why insurance pricing works, why scientific measurement converges. Individual outcomes may be unpredictable, but averages are not. The LLN transforms probability from a theoretical framework about hypothetical frequencies into a practical tool for making reliable predictions, giving justified confidence that what we measure with sufficient repetition reflects what is genuinely true.