Poisson Distribution Calculator

Calculate Poisson probabilities P(X=k), P(X≤k), and P(X≥k) for any rate λ. Includes binomial-to-Poisson approximation, interarrival times, and M/M/1 queuing theory.

Basic (core probabilities):
  • P(X = k), P(X ≤ k), P(X ≥ k)
  • Mean = Variance = λ

Extended (more scenarios, charts & detailed breakdown):
  • P(X = k), mode of the distribution, Std Dev (σ = √λ), context

Professional (full parameters & maximum detail):
  • Probabilities: P(X = k) at λ, and P(X = k) at λt (time-scaled)
  • Distribution moments: Mean = Variance = λ; Skewness = 1/√λ; Excess kurtosis = 1/λ
  • Applications: mean interarrival time (exponential distribution), M/M/1 queuing

How to Use This Calculator

  1. Enter λ (mean events per interval) and k (target event count).
  2. Read P(X=k), P(X≤k), and P(X≥k) instantly.
  3. Use the Cumulative tab for P(X≤k) problems.
  4. Use the Binomial→Poisson tab to compare approximation accuracy for large n and small p.
  5. Professional tab adds skewness, kurtosis, interarrival time, and M/M/1 queuing notes.

Formula

PMF: P(X=k) = e^(−λ) × λ^k / k!

Mean = Variance = λ

Skewness: 1/√λ   Excess kurtosis: 1/λ

Example

λ=3, k=2: P(X=2) = e^(−3) × 3² / 2! = 0.0498 × 9 / 2 ≈ 0.2240 (22.4% chance of exactly 2 events)
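The PMF, CDF, and worked example above can be reproduced directly from the formula. A minimal Python sketch (function names are illustrative, not part of the calculator):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) = e^(-lam) * lam^k / k!"""
    return exp(-lam) * lam ** k / factorial(k)

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k): sum the PMF from 0 to k."""
    return sum(poisson_pmf(i, lam) for i in range(k + 1))

def poisson_sf(k: int, lam: float) -> float:
    """P(X >= k) = 1 - P(X <= k-1)."""
    return 1.0 - poisson_cdf(k - 1, lam) if k > 0 else 1.0

# Reproduce the worked example: lambda = 3, k = 2
print(f"{poisson_pmf(2, 3):.4f}")  # 0.2240
```

Summing the PMF term by term is exact for small k; for very large λ or k, production code would use a log-space or library implementation to avoid overflow in `factorial`.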

Frequently Asked Questions

  • What is the Poisson distribution? It models the number of events occurring in a fixed interval of time or space when events happen at a constant average rate and independently of each other. The parameter λ (lambda) is both the mean and the variance of the distribution. The probability of exactly k events is P(X=k) = e^(−λ) × λ^k / k!, where e ≈ 2.71828. Conditions for a Poisson process: events occur singly (not in clusters), the average rate λ is constant, events in non-overlapping intervals are independent, and the probability of an event in a very short interval is proportional to the interval length. Siméon Denis Poisson formalized the distribution in his 1837 book 'Recherches sur la probabilité des jugements.' Classic applications include: number of calls arriving at a call center per hour, number of radioactive decays per second, number of typing errors per page, number of deaths per year in the Prussian cavalry by horse kick (Bortkiewicz 1898 — the dataset that made the distribution famous).
  • When can the Poisson distribution approximate the binomial? The approximation is appropriate when n is large (n ≥ 20, preferably n ≥ 100), p is small (p ≤ 0.05), and np is moderate (typically np ≤ 10). Set λ = np for the approximating Poisson distribution. The approximation improves as n → ∞ and p → 0 with np remaining constant. The intuition: each of the n trials has a very small probability p of success, so most trials yield no success and successes are rare. Mathematically, as n → ∞ and p → 0 with np = λ constant, the binomial PMF converges to the Poisson PMF. Example: in a population of n=500 people, if each independently has a 0.6% chance of a rare genetic mutation, the number of carriers X ~ Binomial(500, 0.006) ≈ Poisson(3). The Poisson gives P(X=2) = e^(−3) × 9/2 ≈ 0.2240, while the exact binomial gives ≈ 0.2243 — an error of only about 0.0002. When n is large but p is not small, use the normal approximation instead.
  • Why does the mean equal the variance? This equality is not a coincidence — it is a fundamental property derived from the mathematical structure of the Poisson distribution. The moment-generating function (MGF) of X ~ Poisson(λ) is M(t) = exp(λ(e^t − 1)). The mean is M'(0) = λ and the variance is M''(0) − [M'(0)]² = λ. So both equal λ exactly, not approximately. Intuitively, in a Poisson process events are purely random, with no structure causing extra variability (overdispersion) or regularity (underdispersion). Real-world data that looks Poisson-distributed can be tested: compute the sample mean and variance — if they are approximately equal, Poisson is plausible. If variance > mean (overdispersion), a negative binomial distribution is often more appropriate. Overdispersion is common in ecology (species counts), epidemiology (disease counts with clustering), and accident data (some people or locations are more prone). The mean=variance equality makes Poisson processes analytically tractable and is the reason for their widespread use in queueing theory and telecommunications.
  • What are real-world examples of Poisson-distributed data? Ladislaus Bortkiewicz's 1898 analysis of deaths by horse kick in the Prussian cavalry is the most celebrated validation of the Poisson distribution. Over 20 years across 14 cavalry corps, he recorded 0, 1, 2, 3, or 4 deaths per corps per year. The observed distribution matched Poisson(λ=0.61) remarkably well. Modern examples: customer arrivals at a bank between 10–11 AM (the Poisson assumption is the foundation of queuing theory); photons hitting a detector per millisecond in optical physics; cars passing a toll booth per minute on a quiet road; mutations occurring per base pair per replication in molecular biology; earthquakes above magnitude 4.0 per decade in a given region; network packets arriving at a router per microsecond; goals scored in a soccer match (famously modeled as Poisson with λ ≈ 1.5 per team per 90 minutes). The key requirement is truly independent events at a constant rate — clustering (overdispersion) or regularity (underdispersion) violates the Poisson assumption.
  • How is the Poisson distribution used in queueing theory? It is the cornerstone of the field. In the classic M/M/1 queue model (the simplest and most important in telecommunications and operations research), the M stands for 'Markovian' — arrivals follow a Poisson process with rate λ, and service times follow an exponential distribution with rate μ. The queue is stable only when λ < μ (the server is fast enough to handle arrivals on average). Performance metrics derived from this Poisson-based model: server utilization ρ = λ/μ; mean number of customers in system L = ρ/(1−ρ); mean waiting time in queue W_q = λ/(μ(μ−λ)); probability of n customers in system P(n) = (1−ρ)ρ^n. Call centers use these formulas to determine how many agents are needed to maintain acceptable wait times. The Erlang C formula (a generalization of M/M/1 to M/M/c with c servers) is standard in call center workforce management. Internet router design, hospital emergency room staffing, and factory production scheduling all rely on Poisson arrival assumptions.
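The binomial-to-Poisson convergence described in the FAQ can be checked numerically. A sketch using the rare-mutation example (n=500, p=0.006, so λ = np = 3):

```python
from math import comb, exp, factorial

n, p, k = 500, 0.006, 2
lam = n * p  # lambda = np = 3

# Exact binomial: C(n,k) * p^k * (1-p)^(n-k)
binom_exact = comb(n, k) * p**k * (1 - p)**(n - k)
# Poisson approximation with lambda = np
poisson_approx = exp(-lam) * lam**k / factorial(k)

print(f"binomial: {binom_exact:.4f}, poisson: {poisson_approx:.4f}")
print(f"absolute error: {abs(binom_exact - poisson_approx):.5f}")
```

The two values agree to about three decimal places, illustrating why the approximation is considered safe in the large-n, small-p regime.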
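The mean=variance property can also be observed empirically. A simulation sketch using Knuth's classic sampling algorithm (pure standard library; λ = 4 is an arbitrary choice for illustration):

```python
import random
from math import exp

def sample_poisson(lam: float, rng: random.Random) -> int:
    """Knuth's algorithm: multiply uniforms until the product falls below e^-lam."""
    threshold, k, prod = exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

rng = random.Random(42)  # fixed seed for reproducibility
lam = 4.0
xs = [sample_poisson(lam, rng) for _ in range(200_000)]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(f"mean = {mean:.2f}, variance = {var:.2f}")  # both close to lambda = 4
```

With 200,000 draws, both sample statistics land within a few hundredths of λ, matching the MGF derivation above.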
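The M/M/1 formulas quoted in the FAQ translate directly into code. A sketch with assumed example rates of λ = 3 arrivals/hour and μ = 4 services/hour (the rates are illustrative, not from the text):

```python
lam, mu = 3.0, 4.0  # assumed example: 3 arrivals/hr, 4 services/hr
assert lam < mu, "M/M/1 queue is stable only when lambda < mu"

rho = lam / mu                 # server utilization rho = lambda/mu
L = rho / (1 - rho)            # mean number of customers in system
Wq = lam / (mu * (mu - lam))   # mean waiting time in queue (hours)

def p_n(n: int) -> float:
    """Probability of exactly n customers in the system."""
    return (1 - rho) * rho ** n

print(f"rho={rho:.2f}, L={L:.1f}, Wq={Wq:.2f} h, P(0)={p_n(0):.2f}")
# rho=0.75, L=3.0, Wq=0.75 h, P(0)=0.25
```

Note how sharply L grows as ρ approaches 1: at ρ = 0.9 the mean queue length is already 9, which is why capacity planners keep utilization well below saturation.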

Sources & References (5)
  1. Poisson 1837 — Recherches sur la probabilité des jugements (original derivation) — Bachelier
  2. OpenStax Statistics — Chapter 4: Discrete Random Variables — OpenStax
  3. NIST/SEMATECH Engineering Statistics Handbook — Poisson Distribution — NIST
  4. Sheldon Ross — Introduction to Probability Models (11th ed.) — Academic Press
  5. MIT OCW 18.05 — Introduction to Probability and Statistics — MIT OpenCourseWare