Binomial Distribution Calculator

Calculate exact, cumulative, and complementary binomial probabilities. Find P(X=k), P(X≤k), and P(X≥k), along with the mean, variance, skewness, and a normal-approximation check.

P(X = k) — Exact
P(X ≤ k) — Cumulative
P(X ≥ k)
Mean (μ = np)
Variance (σ² = np(1−p))
Extended: more scenarios, charts & detailed breakdown
P(X = k)
Mean μ = np
Std Dev σ = √(np(1−p))
Normal Approx valid?
Professional: full parameters & maximum detail

Probabilities

P(X = k)
P(X ≤ k)

Distribution Properties

Mean μ = np
Variance σ² = np(1−p)
Skewness
Excess Kurtosis

Normal Approximation

Normal Approx: P(X=k) ≈

How to Use This Calculator

  1. Enter n (number of trials), p (success probability), and k (target successes).
  2. Read P(X=k), P(X≤k), and P(X≥k) instantly.
  3. Use the Cumulative tab for P(X≤k) analysis.
  4. Use the P(X≥k) tab for at-least-k problems.
  5. The Professional tab adds skewness, kurtosis, and a normal-approximation comparison.

Formula

PMF: P(X=k) = C(n,k) × p^k × (1−p)^(n−k)

Mean: μ = np   Variance: σ² = np(1−p)

Normal approx valid when: np ≥ 10 AND n(1−p) ≥ 10
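The formulas above can be sketched directly in Python using only the standard library. This is a minimal illustration of the math, not the calculator's own code; the function names are ours.

```python
from math import comb, sqrt

def binom_pmf(n: int, k: int, p: float) -> float:
    """P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(n: int, k: int, p: float) -> float:
    """P(X <= k): sum of the PMF from 0 through k."""
    return sum(binom_pmf(n, i, p) for i in range(k + 1))

def binom_summary(n: int, p: float) -> dict:
    """Mean, variance, std dev, and the np >= 10 and n(1-p) >= 10 check."""
    mean = n * p
    var = n * p * (1 - p)
    return {
        "mean": mean,
        "variance": var,
        "std_dev": sqrt(var),
        "normal_approx_ok": n * p >= 10 and n * (1 - p) >= 10,
    }
```

Note that `binom_summary(10, 0.5)` reports the normal approximation as invalid, since np = 5 falls short of the rule of thumb above.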

Example

n=10, p=0.5, k=6: C(10,6) = 210. P(X=6) = 210 × 0.5⁶ × 0.5⁴ = 210 / 1024 ≈ 0.2051
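The worked example can be checked with Python's built-in `math.comb`:

```python
from math import comb

n, p, k = 10, 0.5, 6
coeff = comb(n, k)                      # C(10, 6) = 210
prob = coeff * p**k * (1 - p)**(n - k)  # 210 / 1024

print(coeff)           # 210
print(round(prob, 4))  # 0.2051
```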

Frequently Asked Questions

  • A binomial distribution models the number of successes in a fixed number of independent trials, each with the same probability of success. The four conditions for a binomial setting (often remembered as BINS): Binary outcomes — each trial results in exactly one of two outcomes (success or failure); Independent — the outcome of each trial does not affect the others; Number of trials — n is fixed in advance; Same probability — p is constant across all trials. Examples include: the number of heads in 10 coin flips (p=0.5), the number of defective items in a batch of 50 (p=0.03), the number of patients responding to a drug in a clinical trial of 100 (p=0.6). The formula is P(X=k) = C(n,k) × p^k × (1−p)^(n−k), where C(n,k) = n! / (k! × (n−k)!) is the binomial coefficient — the number of ways to choose k successes from n trials. The distribution is symmetric when p=0.5, right-skewed when p is small, and left-skewed when p is large.
  • A Bernoulli distribution is the simplest special case of the binomial distribution with exactly one trial (n=1). A Bernoulli trial has only two outcomes — success (1) with probability p and failure (0) with probability 1−p. The Bernoulli distribution has mean p and variance p(1−p). The binomial distribution generalizes this to n independent Bernoulli trials and counts the total number of successes. A binomial(n, p) random variable can be thought of as the sum of n independent Bernoulli(p) random variables. This relationship is important: because the binomial is a sum of independent identically distributed variables, the Central Limit Theorem guarantees that for large n, the binomial distribution approaches normality. The Bernoulli distribution was studied by Jacob Bernoulli, who published the foundational work 'Ars Conjectandi' in 1713, eight years after his death. Bernoulli also proved the Law of Large Numbers — that the sample proportion of successes converges to p as n grows — which underpins much of frequentist statistics.
  • The normal approximation to the binomial is valid when both np ≥ 10 and n(1−p) ≥ 10. These conditions ensure that the binomial distribution is sufficiently symmetric and bell-shaped for the normal approximation to be accurate. When conditions are met, X ~ Binomial(n, p) can be approximated by Z ~ Normal(μ = np, σ² = np(1−p)). A continuity correction improves accuracy: P(X = k) ≈ P(k−0.5 < Z < k+0.5), P(X ≤ k) ≈ P(Z ≤ k+0.5). Without the continuity correction, the approximation of cumulative probabilities such as P(X ≤ k) tends to underestimate the true value. If p is very small (p < 0.05) and n is large, the Poisson approximation with λ = np is more accurate than the normal approximation. Modern computing makes the approximation less necessary since exact binomial calculations are fast. However, the approximation remains pedagogically important because it demonstrates the Central Limit Theorem in action and connects discrete and continuous probability distributions.
  • The cumulative distribution function (CDF) P(X≤k) gives the probability that the number of successes is at most k. It is the sum of all individual probabilities from 0 to k: P(X≤k) = P(X=0) + P(X=1) + ... + P(X=k). Interpretation example: if X ~ Binomial(10, 0.5) represents coin flips, P(X≤4) ≈ 0.377 means there is a 37.7% chance of getting 4 or fewer heads. The complement P(X>k) = 1 − P(X≤k) ≈ 0.623 is the probability of getting 5 or more heads. For quality control: if a production line has a 5% defect rate (p=0.05) and you inspect 20 items (n=20), P(X≤2) gives the probability of catching 2 or fewer defects. CDF values are monotonically non-decreasing from 0 to 1. They are used in hypothesis testing (binomial test) to compute exact p-values: the p-value for observing k or more successes is P(X≥k) = 1 − P(X≤k−1), computed from the binomial distribution specified by H₀.
  • The binomial distribution is foundational to acceptance sampling and statistical process control — the quantitative backbone of manufacturing quality assurance. In acceptance sampling, a random sample of n items is inspected from a lot of N items; if more than c defective items are found, the lot is rejected. The probability of accepting a lot with true defect proportion p is exactly P(X≤c) where X~Binomial(n,p). Operating characteristic (OC) curves plot this probability against p, allowing manufacturers to design sampling plans that balance producer's risk (α = probability of rejecting a good lot) and consumer's risk (β = probability of accepting a bad lot). Control charts for attributes (p-charts, np-charts) use the binomial distribution to set control limits: the center line is np (expected defectives), and control limits are np ± 3√(np(1−p)). Points outside these limits signal a process shift. The American National Standard ANSI/ASQ Z1.4 and MIL-STD-1916 specify binomial-based sampling plans used in defense and manufacturing worldwide.
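The CDF and continuity-correction points above can be illustrated with a short stdlib-only sketch, using `math.erf` for the normal CDF. Note that this example (n=10, p=0.5) deliberately fails the np ≥ 10 rule, so the normal approximation here is rough and shown only for comparison; the function names are ours, not the calculator's.

```python
from math import comb, erf, sqrt

def binom_pmf(n, k, p):
    # Exact PMF: C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(n, k, p):
    # Exact CDF: sum of the PMF from 0 through k
    return sum(binom_pmf(n, i, p) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

n, p, k = 10, 0.5, 4
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = binom_cdf(n, k, p)              # P(X <= 4), the FAQ's ~0.377
approx = normal_cdf(k + 0.5, mu, sigma) # continuity-corrected approximation
p_value = 1 - binom_cdf(n, k - 1, p)    # exact upper tail P(X >= 4)
```

Even with np = 5, the continuity-corrected approximation lands within about 0.001 of the exact 0.377; dropping the +0.5 correction widens the gap noticeably.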

Sources & References (5)
  1. Jacob Bernoulli — Ars Conjectandi (1713), foundational binomial work — Cambridge University Press (translation)
  2. OpenStax Statistics — Chapter 4: Discrete Random Variables — OpenStax
  3. NIST/SEMATECH Engineering Statistics Handbook — Binomial Distribution — NIST
  4. Sheldon Ross — Introduction to Probability Models (11th ed.) — Academic Press
  5. MIT OCW 18.05 — Introduction to Probability and Statistics — MIT OpenCourseWare