Big O Notation Calculator

Estimate how long an algorithm takes to run from its time complexity class. Compare O(1), O(log n), O(n), O(n log n), O(n²), O(n³), and O(2ⁿ) with concrete operation counts and estimated runtimes at your CPU speed.

How to Use This Calculator

  1. Enter the input size n (e.g., the number of items in your dataset).
  2. Select your algorithm's time complexity class.
  3. Adjust the CPU speed in billions of operations per second (modern CPUs run at roughly 1–4 GHz).
  4. Read the operation count and estimated runtime instantly.
  5. Use the Compare Algorithms tab to see every complexity class side by side for the same n.
  6. The Professional tab adds scalability analysis and real algorithm examples for each class.

Formula

O(1)=1 | O(log n)=log₂n | O(n)=n | O(n log n)=n·log₂n | O(n²)=n² | O(n³)=n³ | O(2ⁿ)=2ⁿ
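
This formula table maps directly to code. Below is a minimal Python sketch of the same mapping (the names OPS and estimated_ms are illustrative, not the calculator's internals): each class gets an operation-count function, and runtime is operations divided by CPU speed.

import math

# Operation count for each complexity class, matching the formula table above.
OPS = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n²)":      lambda n: n ** 2,
    "O(n³)":      lambda n: n ** 3,
    "O(2ⁿ)":      lambda n: 2 ** n,
}

def estimated_ms(complexity, n, ops_per_sec=1e9):
    """Estimated runtime in milliseconds at the given ops/sec rate."""
    return OPS[complexity](n) / ops_per_sec * 1000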

Example

n=10,000, O(n²): 100M ops at 10⁹/sec = 100 ms. Same n with O(n log n): ~133,000 ops = 0.13 ms — 750× faster.
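
The sketch above reproduces these numbers:

# n = 10,000 at the default 10⁹ ops/sec
n = 10_000
print(estimated_ms("O(n²)", n))       # 100.0 ms
print(estimated_ms("O(n log n)", n))  # ≈0.133 ms
print(estimated_ms("O(n²)", n) / estimated_ms("O(n log n)", n))  # ≈752, the ~750× speedup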

Frequently Asked Questions

  • What is Big O notation? Big O notation is a mathematical framework for describing how an algorithm's resource usage (time or memory) scales with input size n. It captures the worst-case growth rate, ignoring constants and lower-order terms: an algorithm performing 3n²+5n+7 operations is O(n²), because as n grows large the n² term dominates everything else. Big O lets programmers compare algorithms independently of hardware, language, or implementation details. A sorting algorithm that is O(n log n) will always outperform an O(n²) algorithm for large enough inputs, regardless of constant-factor optimizations. This matters enormously in production systems: an algorithm that works fine for 1,000 items may become unusable for 1,000,000 items if its complexity is poor. Understanding Big O is foundational to writing scalable software; it is the language used in technical interviews, academic papers, and engineering design discussions worldwide.
  • How do you determine an algorithm's Big O? Analyze how many operations it performs relative to input size n. For simple loops, count iterations: a single loop from 1 to n is O(n); two nested loops are O(n²). For recursive algorithms, write and solve a recurrence relation: merge sort's T(n) = 2T(n/2)+n solves to O(n log n) via the master theorem. Key rules: drop constants (O(3n) = O(n)); drop lower-order terms (O(n²+n) = O(n²)); add sequential loops (O(n)+O(n) = O(n)); multiply nested loops (two nested O(n) loops give O(n²)). Common patterns: halving the problem each step gives O(log n), as in binary search and merge sort's recursion; processing all pairs gives O(n²). The master theorem for T(n) = aT(n/b)+f(n) classifies divide-and-conquer algorithms. Space complexity uses the same notation for memory usage. The first sketch after this list illustrates the loop-counting rules.
  • What is the practical difference between O(n) and O(n²)? It grows catastrophically with input size. At n=1,000: O(n) does 1,000 operations while O(n²) does 1,000,000, a 1,000× difference. At n=1,000,000: O(n) does 10⁶ operations while O(n²) does 10¹², a trillion. On modern hardware at 10⁹ ops/second, O(n) processes a million items in 1 millisecond; O(n²) takes 1,000 seconds (about 17 minutes). This is why bubble sort (O(n²)) is never used on large datasets while merge sort (O(n log n)) handles millions of items efficiently. The crossover point depends on constants: an algorithm with worse asymptotic complexity but small constant factors can beat an asymptotically better one for small inputs. This is why insertion sort (O(n²)) often beats merge sort for n < 20 thanks to cache locality, and why profiling real code always matters alongside theoretical analysis.
  • How good is each complexity class in practice? O(1) is perfect: hash table lookups and array access take constant time regardless of data size. O(log n) is excellent: binary search finds an item among 1 billion entries in just 30 comparisons. O(n) is good, and often optimal for problems that require reading all input at least once. O(n log n) is the gold standard for sorting, mathematically proven optimal for comparison-based sorts. O(n²) is acceptable only for small datasets (typically n < 10,000) or when simplicity matters more than speed. O(n³) is generally impractical beyond n = 1,000. O(2ⁿ) is infeasible for n > 40–50. In practice, cache efficiency, memory access patterns, and hardware characteristics can make a theoretically 'worse' algorithm faster in reality. Always profile with realistic data: theoretical complexity is a guide, not a guarantee.
  • Why is O(log n) so efficient? Logarithms grow incredibly slowly: log₂(1,000,000) ≈ 20, meaning binary search through a million sorted items requires at most 20 comparisons. Double the data to 2 million and it takes 21. Binary search halves the search space each step: after k steps only n/2ᵏ elements remain, and setting n/2ᵏ = 1 gives k = log₂n. This is why sorted data is so valuable. Database B-tree indexes apply the same principle: a table with billions of rows can be searched in milliseconds because the tree height is O(log n). Balanced binary search trees, heaps, and divide-and-conquer algorithms all achieve O(log n) factors by repeatedly halving the problem. The master theorem explains why: T(n) = T(n/2)+O(1) solves to O(log n). Every time you see 'divide in half,' think O(log n); the binary search sketch after this list makes the halving concrete. This is the mathematical reason divide and conquer is such a powerful strategy in algorithm design.
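
The counting rules from the second answer translate directly into code. Here is a toy Python sketch with instrumented loops (the count variables exist only for demonstration):

def sequential_loops(n):
    count = 0
    for _ in range(n):   # O(n)
        count += 1
    for _ in range(n):   # plus another O(n): still O(n) overall
        count += 1
    return count         # 2n operations; drop the constant: O(n)

def nested_loops(n):
    count = 0
    for _ in range(n):       # O(n) iterations...
        for _ in range(n):   # ...each doing O(n) work: multiply to get O(n²)
            count += 1
    return count             # n² operations

print(sequential_loops(1000))  # 2000 -> O(n)
print(nested_loops(1000))      # 1000000 -> O(n²)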

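The halving behind O(log n) can be seen in a minimal binary search sketch, with a probe counter added purely for illustration:

def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    probes = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if sorted_items[mid] == target:
            return mid, probes
        elif sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1, probes

data = list(range(1_000_000))
print(binary_search(data, 999_999))  # (999999, 20): found in 20 probes, since log₂(10⁶) ≈ 20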