Big O Notation: An Interactive Guide

Building intuition to recognize time complexity from code patterns.

The Visualizer

The visualizer consists of three components:

  1. Code Editor: JavaScript that executes across a range of input sizes.
  2. Chart: Plots operation counts as n increases.
  3. Control Panel: Sets the range of input sizes via a slider.

[Figure: Big O Visualizer Components]

Use op() markers to count operations.

As n increases, the visualizer tracks how many times each marker is called.

[Interactive visualizer: n = 10]
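
For example, a snippet like this (a sketch, assuming the editor supplies the current input size as n, as described above):

    // One op() per element: the visualizer runs this for every n
    // in the selected range and plots the resulting counts.
    for (let i = 0; i < n; i++) {
      op(); // one counted unit of work
    }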

Additional features:

  • Pass values to specify growth rates, e.g., op(n) for linear or op(n*Math.log2(n)) for linearithmic.
  • Use plot() to overlay theoretical curves, e.g., plot(n*n) for quadratic comparison.
  • Add op.scope() before inner loops to isolate their contribution.
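
Putting these markers together, a sketch of how they might combine in a single snippet:

    plot(n * n); // overlay the theoretical quadratic curve
    for (let i = 0; i < n; i++) {
      op.scope(); // isolate the inner loop's contribution
      for (let j = 0; j < n; j++) {
        op(); // counted work: n * n calls in total
      }
    }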

Common Complexities

O(1) - Constant

Operations that execute in fixed time regardless of input size. Example: Array element access.
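
A minimal sketch for the editor (the items array is just illustrative):

    // Indexing touches one slot regardless of array length,
    // so the operation count stays flat as n grows.
    const items = Array.from({ length: n }, (_, i) => i);
    op();                   // one counted operation
    const first = items[0]; // constant-time element access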

[Interactive visualizer: n = 10]

O(log n) - Logarithmic

Occurs when the problem size is halved at each step. Logarithmic growth is remarkably slow—log₂(1,000,000) is approximately 20. Example: Binary search.
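
A sketch of an instrumented binary search (binarySearch is a hypothetical helper; op() fires once per halving step):

    // The search window halves on every iteration, so the loop
    // body runs about log2(n) times.
    function binarySearch(sorted, target) {
      let lo = 0, hi = sorted.length - 1;
      while (lo <= hi) {
        op(); // one halving step
        const mid = (lo + hi) >> 1;
        if (sorted[mid] === target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
      }
      return -1; // not found
    }

    binarySearch(Array.from({ length: n }, (_, i) => i), n - 1);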

[Interactive visualizer: n = 40]

O(n) - Linear

One complete pass through the input. Often represents the lower bound for problems requiring examination of all data. Example: Finding the maximum value in an array.
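
A sketch of the linear scan (maxValue is a hypothetical helper):

    // Every element is visited exactly once: n counted operations.
    function maxValue(values) {
      let best = -Infinity;
      for (const v of values) {
        op(); // one visit per element
        if (v > best) best = v;
      }
      return best;
    }

    maxValue(Array.from({ length: n }, () => Math.random()));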

[Interactive visualizer: n = 10]

O(n log n) - Linearithmic

The complexity of efficient divide-and-conquer algorithms. Marginally worse than linear, but substantially better than quadratic. Example: Merge sort, quicksort (average case).
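
One way to watch this in the editor is to count the comparator calls made by the built-in sort (modern engines typically sort in O(n log n); the exact count is engine-dependent):

    const data = Array.from({ length: n }, () => Math.random());
    plot(n * Math.log2(n)); // theoretical curve for comparison
    data.sort((x, y) => {
      op(); // one comparison
      return x - y;
    });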

[Interactive visualizer: n = 40]

O(n²) - Quadratic

Results from nested iteration over the input. Manageable for small datasets, but performance degrades rapidly beyond a few thousand items. Example: Bubble sort, naive matrix multiplication.
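
A sketch of an instrumented bubble sort (bubbleSort is a hypothetical helper):

    // The nested passes compare roughly n * n / 2 pairs.
    function bubbleSort(a) {
      for (let i = 0; i < a.length; i++) {
        for (let j = 0; j < a.length - i - 1; j++) {
          op(); // one comparison
          if (a[j] > a[j + 1]) [a[j], a[j + 1]] = [a[j + 1], a[j]];
        }
      }
      return a;
    }

    bubbleSort(Array.from({ length: n }, () => Math.random()));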

[Interactive visualizer: n = 10]

O(2ⁿ) - Exponential

Each increment of input size doubles the total work. Visualize it as a binary tree that expands level by level.

The code mirrors this structure: each recursive call spawns two more calls. Example: Naive recursive Fibonacci, subset enumeration. Note: the same structure can be reproduced iteratively using a stack (DFS) or a queue (BFS).
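
A sketch of the naive recursive Fibonacci:

    // Each call spawns two more until the base case, so the call
    // tree (and the op() count) roughly doubles as n grows by one.
    function fib(k) {
      op(); // one node in the binary recursion tree
      if (k <= 1) return k;
      return fib(k - 1) + fib(k - 2);
    }

    fib(n);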

[Interactive visualizer: n = 10]

O(n!) - Factorial

Each increment of input size multiplies the total work by the new n. The recursion tree branches from 1 up to n at each level, leading to extremely rapid growth. Even for n ≈ 60, the total number of operations exceeds the number of atoms in the observable universe.

The code reflects this branching pattern—each call spawns a number of recursive calls equal to the current n. Example: generating all permutations naively.
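
A sketch of naive permutation generation (permutations is a hypothetical helper):

    // Each call branches once per remaining element: n choices,
    // then n - 1, then n - 2, ... for roughly n! paths in total.
    function permutations(rest, chosen = []) {
      op(); // one node in the factorial recursion tree
      if (rest.length === 0) return [chosen];
      const out = [];
      for (let i = 0; i < rest.length; i++) {
        const next = rest.slice(0, i).concat(rest.slice(i + 1));
        out.push(...permutations(next, [...chosen, rest[i]]));
      }
      return out;
    }

    permutations(Array.from({ length: n }, (_, i) => i));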

[Interactive visualizer: n = 6]

Three Core Principles

1. Constants Drop Out

C · O(f) = O(f). Running the same loop twice still yields O(n). Big O cares about the shape of growth, not the exact numbers. Note: in practice, constants can still matter when algorithms share the same complexity (QuickSort vs MergeSort).
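
For example, two back-to-back passes:

    // Two sequential passes make 2n operations, but the curve's
    // shape is still a straight line: O(2n) = O(n).
    for (let i = 0; i < n; i++) op();
    for (let i = 0; i < n; i++) op();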

[Interactive visualizer: n = 10]

2. Largest Term Dominates

O(f) + O(g) = O(max(f, g)). When combining operations of different complexities, the fastest-growing term determines the overall behavior. The slower-growing terms become negligible as n increases.
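
For example, a quadratic loop followed by a linear one:

    // O(n²) + O(n): as n grows, the quadratic loop dwarfs the
    // linear one, so the sum behaves like O(n²).
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) op(); // n² operations
    }
    for (let i = 0; i < n; i++) op();   // n operations, negligible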

[Interactive visualizer: n = 10]

3. Nested Operations Multiply

O(f) · O(g) = O(f · g). When one operation is nested inside another, their complexities multiply. The inner work executes once for each iteration of the outer operation.
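
For example, a logarithmic loop nested inside a linear one:

    // O(n) outer times O(log n) inner multiplies to O(n log n).
    for (let i = 0; i < n; i++) {      // n iterations
      for (let j = 1; j < n; j *= 2) { // ~log2(n) iterations each
        op();
      }
    }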

[Interactive visualizer: n = 10]

The Master Theorem

For recursive divide-and-conquer algorithms: T(n) = aT(n/b) + O(nᵈ)

T(n) is the runtime for input size n, where:

  • a - number of subproblems
  • b - division factor
  • d - exponent of work done outside recursion

The theorem determines which grows faster: the recursive splitting (a) or the work performed at each level (nᵈ).

  • If a = bᵈ: O(nᵈ log n) (Balanced)
  • If a < bᵈ: O(nᵈ) (External work dominates)
  • If a > bᵈ: O(n^(log_b a)) (Recursion dominates)
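
To make the case analysis concrete, a small hypothetical helper that compares a against bᵈ:

    // Classify T(n) = a*T(n/b) + O(n^d) using the three cases above.
    function masterTheorem(a, b, d) {
      const critical = Math.pow(b, d);              // b^d
      if (a === critical) return `O(n^${d} log n)`; // balanced
      if (a < critical) return `O(n^${d})`;         // external work dominates
      const exp = (Math.log(a) / Math.log(b)).toFixed(2);
      return `O(n^${exp})`;                         // recursion dominates
    }

    masterTheorem(1, 2, 0); // binary search -> "O(n^0 log n)" = O(log n)
    masterTheorem(2, 2, 1); // merge sort    -> "O(n^1 log n)" = O(n log n)
    masterTheorem(7, 2, 2); // Strassen      -> "O(n^2.81)"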

Binary Search (Balanced)

a=1, b=2, d=0 → Since 1 = 2⁰, recursion and external work are balanced, resulting in O(n⁰ log n) = O(log n)

[Interactive visualizer: n = 50]

Merge Sort (Balanced)

a=2, b=2, d=1 → Since 2 = 2¹, recursion and external work are balanced, resulting in O(n log n)

[Interactive visualizer: n = 50]

Strassen's Matrix Multiplication (Recursion Dominates)

a=7, b=2, d=2 → Since 7 > 2² = 4, recursion dominates over external work, achieving O(n^(log₂ 7)) ≈ O(n^2.81) instead of the naive O(n³)

[Interactive visualizer: n = 50]

Practical Example: Merge Sort

Different markers track distinct aspects of the algorithm's behavior:

  • op() - Counts total recursive calls across all levels (O(n))
  • op1() - Tracks merge operations within each recursion level (O(log n), scoped)
  • op2() - Measures cumulative merge work across the entire algorithm (O(n log n))
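
A sketch of the instrumented merge sort (exact marker placement is an assumption based on the descriptions above):

    function mergeSort(a) {
      op(); // one recursive call (O(n) calls in total)
      if (a.length <= 1) return a;
      const mid = a.length >> 1;
      return merge(mergeSort(a.slice(0, mid)), mergeSort(a.slice(mid)));
    }

    function merge(left, right) {
      const out = [];
      let i = 0, j = 0;
      while (i < left.length || j < right.length) {
        op1(); // merge work, scoped to the current recursion level
        op2(); // cumulative merge work across the run (O(n log n))
        if (j >= right.length || (i < left.length && left[i] <= right[j])) {
          out.push(left[i++]);
        } else {
          out.push(right[j++]);
        }
      }
      return out;
    }

    mergeSort(Array.from({ length: n }, () => Math.random()));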

[Interactive visualizer: n = 10]

Comparative View

All major complexity classes on a single chart. Note how exponential and factorial growth rapidly overwhelms the others.

[Interactive visualizer: n = 6]

Conclusion

The visualizers let you experiment: tweak loops, adjust recursion, or drop in your own code with op() markers and watch how the curves change. Recognizing common complexity patterns helps estimate performance without formal analysis.

This becomes useful when working with large datasets, preparing for technical interviews on LeetCode, and avoiding exponential or factorial algorithms that won't scale.

For more depth, Tim Roughgarden's Algorithms Illuminated is a good resource.