When you study recursive algorithms like Merge Sort, Binary Search, or Quick Sort, you’ll often end up writing a recurrence relation like:
T(n) = 2T(n/2) + n
At this point, most beginners get stuck: “How do I find the time complexity from this?”
You could expand it manually… or use recursion trees…
But there’s a faster way.
That’s where the Master Theorem comes in — a direct formula to solve most divide-and-conquer recurrences in seconds.
The Master Theorem applies to recurrences of the form:
T(n) = aT(n/b) + f(n)
Where:
- a: Number of subproblems the problem is divided into
- n/b: Size of each subproblem
- f(n): Cost of dividing, merging, or combining the results

It helps us find the asymptotic time complexity (Big O) directly, without manually expanding the recurrence.
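To make the classification mechanical, here is a minimal Python sketch (the name `master_theorem` and the restriction to polynomial f(n) = n^d are my own simplifications, but they cover the common textbook recurrences):

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Θ(n^d) by the Master Theorem.

    Assumes f(n) is a plain polynomial n^d; for such f(n) the
    Case 3 regularity condition holds automatically.
    """
    crit = math.log(a, b)                 # critical exponent log_b(a)
    if abs(d - crit) < 1e-9:              # f(n) matches the benchmark
        return f"Case 2: T(n) = Θ(n^{d} · log n)"
    if d < crit:                          # benchmark dominates
        return f"Case 1: T(n) = Θ(n^{crit:.2f})"
    return f"Case 3: T(n) = Θ(n^{d})"     # f(n) dominates

print(master_theorem(2, 2, 1))   # -> Case 2: T(n) = Θ(n^1 · log n)
print(master_theorem(8, 2, 2))   # -> Case 1: T(n) = Θ(n^3.00)
```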
Imagine you have a big problem of size n.
Each recursive call breaks it into a subproblems of size n/b. You also spend some extra effort f(n) outside recursion (like merging or partitioning).
The total time is the sum of all work done across recursion levels.
So the total cost = Work in subproblems + Work outside recursion.
The Master Theorem compares which part dominates — the recursive work or the combining work.
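To see that comparison concretely, this small sketch (my own illustration, using the recurrence T(n) = 2T(n/2) + n from the intro) prints the combining work at each level of the recursion tree; level i has aⁱ subproblems of size n/bⁱ:

```python
# Combine work per recursion-tree level for T(n) = 2T(n/2) + n.
a, b, n = 2, 2, 1024
f = lambda size: size                 # f(n) = n

level, size = 0, n
while size >= 1:
    work = (a ** level) * f(size)     # a^i subproblems, each costing f(n/b^i)
    print(f"level {level:2d}: work = {work}")
    level += 1
    size //= b
# Every level contributes n = 1024, over log2(n) + 1 levels -> Θ(n log n).
```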
We analyze recurrences of the form:

T(n) = aT(n/b) + f(n), with constants a ≥ 1 and b > 1.

Define the benchmark work from the recursion tree (ignoring f(n)):

n^(log_b a)

Now compare f(n) with n^(log_b a). Three cases arise:
Case 1

Condition: f(n) = O(n^(log_b a − ε)) for some constant ε > 0.

This means the non-recursive work f(n) grows strictly slower than the work contributed by the subproblems.

Result: T(n) = Θ(n^(log_b a))
Intuition: Most of the time is spent inside recursion; the combine work is negligible by comparison.
Example: T(n) = 8T(n/2) + n²

Here: a = 8, b = 2 → log₂8 = 3, so the benchmark is n³.
f(n) = n² = O(n^(3−ε)) → Case 1: T(n) = Θ(n³)
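As a numerical sanity check (my own sketch), evaluating this recurrence on powers of two shows T(n)/n³ settling to a constant, exactly what Θ(n³) predicts:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 8T(n/2) + n^2, with T(1) = 1, on powers of two
    if n <= 1:
        return 1
    return 8 * T(n // 2) + n * n

for k in (4, 8, 12, 16):
    n = 2 ** k
    print(n, T(n) / n**3)   # ratio approaches a constant -> T(n) = Θ(n^3)
```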
Case 2

Condition: f(n) = Θ(n^(log_b a)).

That means both recursive and combining work grow equally fast.

Result: T(n) = Θ(n^(log_b a) · log n)
Intuition: Each level of recursion does the same amount of work → multiply by log n levels.
Example: T(n) = 2T(n/2) + n

Here: a = 2, b = 2 → log₂2 = 1, so the benchmark is n¹.
f(n) = n = Θ(n¹) → Case 2: T(n) = Θ(n log n)
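The same kind of check (sketch) confirms the Θ(n log n) bound; with T(1) = 1, the ratio T(n)/(n log₂ n) tends to 1:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2T(n/2) + n, with T(1) = 1, on powers of two
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for k in (5, 10, 20):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))   # -> 1.0 as n grows: Θ(n log n)
```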
Case 3

Condition: f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0,

and a × f(n/b) ≤ c × f(n) for some constant c < 1 and sufficiently large n (called the regularity condition).

Then: T(n) = Θ(f(n))
Intuition: The combine step f(n) is doing so much work that the recursive calls become insignificant.
Example: T(n) = 2T(n/2) + n²

Here: a = 2, b = 2 → log₂2 = 1, and f(n) = n² = Ω(n^(1+ε)).
Regularity: 2 × (n/2)² = n²/2 ≤ (1/2) × n², so c = 1/2 < 1 works.
→ Case 3: T(n) = Θ(n²)
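The regularity condition is easy to verify numerically; in this sketch the ratio a × f(n/b) / f(n) sits at a constant 0.5, which is the c < 1 the condition requires:

```python
# Regularity check for T(n) = 2T(n/2) + n^2: need a*f(n/b) <= c*f(n), c < 1.
a, b = 2, 2
f = lambda n: n * n

for n in (10, 100, 1000):
    print(n, a * f(n // b) / f(n))   # 2*(n/2)^2 / n^2 = 0.5 < 1, so c = 0.5 works
```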
Binary Search: T(n) = T(n/2) + 1

a = 1, b = 2, f(n) = 1

Compute: n^(log₂1) = n⁰ = 1

→ f(n) = Θ(n⁰) → Case 2: T(n) = Θ(log n)
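For reference, here is a minimal iterative binary search; each pass of the loop halves the search range, which is exactly the T(n) = T(n/2) + 1 recurrence above:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:              # each pass halves the range: T(n) = T(n/2) + 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1         # discard the left half
        else:
            hi = mid - 1         # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # -> 3
```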
Merge Sort: T(n) = 2T(n/2) + n

a = 2, b = 2, f(n) = n

Compute: n^(log₂2) = n¹

→ f(n) = Θ(n¹) → Case 2: T(n) = Θ(n log n)
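And a minimal merge sort sketch, showing where the recurrence comes from: a = 2 recursive calls on halves, plus a Θ(n) merge:

```python
def merge_sort(arr):
    """Sort arr with two half-size recursive calls plus a linear merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # T(n/2)
    right = merge_sort(arr[mid:])     # T(n/2)

    merged, i, j = [], 0, 0           # f(n) = Θ(n): merge the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # -> [1, 2, 5, 7, 9]
```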
A recurrence like T(n) = T(n−1) + n is not in Master Theorem form, because the subproblem size is n−1, not n/b.
❌ Master Theorem not applicable
We’d need expansion or recursion tree methods.
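For this particular recurrence, direct expansion gives T(n) = n + (n−1) + … + 1 = n(n+1)/2 = Θ(n²); a quick check of that closed form (my own sketch):

```python
def T(n):
    # T(n) = T(n-1) + n, with T(1) = 1: the subtract-and-conquer shape
    return 1 if n <= 1 else T(n - 1) + n

for n in (10, 100, 500):
    print(n, T(n), n * (n + 1) // 2)   # recursive value matches n(n+1)/2 -> Θ(n^2)
```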
Strassen’s Matrix Multiplication: T(n) = 7T(n/2) + n²

Here: a = 7, b = 2, f(n) = n² → the benchmark is n^(log₂7) ≈ n^2.81, which dominates f(n) = n².
→ Case 1: T(n) = Θ(n^(log₂7)) ≈ Θ(n^2.81)
This is how Strassen’s algorithm beats the classic O(n³) approach.
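A one-liner confirms the exponent (sketch):

```python
import math

# Strassen: T(n) = 7T(n/2) + Θ(n^2); the critical exponent is log2(7)
print(math.log2(7))   # ≈ 2.807 < 3, so T(n) = Θ(n^2.81) beats Θ(n^3)
```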
The Master Theorem is powerful but not universal.

You cannot use it when:

- a is not a constant, or a < 1
- The subproblem size is not n/b (e.g., subtractive recurrences like T(n−1))
- The subproblems have unequal sizes (e.g., T(n/3) + T(2n/3))
- f(n) is not asymptotically positive, or Case 3’s regularity condition fails

In such cases, use the recursion tree or substitution method.