So far, we've analyzed algorithms using Worst-Case Analysis. We look at a single operation and ask, "What is the absolute most work this one operation could ever do?"
This is the Big O we've been calculating, and it's a vital tool.
But sometimes, worst-case analysis can be too pessimistic. It can make a fast algorithm look slow.
If you’ve ever worked with a dynamic array (like ArrayList in Java or vector in C++), you might’ve noticed something strange: most insertions finish instantly, but every once in a while a single insertion takes noticeably longer.
So what’s the real cost of inserting into a dynamic array? Is it O(1)? Or is it O(n)?
The answer lies in a powerful idea called Amortized Analysis.
In many algorithms, some operations are cheap and others are expensive, but the expensive ones don’t happen often.
If we only analyze the worst case per operation, it looks bad. But if we look at the average cost per operation over a sequence, the picture changes.
That’s what Amortized Analysis does:
It gives a more realistic view of an algorithm’s performance by averaging the total cost of a series of operations.
Imagine you're a daily commuter who rides the train to work on a monthly pass.
If someone asked you "How much does a ride cost?", how would you answer? You could say a ride costs almost nothing, since on most days you just tap your pass and walk through, or you could say it costs $130, since that's what you paid on the day you renewed the pass.
Neither answer tells the whole story. The most useful answer is the average cost per ride over the whole month. If you take 50 rides, the cost is $130 / 50 = $2.60 per ride. This is the amortized cost.
Amortized Analysis isn't about probability or average inputs. It's about finding the average cost of an operation over a sequence of operations, where occasional, expensive operations are "paid for" by many cheap ones.
It's a guarantee: over a long sequence, the average performance will be X, even if a single operation is occasionally much worse.
Before diving deeper, let’s clarify the difference:
| Type | What It Measures | Depends On | Example |
|---|---|---|---|
| Worst Case | Maximum time for any single operation | The most difficult input | Sorting a reverse-sorted array with Bubble Sort |
| Average Case | Expected time over all inputs | Probability distribution of inputs | Searching in a random array |
| Amortized Case | Average time over a sequence of operations | Behavior of the algorithm, not the inputs | Resizing a dynamic array |
Amortized analysis doesn’t rely on input randomness. It guarantees that even though some operations are costly, the average cost per operation stays low.
Let’s take the example of inserting elements into a dynamic array (like Java’s ArrayList).
An ArrayList uses a regular, fixed-size array internally. When you create it, it might have a default capacity, say, 4.
add(10)
add(20)
add(30)
add(40)
Now, the internal array is full. What happens when we call add(50)?
The Expensive Operation: the internal array has no free slot left, so the ArrayList allocates a new array with double the capacity (8), copies all 4 existing elements into it, and then writes 50 into the new array.
Cost: 4 copies + 1 write = 5 operations.
This one operation was O(N), where N is the current size of the list. The next few adds, however, will be cheap again: add(60), add(70), and add(80) each slot straight into the spare capacity of the new array in O(1). Only when the capacity-8 array fills up and we add a ninth element will we pay for another expensive O(N) resize.
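To make this concrete, here is a minimal Java sketch of a dynamic array that doubles its capacity when it runs out of room. It is only an illustration of the pattern described above, not the real java.util.ArrayList source (the class name, the starting capacity of 4, and the exact doubling rule are simplifying assumptions).

```java
import java.util.Arrays;

/** A toy dynamic array of ints that doubles its capacity when full. */
public class ToyDynamicArray {
    private int[] data = new int[4]; // assumed starting capacity of 4
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            // The expensive operation: allocate a bigger array and copy
            // every existing element into it -- O(N) work.
            data = Arrays.copyOf(data, data.length * 2);
        }
        // The cheap operation: a single write into a free slot -- O(1).
        data[size++] = value;
    }

    public int get(int index) {
        if (index < 0 || index >= size) {
            throw new IndexOutOfBoundsException("index " + index);
        }
        return data[index];
    }

    public int size() {
        return size;
    }
}
```

With this sketch, add(10) through add(40) are single writes, while add(50) triggers the copy of 4 elements plus one write: exactly the 5 units counted above.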
If we perform n insertions, how much total work is done?
Most insertions cost 1 unit (O(1)), but occasionally we pay extra when the array resizes.
Let’s add them up. Each of the n insertions pays 1 unit to write its own element, for n units in total. The copies happen only at resizes, and because the capacity doubles each time, the total number of elements ever copied is at most about 2n (a geometric series: the last resize copies fewer than n elements, the one before it half as many, and so on).
Total cost ≤ n (for the simple writes) + 2n (for all the copies) = 3n = O(n)
So n insertions cost O(n) in total, meaning the average cost per insertion is O(1).
Amortized cost per operation = O(1)
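Spelling the count out with the capacities from the example (starting at 4 and doubling; C denotes the largest capacity that ever had to be copied, and C < n):

```latex
\[
T(n) \;\le\; \underbrace{n}_{\text{one write per insert}}
      \;+\; \underbrace{\big(4 + 8 + 16 + \dots + C\big)}_{\text{copies at the resizes}}
      \;<\; n + 2C \;<\; 3n
\]
\[
\text{amortized cost per insertion} \;=\; \frac{T(n)}{n} \;<\; 3 \;=\; O(1)
\]
```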
There are three main ways to perform amortized analysis mathematically:
1. Aggregate Method: compute the total cost of all n operations and divide by n.
Example:
For dynamic array insertions: total cost = O(n) for n operations → amortized O(1) per operation.
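In symbols, if the i-th operation has actual cost c_i and the whole sequence costs T(n), the aggregate method simply computes:

```latex
\[ \text{amortized cost} \;=\; \frac{1}{n} \sum_{i=1}^{n} c_i \;=\; \frac{T(n)}{n} \]
```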
2. Accounting Method: assign credits (or tokens) to operations. Cheap operations are deliberately overcharged, and the surplus is banked to pay for the expensive operations later.
Example:
Charge every insertion 3 units: 1 unit pays for the write itself, and 2 units are saved as credit. When a resize happens, the saved credits pay for copying the elements, so the bank never runs dry.
Thus, the average charge per operation is constant → O(1).
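To see the bookkeeping in action, here is a small, hypothetical Java simulation of that argument: every insert deposits 3 credits, every write or copy withdraws 1, and the balance should never go negative. The class name and the one-million-insert count are arbitrary choices for this sketch.

```java
/** Simulates the accounting (credit) argument for dynamic array inserts. */
public class AccountingDemo {
    private static final int CHARGE_PER_INSERT = 3; // assumed charge: 1 for the write + 2 banked

    public static void main(String[] args) {
        int capacity = 4;   // assumed starting capacity
        int size = 0;
        long bank = 0;      // credits collected but not yet spent

        for (int i = 0; i < 1_000_000; i++) {
            bank += CHARGE_PER_INSERT;
            if (size == capacity) {
                bank -= size;       // pay 1 credit per element copied during the resize
                capacity *= 2;
            }
            bank -= 1;              // pay 1 credit for writing the new element
            size++;
            if (bank < 0) {
                throw new IllegalStateException("Bank went negative at insert " + i);
            }
        }
        System.out.println("All inserts paid for; leftover credits: " + bank);
    }
}
```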
3. Potential Method: we define a potential function (Φ) that represents “stored energy” in the data structure.
When operations happen: cheap operations increase the potential (they store up energy), while expensive operations release that stored energy to pay for their extra work, so the potential drops.
Formula:
Amortized cost of an operation = actual cost of the operation + Φ(after the operation) − Φ(before the operation)
This method is more formal and used in advanced analysis (e.g., Union-Find).
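For the dynamic array, a commonly used potential function (an assumed choice here, not one fixed by the text above) is Φ = 2·size − capacity. A quick check shows that it makes every insert cost a constant amortized amount:

```latex
% Assumed potential function for the doubling dynamic array:
\[ \Phi(D) \;=\; 2 \cdot \mathrm{size}(D) \;-\; \mathrm{capacity}(D) \]

% Cheap insert (no resize): actual cost 1, size grows by 1, capacity unchanged,
% so the potential rises by 2:
\[ \hat{c} \;=\; 1 + \Delta\Phi \;=\; 1 + 2 \;=\; 3 \]

% Insert that triggers a resize (size: k -> k+1, capacity: k -> 2k, actual cost k+1):
\[ \hat{c} \;=\; (k+1)
   + \underbrace{\big(2(k+1) - 2k\big)}_{\Phi_{\text{after}}}
   - \underbrace{\big(2k - k\big)}_{\Phi_{\text{before}}}
   \;=\; (k+1) + 2 - k \;=\; 3 \]
```

Both cases give an amortized cost of 3 = O(1), matching the results of the aggregate and accounting methods.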
So far, we’ve focused on analyzing algorithms that perform repeated operations (sometimes cheap, sometimes costly) and we learned how amortized analysis helps us find the true average cost over a long sequence.
But what about recursive algorithms, where a problem repeatedly breaks itself into smaller subproblems, as in Merge Sort, Binary Search, or Quick Sort?
In those cases, the running time of the algorithm depends on the time taken by its smaller instances, forming a mathematical relationship between them.
To analyze such recursive behavior, we need a new tool — the Recurrence Relation.
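As a small preview, here is what such a relationship looks like for Merge Sort (one of the algorithms named above), written as the well-known recurrence for its running time:

```latex
% Merge Sort: solve two halves recursively, then merge them in linear time
\[ T(n) \;=\; 2\,T\!\left(\tfrac{n}{2}\right) + O(n), \qquad T(1) = O(1) \]
% Solving this recurrence (the subject of the next chapter) gives T(n) = O(n \log n).
```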
In the next chapter, we’ll learn how to express an algorithm’s time complexity as a recurrence and solve it to uncover its Big-O performance.