
Amortized Analysis

Ashish Pratap Singh

So far, we've analyzed algorithms using Worst-Case Analysis. We look at a single operation and ask, "What is the absolute most work this one operation could ever do?"

This is the Big O we've been calculating, and it's a vital tool.

But sometimes, worst-case analysis can be too pessimistic. It can make a fast algorithm look slow.

If you’ve ever worked with a dynamic array (like ArrayList in Java or vector in C++), you might’ve noticed something strange:

  • Most of the time, adding an element is instant (O(1)).
  • But sometimes, it suddenly takes longer, when the array resizes itself.

So what’s the real cost of inserting into a dynamic array? Is it O(1)? Or is it O(n)?

The answer lies in a powerful idea called Amortized Analysis.

The Problem That Amortized Analysis Solves

In many algorithms, some operations are cheap and others are expensive, but the expensive ones don’t happen often.

If we only analyze the worst case per operation, it looks bad. But if we look at the average cost per operation over a sequence, the picture changes.

That’s what Amortized Analysis does:

It gives a more realistic view of an algorithm’s performance by spreading the total cost of a sequence of operations evenly across all of them.

Think of it like a $130 monthly transit pass. Was the one ride you took on the day you bought it absurdly expensive ($130), while every other ride that month was free? Neither answer tells the whole story. The most useful answer is the average cost per ride over the whole month. If you take 50 rides, the cost is $130 / 50 = $2.60 per ride. This is the amortized cost.

It's a guarantee: over a long sequence, the average performance will be X, even if a single operation is occasionally much worse.

Amortized vs Average vs Worst Case

Before diving deeper, let’s clarify the difference:

  • Worst Case: the maximum time any single operation can take. Depends on the most difficult input. Example: sorting a reverse-sorted array using Bubble Sort.
  • Average Case: the expected time over all inputs. Depends on the probability distribution of inputs. Example: searching in a random array.
  • Amortized Case: the average time over a sequence of operations. Depends on the behavior of the algorithm, not on the inputs. Example: resizing a dynamic array.

Amortized analysis doesn’t rely on input randomness. It guarantees that even though some operations are costly, the average cost per operation stays low.

Example — Dynamic Array Resizing

Let’s take the example of inserting elements into a dynamic array (like Java’s ArrayList).

An ArrayList uses a regular, fixed-size array internally. When you create it, it might have a default capacity, say, 4.

[ - | - | - | - ]   (size=0, capacity=4)

add(10)

[ 10 | - | - | - ]   (size=1, capacity=4)

add(20)

[ 10 | 20 | - | - ]   (size=2, capacity=4)

add(30)

[ 10 | 20 | 30 | - ]   (size=3, capacity=4)

add(40)

[ 10 | 20 | 30 | 40 ]   (size=4, capacity=4)

Now, the internal array is full. What happens when we call add(50)?

The Expensive Operation:

  1. A new, larger array is allocated, typically double the size (capacity = 8).
  2. All the old elements (10, 20, 30, 40) are copied from the old array to the new one.
  3. The new element (50) is added at the end.
  4. The old array is discarded.

[ 10 | 20 | 30 | 40 | 50 | - | - | - ]   (size=5, capacity=8)

Cost: 4 copies + 1 write = 5 operations.

This one operation was O(N), where N is the current size of the list. The next few adds, however, will be cheap again:

  • add(60): Cost: 1 write.
  • add(70): Cost: 1 write.
  • add(80): Cost: 1 write.

Then we'll have another expensive O(N) resize.
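
To make these mechanics concrete, here is a minimal sketch of a doubling dynamic array in Java. The class name SimpleDynamicArray and the starting capacity of 4 are just for illustration; this is not how java.util.ArrayList is implemented internally, but it captures the same cheap-write / occasional-copy behavior.

public class SimpleDynamicArray {
    private int[] data = new int[4]; // starting capacity of 4, matching the example above
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            resize();             // the occasional expensive O(n) step
        }
        data[size++] = value;     // the usual cheap O(1) step
    }

    private void resize() {
        int[] bigger = new int[data.length * 2];    // allocate a new array with double the capacity
        System.arraycopy(data, 0, bigger, 0, size); // copy all existing elements: O(n)
        data = bigger;                              // the old array is discarded
    }

    public int size()     { return size; }
    public int capacity() { return data.length; }
}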

If we perform n insertions, how much total work is done?

Most insertions cost 1 unit (O(1)), but occasionally we pay extra when the array resizes.

Let’s add them up. Every insertion performs 1 write, for n units in total. The resize copies form a doubling series (4 + 8 + 16 + ... up to about n elements), which sums to less than 2n.

Total cost ≤ n (for simple inserts) + 2n (for copies) = O(3n) = O(n)

So, n insertions cost O(n) total, meaning the average cost per insertion = O(1).

Amortized cost per operation = O(1)

Types of Amortized Analysis

There are three main ways to perform amortized analysis mathematically:

(a) Aggregate Method

Compute the total cost of all operations and divide by n.

Example:

For dynamic array insertions: Total cost = O(n) for n operations → amortized O(1) per operation.
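
To see the aggregate method in action, here is a small simulation (purely illustrative, using the same starting capacity of 4 and doubling rule as above) that counts every element write and copy across n insertions and divides by n:

public class AggregateDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        long totalWork = 0;  // each element write or copy counts as 1 unit
        int capacity = 4;
        int size = 0;

        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                totalWork += size;  // resize: copy all existing elements
                capacity *= 2;
            }
            totalWork += 1;         // write the new element
            size++;
        }

        System.out.println("Total work: " + totalWork);
        System.out.println("Average per insertion: " + (double) totalWork / n);
    }
}

For one million insertions this prints an average of roughly 2 units per insertion, matching the O(1) amortized bound.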

(b) Accounting Method (a.k.a. Banker's Method)

Assign credits (or tokens) to operations.

  • Cheap operations “save” credits.
  • Expensive operations “spend” saved credits.

Example:

  • Charge each insertion a fixed 3 units.
    • 1 for the actual insert.
    • 2 saved for future resizing.
  • When resizing occurs, the saved credits pay for the copying.

Thus, the average charge per operation is constant → O(1).
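
As a quick sanity check on this charging scheme, the simulation below mirrors the bank: every insertion deposits 3 credits, the write itself costs 1, and each resize spends size credits to pay for the copies. The constants match the example above; the point is simply that the balance never goes negative.

public class AccountingDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        long bank = 0;      // credits saved by cheap operations
        int capacity = 4;
        int size = 0;

        for (int i = 0; i < n; i++) {
            bank += 3;              // charge every insertion a fixed 3 credits
            if (size == capacity) {
                bank -= size;       // spend saved credits to copy 'size' elements
                capacity *= 2;
            }
            bank -= 1;              // pay for the actual write
            size++;

            if (bank < 0) {
                throw new IllegalStateException("Bank went negative: the charge is too low");
            }
        }
        System.out.println("Final credit balance: " + bank); // stays non-negative throughout
    }
}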

(c) Potential Method

We define a potential function (Φ) that represents “stored energy” in the system.

When operations happen:

  • Each operation pays its actual cost, and the operation may increase or decrease the potential.
  • We add the change in potential to the actual cost to get the amortized cost.

Formula:

Amortized cost = Actual cost + Φ(after) − Φ(before)

This method is more formal and used in advanced analysis (e.g., Union-Find).
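
As a rough sketch of how this plays out for the dynamic array, one common choice of potential is Φ = 2 × size − capacity (this assumes the array always stays at least half full, so Φ never goes negative). Then:

  • A cheap insertion has actual cost 1, and Φ rises by 2 (size grows by 1, capacity is unchanged), so its amortized cost is 1 + 2 = 3.
  • A resizing insertion when size = capacity = k has actual cost k + 1 (k copies plus 1 write), while Φ falls from 2k − k = k to 2(k + 1) − 2k = 2. Its amortized cost is (k + 1) + (2 − k) = 3.

Either way, the amortized cost per insertion is a constant 3 = O(1), matching the aggregate and accounting results.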

So far, we’ve focused on analyzing algorithms that perform repeated operations (sometimes cheap, sometimes costly) and we learned how amortized analysis helps us find the true average cost over a long sequence.

But what about recursive algorithms, where a problem breaks itself into smaller subproblems again and again, like Merge Sort, Binary Search, or Quick Sort?

In those cases, the running time of the algorithm depends on the time taken by its smaller instances, forming a mathematical relationship between them.

To analyze such recursive behavior, we need a new tool — the Recurrence Relation.

In the next chapter, we’ll learn how to express an algorithm’s time complexity as a recurrence and solve it to uncover its Big-O performance.