
Min Cost Climbing Stairs

Last Updated: March 28, 2026

easy
4 min read

Understanding the Problem

We have a staircase where each step has a cost. We want to reach "the top," which is one step beyond the last element in the array. At each step, we pay the cost at that step and then choose to climb either one or two steps forward. We can begin at step 0 or step 1.

The question is: what sequence of steps gives us the minimum total cost to reach the top?

This is a classic dynamic programming problem because each decision (climb one or climb two) affects future options, and the optimal choice at each step depends on what comes next. The key observation is that to arrive at any step, you must have come from either one step back or two steps back, and the cheapest way to reach a step is the cheaper of those two paths.

Key Constraints:

  • 2 <= cost.length <= 1000 --> With n up to 1000, an O(2^n) brute force would be astronomically slow, while O(n^2) would pass and O(n) is ideal.
  • 0 <= cost[i] <= 999 --> Costs are non-negative, so we never benefit from taking extra steps. A greedy "skip expensive steps" intuition is not sufficient because skipping one expensive step might force you to land on another.

Approach 1: Recursion (Brute Force)

Intuition

The most natural starting point is to think recursively. We want to reach the top (position n, one past the last step). From any step i, we can jump to i+1 or i+2 after paying cost[i]. So the minimum cost starting from step i is cost[i] plus the cheaper of the minimum cost from step i+1 and the minimum cost from step i+2.

Since we can start at step 0 or step 1, the answer is the minimum of starting from either.

This approach works, but it is extremely slow. Every call branches into two recursive calls, and many subproblems are solved over and over again.

Algorithm

  1. Define a recursive function minCost(i) that returns the minimum cost to reach the top starting from step i.
  2. Base cases: if i >= n (where n is the length of cost), return 0 because we have already reached or passed the top.
  3. Recursive case: return cost[i] + min(minCost(i + 1), minCost(i + 2)).
  4. The answer is min(minCost(0), minCost(1)).

Example Walkthrough

With cost = [10, 15, 20], we try minCost(0) and minCost(1) and pick the cheaper:

  • minCost(2) = 20 + min(minCost(3), minCost(4)) = 20 + 0 = 20
  • minCost(1) = 15 + min(minCost(2), minCost(3)) = 15 + min(20, 0) = 15
  • minCost(0) = 10 + min(minCost(1), minCost(2)) = 10 + min(15, 20) = 25
  • Answer: min(minCost(0), minCost(1)) = min(25, 15) = 15

Starting at step 1 and jumping two steps straight to the top costs just 15.

Code
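
A minimal Python sketch of this brute-force recursion (the function and helper names are illustrative):

```python
def minCostClimbingStairs(cost):
    n = len(cost)

    def min_cost(i):
        # Base case: at or past the top, nothing more to pay.
        if i >= n:
            return 0
        # Pay cost[i], then take the cheaper of the two jumps.
        return cost[i] + min(min_cost(i + 1), min_cost(i + 2))

    # We may start at step 0 or step 1.
    return min(min_cost(0), min_cost(1))
```

Every call branches into two more calls, so this runs in O(2^n) time and is only viable for small inputs.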

The bottleneck here is that minCost(i) gets called with the same value of i many times. What if we stored the result the first time we computed it, so we never had to compute it again?

Approach 2: Memoized Recursion (Top-Down DP)

Intuition

The recursive solution computes the same subproblems repeatedly. The fix is memoization: store the result of minCost(i) in a cache the first time we compute it, and return the cached value on all subsequent calls. This turns our exponential recursion into a linear one, because each of the n subproblems is solved exactly once.

The logic stays identical. We just add a lookup table that says "if we have already computed the answer for step i, return it immediately instead of recursing."

Algorithm

  1. Create a memo array of size n, initialized to -1 (meaning "not yet computed").
  2. Define minCost(i) the same as before, but before recursing, check if memo[i] is already filled. If so, return it directly.
  3. After computing the result for step i, store it in memo[i] before returning.
  4. The answer is min(minCost(0), minCost(1)).

Example Walkthrough

With cost = [1, 100, 1, 1, 1, 100, 1, 1, 100, 1], the call minCost(0) recurses down to the base cases first, and each result is stored in memo[i] on the way back up. When minCost(1) is evaluated afterwards, its subproblems minCost(2) and minCost(3) are already cached and return immediately. The final answer is 6, achieved by the path 0 → 2 → 4 → 6 → 7 → 9 → top.

Code
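
A Python sketch of the memoized version (names are illustrative):

```python
def minCostClimbingStairs(cost):
    n = len(cost)
    memo = [-1] * n  # -1 means "not yet computed"

    def min_cost(i):
        # Base case: at or past the top, nothing more to pay.
        if i >= n:
            return 0
        # Return the cached result if this subproblem is already solved.
        if memo[i] != -1:
            return memo[i]
        memo[i] = cost[i] + min(min_cost(i + 1), min_cost(i + 2))
        return memo[i]

    return min(min_cost(0), min_cost(1))
```

Each of the n subproblems is now solved exactly once, for O(n) time and O(n) space (memo plus recursion stack).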

Memoization gives us O(n) time, but the recursion stack still uses O(n) space. Can we flip the direction and build the solution bottom-up, eliminating the recursion entirely?

Approach 3: Bottom-Up DP with Space Optimization

Intuition

Instead of thinking "what is the cheapest way to reach the top from step i?" (top-down), let's think "what is the cheapest way to reach step i from the bottom?" (bottom-up).

Define dp[i] as the minimum cost to reach step i. To arrive at step i, we must have come from either step i-1 (paying cost[i-1]) or step i-2 (paying cost[i-2]). So dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).

The base cases are dp[0] = 0 and dp[1] = 0, because we can start at either step for free. The answer is dp[n]. But since dp[i] only depends on the two preceding values, we can collapse the entire table into two rolling variables.
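
Before collapsing to two variables, the recurrence can be sketched with a full dp table (a hypothetical helper, for illustration only):

```python
def min_cost_table(cost):
    # Full O(n)-space table version of the recurrence, before the
    # space optimization collapses it to two rolling variables.
    n = len(cost)
    dp = [0] * (n + 1)  # dp[0] = dp[1] = 0: we may start at either step for free
    for i in range(2, n + 1):
        # Arrive at step i from i-1 or i-2, paying the cost of the step we leave.
        dp[i] = min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2])
    return dp[n]
```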

Algorithm

  1. Initialize prev2 = 0 (cost to reach step 0) and prev1 = 0 (cost to reach step 1).
  2. For each step i from 2 to n:
    • Compute current = min(prev1 + cost[i-1], prev2 + cost[i-2]).
    • Shift: prev2 = prev1, prev1 = current.
  3. Return prev1, which now holds dp[n].

Example Walkthrough

With cost = [1, 100, 1, 1, 1, 100, 1, 1, 100, 1], initialize prev2 = 0 (dp[0]) and prev1 = 0 (dp[1]), then iterate:

  • i = 2: current = min(0 + 100, 0 + 1) = 1
  • i = 3: current = min(1 + 1, 0 + 100) = 2
  • i = 4: current = min(2 + 1, 1 + 1) = 2
  • i = 5: current = min(2 + 1, 2 + 1) = 3
  • i = 6: current = min(3 + 100, 2 + 1) = 3
  • i = 7: current = min(3 + 1, 3 + 100) = 4
  • i = 8: current = min(4 + 1, 3 + 1) = 4
  • i = 9: current = min(4 + 100, 4 + 1) = 5
  • i = 10: current = min(5 + 1, 4 + 100) = 6

Return prev1 = 6.

Code
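
A Python sketch of the space-optimized bottom-up approach (names are illustrative):

```python
def minCostClimbingStairs(cost):
    n = len(cost)
    prev2, prev1 = 0, 0  # dp[0] and dp[1]: starting at either step is free

    for i in range(2, n + 1):
        # Arrive at step i from i-1 or i-2, paying the cost of the step we leave.
        current = min(prev1 + cost[i - 1], prev2 + cost[i - 2])
        prev2, prev1 = prev1, current

    return prev1  # dp[n], the minimum cost to reach the top
```

This runs in O(n) time with O(1) extra space, since only the two most recent dp values are ever kept.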