Last Updated: March 28, 2026
We have a staircase where each step has a cost. We want to reach "the top," which is one step beyond the last element in the array. At each step, we pay the cost at that step and then choose to climb either one or two steps forward. We can begin at step 0 or step 1.
The question is: what sequence of steps gives us the minimum total cost to reach the top?
This is a classic dynamic programming problem because each decision (climb one or climb two) affects future options, and the optimal choice at each step depends on what comes next. The key observation is that to arrive at any step, you must have come from either one step back or two steps back, and the cheapest way to reach a step is the cheaper of those two paths.
Two constraints shape the solution:

- 2 <= cost.length <= 1000 — with n up to 1000, an O(2^n) brute force would be astronomically slow. O(n^2) would pass, but O(n) is ideal.
- 0 <= cost[i] <= 999 — costs are non-negative, so we never benefit from taking extra steps. A greedy "skip expensive steps" intuition is not sufficient, because skipping one expensive step might force you to land on another.

The most natural starting point is to think recursively. We want to reach the top (position n, one past the last step). From any step i, we can jump to i+1 or i+2 after paying cost[i]. So the minimum cost starting from step i is cost[i] plus the cheaper of the minimum cost from step i+1 and the minimum cost from step i+2.
Since we can start at step 0 or step 1, the answer is the minimum of starting from either.
This approach works, but it is exponentially slow. Every call branches into two recursive calls, and many subproblems are solved over and over again.
The brute-force recursion:

- Define a function minCost(i) that returns the minimum cost to reach the top starting from step i.
- Base case: if i >= n (where n is the length of cost), return 0 because we have already reached or passed the top.
- Otherwise, return cost[i] + min(minCost(i + 1), minCost(i + 2)).
- The answer is min(minCost(0), minCost(1)).

The bottleneck here is that minCost(i) gets called with the same value of i many times. What if we stored the result the first time we computed it, so we never had to compute it again?
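The steps above can be sketched in Python as follows (the function names are illustrative, not from a reference solution):

```python
from typing import List

def min_cost_climbing_stairs_brute(cost: List[int]) -> int:
    n = len(cost)

    def min_cost(i: int) -> int:
        # Past the last step: we have reached the top, no extra cost.
        if i >= n:
            return 0
        # Pay cost[i], then take the cheaper of a 1-step or 2-step jump.
        return cost[i] + min(min_cost(i + 1), min_cost(i + 2))

    # We may start at step 0 or step 1; take the cheaper starting point.
    return min(min_cost(0), min_cost(1))

print(min_cost_climbing_stairs_brute([10, 15, 20]))  # 15
```

On [10, 15, 20] the best plan is to start at step 1, pay 15, and jump two steps to the top. This version recomputes the same subproblems exponentially often, which is exactly the slowness described above.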
The recursive solution computes the same subproblems repeatedly. The fix is memoization: store the result of minCost(i) in a cache the first time we compute it, and return the cached value on all subsequent calls. This turns our exponential recursion into a linear one, because each of the n subproblems is solved exactly once.
The logic stays identical. We just add a lookup table that says "if we have already computed the answer for step i, return it immediately instead of recursing."
- Create a memo array of size n, initialized to -1 (meaning "not yet computed").
- Define minCost(i) the same as before, but before recursing, check whether memo[i] is already filled. If so, return it directly.
- After computing the result for step i, store it in memo[i] before returning.
- The answer is still min(minCost(0), minCost(1)).

Memoization gives us O(n) time, but the recursion stack still uses O(n) space. Can we flip the direction and build the solution bottom-up, eliminating the recursion entirely?
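A sketch of the memoized version in Python (same recursion as before, plus the cache; names are illustrative):

```python
from typing import List

def min_cost_climbing_stairs_memo(cost: List[int]) -> int:
    n = len(cost)
    memo = [-1] * n  # -1 means "not yet computed"

    def min_cost(i: int) -> int:
        if i >= n:
            return 0
        if memo[i] != -1:
            # Subproblem already solved: return the cached answer.
            return memo[i]
        memo[i] = cost[i] + min(min_cost(i + 1), min_cost(i + 2))
        return memo[i]

    return min(min_cost(0), min_cost(1))
```

Each of the n subproblems is now computed at most once, so the total work is O(n). (In Python, decorating the inner function with functools.lru_cache achieves the same effect without an explicit array.)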
Instead of thinking "what is the cheapest way to reach the top from step i?" (top-down), let's think "what is the cheapest way to reach step i from the bottom?" (bottom-up).
Define dp[i] as the minimum cost to reach step i. To arrive at step i, we must have come from either step i-1 (paying cost[i-1]) or step i-2 (paying cost[i-2]). So dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).
The base cases are dp[0] = 0 and dp[1] = 0, because we can start at either step for free. The answer is dp[n].
The recurrence dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]) captures the core decision: to reach step i, you either stepped from i-1 (paying cost[i-1]) or jumped from i-2 (paying cost[i-2]), and the minimum of these two options is the cheapest way to reach step i. Since each dp[i] depends only on the two preceding values, we can collapse the entire DP table into two rolling variables.
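Before collapsing the table, the full-array form of this bottom-up recurrence can be sketched as (illustrative names):

```python
from typing import List

def min_cost_climbing_stairs_table(cost: List[int]) -> int:
    n = len(cost)
    # dp[i] = minimum cost to reach step i; the top is position n.
    dp = [0] * (n + 1)  # dp[0] = dp[1] = 0: we start at step 0 or 1 for free
    for i in range(2, n + 1):
        dp[i] = min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2])
    return dp[n]
```

This already runs in O(n) time with no recursion; the only remaining cost is the O(n) array, which the rolling-variable version removes.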
This is the same idea as computing Fibonacci numbers with constant space. The Fibonacci recurrence looks at the two previous values, and so does this one, just with an added cost at each step.
- Initialize prev2 = 0 (cost to reach step 0) and prev1 = 0 (cost to reach step 1).
- For each i from 2 to n:
  - Compute current = min(prev1 + cost[i-1], prev2 + cost[i-2]).
  - Shift the window: prev2 = prev1, prev1 = current.
- Return prev1, which now holds dp[n].

This uses O(1) extra space (just prev1 and prev2) regardless of input size. No recursion stack, no DP array.
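The final rolling-variable version in Python, following the steps above (a sketch; the function name is illustrative):

```python
from typing import List

def min_cost_climbing_stairs(cost: List[int]) -> int:
    # prev2 and prev1 hold dp[i-2] and dp[i-1]; both start at 0
    # because we may begin at step 0 or step 1 for free.
    prev2, prev1 = 0, 0
    for i in range(2, len(cost) + 1):
        current = min(prev1 + cost[i - 1], prev2 + cost[i - 2])
        prev2, prev1 = prev1, current  # shift the two-value window
    return prev1  # dp[n]: minimum cost to reach the top

print(min_cost_climbing_stairs([1, 100, 1, 1, 1, 100, 1, 1, 100, 1]))  # 6
```

O(n) time, O(1) space: the same recurrence as the table version, keeping only the two values it actually reads.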