Last Updated: March 28, 2026
We need to count the total number of distinct ways to reach step n, starting from step 0. At each position, we have two choices: take one step or take two steps. The order matters, so "1 step then 2 steps" is different from "2 steps then 1 step."
If you think about it from the top down, to arrive at step n, you must have come from either step n-1 (by taking 1 step) or step n-2 (by taking 2 steps). So the number of ways to reach step n is the sum of the ways to reach step n-1 and step n-2. This is exactly the Fibonacci sequence, just shifted by one index.
The key observation is: ways(n) = ways(n-1) + ways(n-2), with base cases ways(1) = 1 and ways(2) = 2.
1 <= n <= 45 → With n at most 45, a brute-force recursive solution without memoization would mean roughly 2^45 operations (over 35 trillion), which is far too slow. We need O(n) or better.

The most natural approach is to translate the problem directly into a recursive function. To reach step n, we could have arrived from step n-1 or step n-2, so the total ways to reach step n is the sum of the ways to reach those two earlier steps.
The base cases are straightforward: there is exactly 1 way to reach step 1 (take one step) and 2 ways to reach step 2 (either two single steps, or one double step).
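Putting the recurrence and base cases together, the brute-force recursion can be sketched like this (the function name climbStairs follows the text; everything else is a minimal illustration):

```python
def climbStairs(n: int) -> int:
    # Base cases: 1 way to reach step 1, 2 ways to reach step 2.
    if n == 1:
        return 1
    if n == 2:
        return 2
    # To reach step n, we arrived from step n-1 or step n-2.
    return climbStairs(n - 1) + climbStairs(n - 2)

print(climbStairs(5))  # → 8
```

This is a direct transcription of the recurrence, with no caching yet.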
This works correctly but has a serious flaw. It recalculates the same subproblems many times. Computing climbStairs(5) requires climbStairs(4) and climbStairs(3). But climbStairs(4) itself calls climbStairs(3) again. As n grows, this tree of recursive calls expands exponentially.
Input: n = 5
The recursion unfolds as a tree. climbStairs(5) calls climbStairs(4) and climbStairs(3). Each of those branches further until hitting the base cases. The answer bubbles up: climbStairs(3) = 3, climbStairs(4) = 5, and finally climbStairs(5) = 8.
This works but is exponentially slow because it recalculates the same subproblems over and over. What if we stored the result of each subproblem the first time we computed it?
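To see the redundancy concretely, we can instrument the brute-force recursion to count how many times each subproblem is computed (the calls counter is purely for illustration):

```python
from collections import Counter

calls = Counter()

def climbStairs(n: int) -> int:
    calls[n] += 1  # record every time this subproblem is computed
    if n <= 2:
        return n   # ways(1) = 1, ways(2) = 2
    return climbStairs(n - 1) + climbStairs(n - 2)

climbStairs(10)
print(calls[10])  # → 1   (the top-level call)
print(calls[3])   # → 21  (the same small subproblem, solved 21 times)
```

Even at n = 10, climbStairs(3) is recomputed 21 times; each of those recomputations is wasted work that a cache would avoid.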
The fix for the brute force recursion is simple: cache the result of every subproblem. Before computing climbStairs(k), check if we already have the answer stored. If yes, return it immediately. If not, compute it, store it, and then return.
This technique is called memoization, and it transforms our exponential recursion into a linear one. Each subproblem from 1 to n is solved exactly once, and subsequent calls just look up the cached value in O(1).
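A memoized version can be sketched with a plain dictionary as the cache (the memo name and inner helper are illustrative choices, not prescribed by the text):

```python
def climbStairs(n: int) -> int:
    memo = {}

    def ways(k: int) -> int:
        if k <= 2:
            return k          # ways(1) = 1, ways(2) = 2
        if k in memo:         # already solved: answer in O(1)
            return memo[k]
        memo[k] = ways(k - 1) + ways(k - 2)  # solve once, then cache
        return memo[k]

    return ways(n)

print(climbStairs(45))  # → 1836311903
```

Each subproblem from 3 to n is computed exactly once, so the whole call finishes in O(n) time even at the constraint's upper bound.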
Memoization works because it eliminates the core inefficiency of the brute force approach: redundant computation. There are only n unique subproblems (from step 1 to step n), and each one is solved exactly once. Every subsequent request for the same subproblem is answered in O(1) via the cache.
This uses O(n) space for both the memo array and the call stack. But at any point, we only need the previous two values. What if we computed the answer iteratively, keeping just two variables?
Since ways(n) = ways(n-1) + ways(n-2), we can compute the answer iteratively starting from the base cases and working our way up. At each step, we only need the previous two values. So instead of maintaining an array of size n, we just keep two variables and update them as we go.
This is the same relationship as the Fibonacci sequence (shifted by one): ways(1) = 1, ways(2) = 2, ways(3) = 3, ways(4) = 5, ways(5) = 8, ... With the standard convention fib(1) = fib(2) = 1, the climbing stairs sequence is just fib(n+1).
The bottom-up approach computes the same recurrence as the recursive solution, but in forward order. Starting from the known base cases (step 1 and step 2), we build up to step n one at a time. Since each step only depends on the two immediately preceding steps, we only need two variables. This is the classic space optimization for 1D dynamic programming problems where the recurrence only looks back a fixed number of steps.
1. Initialize prev1 = 2 (ways to reach step 2) and prev2 = 1 (ways to reach step 1).
2. For each step from 3 to n, compute current = prev1 + prev2.
3. Shift the window: prev2 = prev1, then prev1 = current.
4. When the loop finishes, prev1 holds the answer.
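The update sequence above can be sketched as a final O(n) time, O(1) space solution (variable names prev1, prev2, and current follow the text):

```python
def climbStairs(n: int) -> int:
    # Base cases: 1 way to reach step 1, 2 ways to reach step 2.
    if n <= 2:
        return n
    prev2, prev1 = 1, 2  # ways(1) and ways(2)
    for _ in range(3, n + 1):
        current = prev1 + prev2        # ways(k) = ways(k-1) + ways(k-2)
        prev2, prev1 = prev1, current  # slide the two-value window forward
    return prev1

print(climbStairs(5))  # → 8
```

Replacing the memo array with two variables is safe precisely because the recurrence never looks back more than two steps.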