Last Updated: March 28, 2026
This is one of the classic problems in computer science. The definition itself gives you the answer: each Fibonacci number is the sum of the two before it. The challenge is not figuring out what to compute, but figuring out how to compute it efficiently.
A naive recursive implementation directly mirrors the mathematical definition, but it recalculates the same subproblems over and over. For example, computing F(5) requires F(4) and F(3), but computing F(4) also requires F(3). That F(3) gets computed twice. As n grows, this redundancy explodes exponentially.
The real question is: how do we avoid recomputing the same values? This is where dynamic programming enters the picture, and Fibonacci is the textbook introduction to it.
0 <= n <= 30 → With n at most 30, even an O(2^n) brute-force solution would mean about a billion operations for n = 30, which is borderline. An O(n) iterative or memoized solution is trivially fast. The small input size means any reasonable approach will pass.

The most direct approach is to translate the mathematical definition into code. F(n) calls F(n-1) and F(n-2), each of which makes its own recursive calls, and so on until we hit the base cases F(0) = 0 and F(1) = 1.
This works, but it's wildly inefficient. The recursion tree branches into two at every level, and many subproblems get solved repeatedly. For example, computing F(5) computes F(3) twice, F(2) three times, and F(1) five times. The total number of function calls grows exponentially with n.
1. If n <= 1, return n (base cases: F(0) = 0, F(1) = 1).
2. Otherwise, return fib(n - 1) + fib(n - 2).

Let's trace through n = 4: fib(4) calls fib(3) and fib(2); fib(3) calls fib(2) and fib(1); each fib(2) calls fib(1) and fib(0). In total, fib(2) is computed twice and fib(1) three times, across 9 function calls, to produce F(4) = 3.
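The steps above translate almost directly into code. Here is a minimal sketch in Python (the function name is illustrative):

```python
def fib(n: int) -> int:
    """Naive recursion: mirrors the mathematical definition, O(2^n) time."""
    if n <= 1:  # base cases: F(0) = 0, F(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(4))  # 3
```

Correct, but the recursion tree branches in two at every call, which is exactly the redundancy discussed next.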
The exponential time comes from recomputing the same subproblems. For F(30), the naive approach makes over 2 million function calls, but there are only 31 unique subproblems.
What if we stored the result of each F(k) the first time we compute it, so we never compute it again?
The key observation is that the recursive approach solves only 31 unique subproblems for n=30, but it solves them millions of times. If we cache the result of each F(k) in an array, we can look it up instantly on subsequent calls instead of recomputing the entire subtree.
This technique is called memoization. It keeps the top-down recursive structure but eliminates redundant computation. The first time we compute F(k), we store the result. Every subsequent call to F(k) returns the cached value in O(1).
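A memoized version might look like the following sketch, using a -1-filled array as the cache (the helper name is illustrative):

```python
def fib_memo(n: int) -> int:
    """Top-down memoization: each F(k) is computed at most once, O(n) time."""
    memo = [-1] * (n + 1)  # -1 marks "not yet computed"

    def helper(k: int) -> int:
        if k <= 1:           # base cases: F(0) = 0, F(1) = 1
            return k
        if memo[k] != -1:    # cache hit: return stored value in O(1)
            return memo[k]
        memo[k] = helper(k - 1) + helper(k - 2)
        return memo[k]

    return helper(n)
```

The recursive structure is unchanged; only the redundant subtrees disappear, so fib_memo(30) makes on the order of 30 real computations instead of millions.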
1. If n <= 1, return n.
2. If memo[n] is already computed (not -1), return it.
3. Otherwise, compute fib(n - 1) + fib(n - 2), store it in memo[n], and return it.

At any point during the computation, we only need the two most recent Fibonacci numbers to compute the next one. Storing all n + 1 values is unnecessary.
What if we computed Fibonacci numbers from the bottom up, keeping only the last two values at any time?
Instead of working top-down with recursion, we can work bottom-up. Start from the base cases F(0) = 0 and F(1) = 1, then iteratively compute F(2), F(3), ..., F(n). At each step, we only need the previous two values, so two variables are enough.
This approach eliminates both the recursion overhead and the memo array. It's the cleanest and most efficient solution.
1. If n <= 1, return n.
2. Initialize prev2 = 0 (F(0)) and prev1 = 1 (F(1)).
3. For i = 2 to n: compute current = prev1 + prev2, then set prev2 = prev1 and prev1 = current.
4. Return prev1.