
Fibonacci Number

Last Updated: March 28, 2026

Easy

Understanding the Problem

This is one of the most classic problems in computer science. The definition itself gives you the answer: each Fibonacci number is the sum of the two before it. The challenge is not figuring out what to compute, but figuring out how to compute it efficiently.

A naive recursive implementation directly mirrors the mathematical definition, but it recalculates the same subproblems over and over. For example, computing F(5) requires F(4) and F(3), but computing F(4) also requires F(3). That F(3) gets computed twice. As n grows, this redundancy explodes exponentially.

The real question is: how do we avoid recomputing the same values? This is where dynamic programming enters the picture, and Fibonacci is the textbook introduction to it.

Key Constraints:

  • 0 <= n <= 30 → With n at most 30, even the exponential brute force is fine in practice: the naive recursion is loosely O(2^n), but the actual number of calls grows like φ^n (φ ≈ 1.618), roughly 2.7 million calls for n = 30, which runs in well under a second. Memoization and the iterative approach are both O(n) and trivially fast. The small input size means any reasonable approach will pass.

Approach 1: Recursive (Brute Force)

Intuition

The most direct approach is to translate the mathematical definition into code. F(n) calls F(n-1) and F(n-2), each of which makes their own recursive calls, and so on until we hit the base cases F(0) = 0 and F(1) = 1.

This works, but it's wildly inefficient. The recursion tree branches into two at every level, and many subproblems get solved repeatedly. For example, computing F(5) computes F(3) twice, F(2) three times, and F(1) five times. The total number of function calls grows exponentially with n.

Algorithm

  1. If n <= 1, return n (base cases: F(0) = 0, F(1) = 1).
  2. Otherwise, return fib(n - 1) + fib(n - 2).

Example Walkthrough

Let's trace through n = 4: fib(4) calls fib(3) and fib(2). fib(3) in turn calls fib(2) and fib(1), so fib(2) is computed twice — once for fib(4) and once for fib(3). Each fib(2) computes fib(1) + fib(0) = 1, giving fib(3) = 1 + 1 = 2 and finally fib(4) = fib(3) + fib(2) = 2 + 1 = 3.

Code
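A direct translation of the two algorithm steps above might look like this in Python (a minimal sketch; the function name fib follows the problem statement):

```python
def fib(n: int) -> int:
    # Base cases: F(0) = 0, F(1) = 1
    if n <= 1:
        return n
    # Recurrence: each value is the sum of the two before it
    return fib(n - 1) + fib(n - 2)
```

Note that fib(n - 1) and fib(n - 2) each spawn their own full recursion tree, which is exactly where the exponential blow-up comes from.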

The exponential time comes from recomputing the same subproblems. For F(30), the naive approach makes over 2 million function calls, but there are only 31 unique subproblems.

What if we stored the result of each F(k) the first time we compute it, so we never compute it again?

Approach 2: Memoization (Top-Down DP)

Intuition

The key observation is that the recursive approach solves only 31 unique subproblems for n=30, but it solves them millions of times. If we cache the result of each F(k) in an array, we can look it up instantly on subsequent calls instead of recomputing the entire subtree.

This technique is called memoization. It keeps the top-down recursive structure but eliminates redundant computation. The first time we compute F(k), we store the result. Every subsequent call to F(k) returns the cached value in O(1).

Algorithm

  1. Create a memo array to store computed results, initialized to -1.
  2. If n <= 1, return n.
  3. If memo[n] is already computed (not -1), return it.
  4. Otherwise, compute fib(n - 1) + fib(n - 2), store it in memo[n], and return it.

Example Walkthrough

Tracing fib(5) with memoization:

  1. Initialize memo = [-1, -1, -1, -1, -1, -1], then call fib(5).
  2. The recursion descends fib(5) → fib(4) → fib(3) → fib(2), which hits the base cases fib(1) = 1 and fib(0) = 0, so memo[2] = 1.
  3. fib(3) = fib(2) + fib(1) = 1 + 1 = 2, so memo[3] = 2.
  4. fib(4) = fib(3) + fib(2) = 2 + 1 = 3; fib(2) is answered from the cache, and memo[4] = 3.
  5. fib(5) = fib(4) + fib(3) = 3 + 2 = 5; fib(3) is answered from the cache, and memo[5] = 5.

Each subproblem is computed exactly once; every repeat request is a cache hit.

Code
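The four steps above can be sketched as follows (Python; the memo array lives in the outer function and an inner helper does the recursion — a common way to structure this, though the original may organize it differently):

```python
def fib(n: int) -> int:
    # memo[k] holds F(k) once computed; -1 means "not computed yet"
    memo = [-1] * (n + 1)

    def solve(k: int) -> int:
        if k <= 1:
            return k                      # base cases: F(0) = 0, F(1) = 1
        if memo[k] != -1:
            return memo[k]                # cache hit: O(1) lookup
        memo[k] = solve(k - 1) + solve(k - 2)
        return memo[k]

    return solve(n)
```

With the cache in place, fib(30) makes only on the order of n recursive calls instead of millions.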

At any point during the computation, we only need the two most recent Fibonacci numbers to compute the next one. Storing all n+1 values is unnecessary.

What if we computed Fibonacci numbers from the bottom up, keeping only the last two values at any time?

Approach 3: Iterative DP (Space Optimized)

Intuition

Instead of working top-down with recursion, we can work bottom-up. Start from the base cases F(0) = 0 and F(1) = 1, then iteratively compute F(2), F(3), ..., F(n). At each step, we only need the previous two values, so two variables are enough.

This approach eliminates both the recursion overhead and the memo array. It's the cleanest and most efficient solution.

Algorithm

  1. Handle base cases: if n <= 1, return n.
  2. Initialize two variables: prev2 = 0 (F(0)) and prev1 = 1 (F(1)).
  3. Loop from i = 2 to n:
    • Compute current = prev1 + prev2.
    • Shift: prev2 = prev1, prev1 = current.
  4. Return prev1.

Example Walkthrough

Tracing n = 5:

  1. Initialize prev2 = 0 (F0), prev1 = 1 (F1).
  2. i = 2: current = 1 + 0 = 1; shift → prev2 = 1, prev1 = 1.
  3. i = 3: current = 1 + 1 = 2; shift → prev2 = 1, prev1 = 2.
  4. i = 4: current = 2 + 1 = 3; shift → prev2 = 2, prev1 = 3.
  5. i = 5: current = 3 + 2 = 5; shift → prev2 = 3, prev1 = 5.
  6. Return prev1 = 5.

Code
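The bottom-up loop described above can be sketched like this (Python; tuple assignment stands in for the explicit current variable and shift step, but the logic is the same):

```python
def fib(n: int) -> int:
    if n <= 1:
        return n                          # base cases: F(0) = 0, F(1) = 1
    prev2, prev1 = 0, 1                   # F(0) and F(1)
    for _ in range(2, n + 1):
        # current = prev1 + prev2, then shift the window forward
        prev2, prev1 = prev1, prev2 + prev1
    return prev1
```

This runs in O(n) time with O(1) extra space — no recursion stack and no memo array.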