Last Updated: March 28, 2026
We have a row of houses, each with some amount of money. We want to maximize the total money we collect, but we cannot rob two houses that are next to each other. So if we rob house i, we must skip house i+1 (and house i-1).
This is not as simple as "take every other house." Consider [2, 1, 1, 100]. If you greedily take house 0 and house 2 (skipping odd indices), you get 2 + 1 = 3. But the optimal is house 0 and house 3, giving 2 + 100 = 102. The decision of whether to rob a house depends on decisions made for earlier houses.
The key observation: at each house, you have exactly two choices. Rob it (and add its value to the best you could do up to two houses back) or skip it (and carry forward the best you could do up to the previous house). This "take or skip with a neighbor constraint" structure is a textbook dynamic programming setup.
- 1 <= nums.length <= 100 --> With n at most 100, even O(n^3) would be fast enough. But n being small does not mean we should brute-force all subsets (2^100 is astronomical). We need something smarter than exponential, though even O(n^2) is overkill here. A simple O(n) DP is perfect.
- 0 <= nums[i] <= 400 --> Values are non-negative. This means skipping a house never helps unless it lets us rob a more valuable neighbor. We never need to worry about negative values making the "skip" choice better by default.

The most natural way to think about this problem is with a recursive decision at each house. Standing at house i, we ask: should we rob it or skip it?
If we rob house i, we earn nums[i] and then solve the subproblem for houses 0 through i-2 (we must skip house i-1). If we skip house i, we solve the subproblem for houses 0 through i-1. We take whichever choice gives more money.
This recursive thinking directly models the problem, but it's slow. The same subproblem gets solved over and over. For example, computing the answer for house 5 requires answers for houses 3 and 4, and house 4 also requires house 3. This overlapping subproblem structure means the recursion tree grows exponentially.
- Define a function rob(i) that returns the maximum money from houses 0 through i.
- Base cases: if i < 0, return 0. If i == 0, return nums[0].
- Recurrence: rob(i) = max(rob(i - 1), nums[i] + rob(i - 2)).
- The answer is rob(n - 1), where n is the length of nums.

The recursion recomputes the same subproblems many times. What if we stored the result of each rob(i) the first time we compute it?
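The recursive decision above might look like this in Python (the function name `rob_bruteforce` is my own; this exponential version is for illustration only):

```python
from typing import List


def rob_bruteforce(nums: List[int]) -> int:
    """Plain recursion: correct but exponential time."""
    def rob(i: int) -> int:
        if i < 0:
            return 0        # no houses left to consider
        if i == 0:
            return nums[0]  # only the first house remains
        # Either skip house i, or rob it and jump back two houses.
        return max(rob(i - 1), nums[i] + rob(i - 2))

    return rob(len(nums) - 1)
```

On the earlier example, `rob_bruteforce([2, 1, 1, 100])` returns 102, matching the hand analysis.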
The brute force recursion has the right structure; it just does too much redundant work. We can fix that by caching results. The first time we compute rob(i), we store the answer. Every subsequent call to rob(i) returns the cached value immediately.
This technique is called memoization, and it transforms our exponential solution into a linear one. Each subproblem rob(0) through rob(n-1) is solved exactly once, and each takes O(1) work (just a comparison and an addition). So the total work drops from O(2^n) to O(n).
Memoization works because this problem has overlapping subproblems: the answer for house i depends on answers for houses i-1 and i-2, and many different call paths lead to the same subproblem. By storing results the first time, we ensure each of the n subproblems is computed exactly once, cutting the exponential tree down to a linear chain.
- Create a memo array of size n, initialized to -1 (meaning "not computed yet").
- Define rob(i) the same as before, but before computing, check if memo[i] is already set. If so, return it.
- Store each result in memo[i] before returning.
- The answer is rob(n - 1).

We're using O(n) space for the memo array plus O(n) for the recursion stack. Since rob(i) only depends on rob(i-1) and rob(i-2), can we fill in the answers iteratively from left to right?
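A memoized sketch of the same recursion, using the -1 sentinel described above (the name `rob_memo` is my own):

```python
from typing import List


def rob_memo(nums: List[int]) -> int:
    """Top-down DP: each subproblem is computed at most once."""
    n = len(nums)
    memo = [-1] * n  # -1 means "not computed yet"

    def rob(i: int) -> int:
        if i < 0:
            return 0
        if i == 0:
            return nums[0]
        if memo[i] != -1:
            return memo[i]  # cache hit: reuse the stored answer
        memo[i] = max(rob(i - 1), nums[i] + rob(i - 2))
        return memo[i]

    return rob(n - 1)
```

The -1 sentinel is safe here because house values are non-negative, so no real answer can ever be -1.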
Instead of starting from the end and recursing backward, we flip the direction. Start from the first house and work forward. For each house i, compute the best we can do considering houses 0 through i. The recurrence is the same: dp[i] = max(dp[i-1], nums[i] + dp[i-2]). But now we fill the table iteratively, which eliminates recursion overhead and makes the computation order explicit.
This is the classic bottom-up DP approach. It's often the version interviewers expect to see because it's clean, easy to analyze, and avoids any risk of stack overflow.
The bottom-up approach works because we process houses in order, and by the time we reach house i, we already have the optimal answers for all earlier houses. The recurrence dp[i] = max(dp[i-1], nums[i] + dp[i-2]) captures both choices (skip or rob) and keeps the better one. Since each house's decision only depends on the previous two results, filling the table left to right gives us the global optimum at dp[n-1].
- If n == 1, return nums[0].
- Create an array dp of size n.
- Set dp[0] = nums[0] (only one house, rob it).
- Set dp[1] = max(nums[0], nums[1]) (pick the richer of the first two houses).
- For i from 2 to n-1: dp[i] = max(dp[i-1], nums[i] + dp[i-2]).
- Return dp[n-1].

We allocate an entire array of size n, but at any point we only ever look at the last two values. Can we replace the dp array with just two variables?
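The tabulation steps above translate directly to Python (the name `rob_tabulation` is my own):

```python
from typing import List


def rob_tabulation(nums: List[int]) -> int:
    """Bottom-up DP: fill the table from the first house forward."""
    n = len(nums)
    if n == 1:
        return nums[0]

    dp = [0] * n
    dp[0] = nums[0]                 # one house: rob it
    dp[1] = max(nums[0], nums[1])   # two houses: take the richer one
    for i in range(2, n):
        # Skip house i, or rob it on top of the best through house i-2.
        dp[i] = max(dp[i - 1], nums[i] + dp[i - 2])
    return dp[n - 1]
```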
Look at the recurrence again: dp[i] = max(dp[i-1], nums[i] + dp[i-2]). We only ever need two values from the past: the result one step back and the result two steps back. So instead of storing the entire dp array, we keep just two variables: prev1 (the best up to the previous house) and prev2 (the best up to two houses back).
As we move forward, we compute the new best, then shift the variables: the old prev1 becomes the new prev2, and the newly computed value becomes prev1. This brings space down from O(n) to O(1).
The rolling variable technique works because the recurrence has a fixed "window" of dependencies. Since dp[i] only depends on dp[i-1] and dp[i-2], once we move past house i, we never need dp[i-2] again. Replacing the array with two variables is mathematically identical to the tabulation approach, just with less memory.
- If n == 1, return nums[0].
- Initialize prev2 = nums[0] and prev1 = max(nums[0], nums[1]).
- For i from 2 to n-1: compute current = max(prev1, nums[i] + prev2), then shift the window: prev2 = prev1, prev1 = current.
- Return prev1.
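Putting the rolling-variable steps together gives an O(n) time, O(1) space solution (the function name `rob` follows the problem's convention):

```python
from typing import List


def rob(nums: List[int]) -> int:
    """Space-optimized DP: keep only the last two results."""
    n = len(nums)
    if n == 1:
        return nums[0]

    prev2 = nums[0]                 # best through house i-2
    prev1 = max(nums[0], nums[1])   # best through house i-1
    for i in range(2, n):
        current = max(prev1, nums[i] + prev2)
        prev2, prev1 = prev1, current  # slide the two-value window forward
    return prev1
```

This produces the same answers as the tabulated version on every input, since it computes the identical recurrence with less memory.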