Last Updated: April 2, 2026
We have a binary tree where each node holds a non-negative integer (the money in that house). The thief wants to maximize the total money stolen, but there is a constraint: if you rob a node, you cannot rob its direct parent or its direct children. In other words, no two adjacent nodes in the tree can both be robbed.
This is the tree version of the classic House Robber problem. In the original, houses sit in a line, and you cannot rob adjacent houses. Here, "adjacent" means parent-child in a tree. The linear structure is gone, so simple left-to-right DP will not work. We need to think recursively.
The key observation is this: at every node, we face a binary choice. Either we rob this node (and skip its children, collecting from grandchildren and beyond) or we skip this node (and are free to rob or skip each child independently). The optimal answer for the whole tree is the better of these two choices made at the root, where each subtree has already resolved its own rob-or-skip decisions optimally.
1 <= number of nodes <= 10^4 --> With up to 10,000 nodes, an O(n) solution is ideal. The naive recursive approach without memoization can be exponential on skewed trees, so we need memoization or a bottom-up approach.

0 <= Node.val <= 10^4 --> All values are non-negative. Skipping a node never helps unless it lets you collect more from its children.

The most natural way to think about this: at every node, we choose to rob it or not. If we rob the current node, we add its value and then solve the problem for its (up to) four grandchildren, skipping both children. If we skip the current node, we solve the problem for its two children and take whatever is best from each.
We pick whichever choice gives more money. This directly translates the problem statement into code. The issue is that the same subtree gets solved many times. When computing the answer for the root, both the "rob root" path (which needs grandchildren) and the "skip root" path (which needs children, which in turn need grandchildren) end up recomputing the same nodes.
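A minimal sketch of this naive recursion in Python (the `TreeNode` class is the usual LeetCode-style definition, included here so the snippet stands alone):

```python
class TreeNode:
    """Standard LeetCode-style binary tree node."""
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def rob_naive(node):
    """Exponential-time recursion: rob this node or skip it."""
    if node is None:
        return 0
    # Option 1: rob this node, skip both children, recurse on grandchildren.
    rob_here = node.val
    if node.left:
        rob_here += rob_naive(node.left.left) + rob_naive(node.left.right)
    if node.right:
        rob_here += rob_naive(node.right.left) + rob_naive(node.right.right)
    # Option 2: skip this node; each child subtree is solved independently.
    skip_here = rob_naive(node.left) + rob_naive(node.right)
    return max(rob_here, skip_here)
```

For the tree [3, 2, 3, null, 3, null, 1], robbing the root plus the two grandchildren yields 3 + 3 + 1 = 7, which this recursion finds, but note how `rob_naive(node.left.left)` and friends get recomputed from multiple call sites.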
The same subtree gets solved multiple times because the rob option needs grandchildren while the skip option needs children, which then also need grandchildren. What if we cached the result of each node so it is only computed once?
The naive approach has the right logic but does redundant work. The fix is straightforward: use a hash map to store the result of rob(node) after computing it the first time. Before computing any node, check the map. If we have seen this node before, return the cached result immediately.
Since each node in the tree is computed at most once, the total work drops from exponential to linear.
Memoization ensures every node is solved at most once. When we encounter a node for the second time (from a different parent or grandparent call), we return the cached value in O(1). The hash map uses node references as keys, which works because each TreeNode object is unique in memory. Two different nodes with the same value are still distinct objects.
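A sketch of the memoized version, caching by node identity (Python's default object hashing makes each `TreeNode` a valid dictionary key):

```python
class TreeNode:
    """Standard LeetCode-style binary tree node."""
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def rob_memo(root):
    """O(n) time: each node's answer is computed once, then cached."""
    memo = {}  # TreeNode -> best total obtainable from that subtree

    def solve(node):
        if node is None:
            return 0
        if node in memo:          # seen before: return cached result in O(1)
            return memo[node]
        rob_here = node.val
        if node.left:
            rob_here += solve(node.left.left) + solve(node.left.right)
        if node.right:
            rob_here += solve(node.right.left) + solve(node.right.right)
        skip_here = solve(node.left) + solve(node.right)
        memo[node] = max(rob_here, skip_here)
        return memo[node]

    return solve(root)
```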
The memoization approach works in O(n) time, but the hash map adds constant-factor overhead. What if each recursive call returned enough information for the parent to make its decision without any caching at all?
Here is the key insight that makes everything click. Instead of returning a single "best you can get" value from each subtree, return two values: the best if you rob this node, and the best if you skip this node. When a parent receives these two values from each child, it has everything it needs to make its own decision instantly.
If we rob the current node, both children must be skipped. So the rob value is: node.val + leftSkip + rightSkip.
If we skip the current node, each child can be robbed or skipped independently. So the skip value is: max(leftRob, leftSkip) + max(rightRob, rightSkip).
Each node computes its pair in O(1) given its children's pairs. Since we process each node exactly once in a post-order traversal, the total time is O(n) with no hash map needed.
The pair [rob, skip] completely captures the optimal strategy for a subtree from the parent's perspective. The parent only needs to know two things: what is the best from this child's subtree if the child is robbed, and what is the best if the child is not robbed. With these two numbers, the parent can make its own rob/skip decision in O(1). This works because of optimal substructure: the best solution for the whole tree decomposes into the best solutions for each subtree.
The full algorithm, step by step:

1. For each node, compute the pair [robThis, skipThis].
2. Base case: a null node returns [0, 0].
3. Recurse to get [leftRob, leftSkip] from the left child and [rightRob, rightSkip] from the right child.
4. robThis = node.val + leftSkip + rightSkip (rob current, must skip children).
5. skipThis = max(leftRob, leftSkip) + max(rightRob, rightSkip) (skip current, children choose freely).
6. Return [robThis, skipThis].
7. The final answer is max(robThis, skipThis) at the root.
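The steps above translate into a short post-order traversal; here is a sketch in Python (again with a standard `TreeNode` definition included for completeness):

```python
class TreeNode:
    """Standard LeetCode-style binary tree node."""
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def rob(root):
    """O(n) post-order DP: each call returns (rob_this, skip_this)."""
    def solve(node):
        if node is None:
            return (0, 0)                       # base case: empty subtree
        left_rob, left_skip = solve(node.left)
        right_rob, right_skip = solve(node.right)
        # Rob this node: both children must be skipped.
        rob_this = node.val + left_skip + right_skip
        # Skip this node: each child independently picks its better option.
        skip_this = max(left_rob, left_skip) + max(right_rob, right_skip)
        return (rob_this, skip_this)

    return max(solve(root))
```

Each node is visited exactly once and does O(1) work with its children's pairs, so this runs in O(n) time with O(h) recursion stack, and no hash map is needed.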