Last Updated: April 2, 2026
We have a binary search tree and a target value, and we need to find the node whose value matches the target. If we find it, we return that node (which effectively returns the entire subtree rooted at it). If no match exists, we return null.
The critical detail here is that this is a binary search tree, not just any binary tree. In a BST, for every node, all values in its left subtree are smaller and all values in its right subtree are larger. This ordering property is exactly what makes binary search possible on trees. Instead of checking every node, we can make a decision at each step about which direction to go, eliminating half the remaining tree each time (in a balanced tree).
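To make the ordering concrete, here is a tiny BST (a minimal sketch; the TreeNode class is a typical definition assumed here, since the problem's exact node type isn't shown):

```python
# A typical binary-tree node definition (an assumption -- the problem's
# exact class isn't reproduced in this article).
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# A small BST illustrating the ordering property:
#         4
#        / \
#       2   7
#      / \
#     1   3
root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(7))

# Everything in the left subtree of 4 is smaller; everything in the
# right subtree is larger.
print(root.left.val < root.val < root.right.val)  # True
```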
So the question is really: can we do better than visiting every node? And the answer is yes, because the BST property gives us a roadmap.
- 1 <= number of nodes <= 5000 -- With at most 5,000 nodes, even a full tree traversal in O(n) is fast. But we can do better using the BST structure.
- 1 <= Node.val <= 10^7 -- Values are positive integers. No special handling needed for negatives or zero.
- root is a binary search tree -- This is the key constraint. It guarantees the ordering property that lets us prune our search.
- 1 <= val <= 10^7 -- The target is always a valid positive integer within the same range as node values.

The simplest approach is to ignore the BST property entirely and treat this like a regular binary tree. We visit every node using DFS, and whenever we find a node whose value matches val, we return it. If we exhaust the entire tree without finding a match, we return null.
This is what you'd do if someone handed you a generic binary tree. It works, but it doesn't take advantage of the structure we've been given.
- If the current node is null, return null.
- If the current node's value equals val, return the current node.
- Otherwise, search the left subtree; if nothing is found there, search the right subtree.

The brute force visits both subtrees at every node. But in a BST, we know that if val is less than the current node's value, it can't possibly be in the right subtree. What if we used the BST ordering to pick just one direction at each step?
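The brute-force steps above can be sketched as follows (a minimal Python sketch; the function name and TreeNode class are illustrative, not from the original problem):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def search_any_tree(root, val):
    """Brute-force DFS: ignores the BST ordering and may visit every node."""
    if root is None:
        return None
    if root.val == val:
        return root
    # Search the left subtree first; fall back to the right subtree.
    return search_any_tree(root.left, val) or search_any_tree(root.right, val)

root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(7))
print(search_any_tree(root, 2).val)  # 2 -- root of the matching subtree
print(search_any_tree(root, 5))      # None -- no match anywhere
```

This is O(n) time in the worst case, since a miss forces a visit to every node.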
This is where the BST property really shines. Instead of blindly searching both subtrees, we compare val to the current node's value and make a decision:
- If val equals the node's value, we found it. Return the node.
- If val is less than the node's value, the target must be in the left subtree (because in a BST, everything to the right is larger).
- If val is greater than the node's value, the target must be in the right subtree.

This is essentially binary search, but on a tree instead of an array. At each step, we eliminate one entire subtree from consideration. For a balanced BST, this cuts our work from O(n) to O(log n).
- If the current node is null, return null.
- If the current node's value equals val, return the current node.
- If val is less than the current node's value, recursively search the left subtree.
- If val is greater than the current node's value, recursively search the right subtree.

The BST invariant guarantees a strict ordering: every node in the left subtree is smaller than the current node, and every node in the right subtree is larger. So when we compare val to the current node and pick a direction, we're not guessing. We know for certain that the target can't exist in the subtree we're skipping. This is the same logic that makes binary search on a sorted array so efficient.
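The recursive steps above translate directly into code (a minimal Python sketch; the function name and TreeNode class are illustrative):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def search_bst(root, val):
    """Recursive BST search: prune one entire subtree at every step."""
    if root is None or root.val == val:
        return root                          # found it, or hit a dead end
    if val < root.val:
        return search_bst(root.left, val)    # target can only be on the left
    return search_bst(root.right, val)       # target can only be on the right

root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(7))
print(search_bst(root, 2).val)  # 2
print(search_bst(root, 5))      # None
```

Time is O(h), where h is the tree's height (O(log n) when balanced); space is O(h) for the recursion stack.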
The recursive approach is clean and easy to understand, but it uses O(h) space on the call stack. Can we eliminate the stack overhead entirely by replacing the recursion with a simple loop?
The recursive solution has a nice property: it's tail-recursive. Each recursive call is the last thing the function does before returning. This means we can trivially convert it into a while loop. Instead of making a recursive call, we just update the root pointer to point to the next node we want to examine.
The iterative version does the same comparison-and-branch logic, but without building up a call stack. This gives us O(1) space complexity, which is a meaningful improvement for very deep trees.
- Set current to the root.
- While current is not null:
  - If current.val equals val, return current.
  - If val is less than current.val, move current to current.left.
  - Otherwise, move current to current.right.
- If the loop exits without a match, return null.

The logic is identical to the recursive approach. We're just managing the traversal manually with a pointer instead of letting the call stack do it. At each step, we compare and branch, following the BST ordering until we either find our target or fall off the bottom of the tree (null pointer). Since we only move one pointer forward and don't allocate any new data structures, the space usage is constant.
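The loop above can be sketched like this (a minimal Python sketch; the names are illustrative):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def search_bst(root, val):
    """Iterative BST search: same comparisons, no call stack, O(1) space."""
    current = root
    while current is not None:
        if current.val == val:
            return current
        # Follow the BST ordering: go left for smaller targets, right for larger.
        current = current.left if val < current.val else current.right
    return None  # fell off the bottom of the tree: no match

root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(7))
print(search_bst(root, 3).val)  # 3
print(search_bst(root, 6))      # None
```

Time remains O(h); only the O(h) recursion stack has been traded for a single pointer, giving O(1) auxiliary space.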