Last Updated: March 29, 2026
We need to count how many nodes are in a complete binary tree. At first glance, this sounds trivial: just traverse the whole tree and count. But the problem explicitly asks for a solution faster than O(n), which means we need to exploit the structural property of complete binary trees.
So what makes a complete binary tree special? Every level is fully filled except possibly the last, and the last level's nodes are packed to the left. This means the tree's shape is highly constrained. If you know the tree's height and where the last level "ends," you can compute the node count without visiting every node.
The key observation is this: in a complete binary tree, if the left subtree and right subtree have the same height (measured by going all the way down-left), then the left subtree is a perfect binary tree. If they differ, the right subtree is a perfect binary tree (one level shorter). In either case, one subtree's count can be computed with a formula, and we only need to recurse into the other subtree.
- 0 <= nodes <= 5 * 10^4 → An O(n) traversal would work, but the problem explicitly asks for better. This points toward O(log^2 n) or O(log n) solutions.
- The tree is guaranteed to be complete → This structural guarantee is the lever for sub-linear performance. We can use height comparisons to skip entire subtrees.
- 0 nodes possible → We must handle the empty tree case.

The most straightforward approach is to ignore the "complete" property entirely and just count every node. Visit the root, recursively count the left subtree, recursively count the right subtree, and add them together plus one for the root.
This is the first thing most people would write. It is clean, correct, and easy to reason about. For an interview, you would present this as the brute force baseline before leveraging the complete tree structure.
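A minimal sketch of this baseline in Python (the `TreeNode` class here is the usual LeetCode-style node shape with `val`, `left`, and `right`, included so the snippet runs standalone):

```python
class TreeNode:
    """Minimal LeetCode-style binary tree node."""
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def count_nodes(root):
    """O(n) brute force: visit every node and count it."""
    if root is None:
        return 0
    return 1 + count_nodes(root.left) + count_nodes(root.right)
```

Each node is visited exactly once, so the runtime is O(n) and the recursion stack is O(log n) deep for a complete tree.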
Input: root = [1, 2, 3, 4, 5, 6]
We visit every node: 1, 2, 3, 4, 5, 6. Count = 6.
This works but visits every node, ignoring the complete tree structure entirely. What if we could detect perfect subtrees and skip them using a formula?
Here is the insight that makes this problem interesting. In a complete binary tree, one of the two subtrees (left or right) at any node is always a perfect binary tree. Think about why: the last level fills from left to right, so either the left subtree is fully packed (perfect) and the right subtree is complete, or the right subtree is one level shorter and fully packed (perfect) while the left subtree is complete but not perfect.
How do we check if a subtree is perfect? We compare its leftmost depth to its rightmost depth. If they match, every level is filled, so it is a perfect binary tree with 2^h - 1 nodes. Computing these two depths takes O(log n) time (just walk down-left and down-right).
So the algorithm becomes: at each node, compute the left depth and right depth. If they are equal, the tree rooted here is perfect, so return 2^h - 1. If not, recurse into both children and return their counts plus one for the current node. The crucial point is that at each level of recursion, one of the two recursive calls will immediately return (because it hits a perfect subtree), so we only recurse down one side. That gives us O(log n) levels of recursion, each doing O(log n) work to compute depths, for a total of O(log^2 n).
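A sketch of this recursion in Python, assuming the usual LeetCode-style node shape (`val`, `left`, `right`); the small `TreeNode` class is included only so the snippet runs standalone:

```python
class TreeNode:
    """Minimal LeetCode-style binary tree node."""
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def count_nodes(root):
    """O(log^2 n) count using the complete-tree structure."""
    if root is None:
        return 0
    # Depth going only left, and depth going only right, from this node.
    left_h, node = 0, root
    while node:
        left_h += 1
        node = node.left
    right_h, node = 0, root
    while node:
        right_h += 1
        node = node.right
    if left_h == right_h:
        # Perfect subtree: 2^h - 1 nodes, computed with a shift.
        return (1 << left_h) - 1
    # Otherwise recurse into both children; in a complete tree one of
    # them is perfect and returns immediately on the next call.
    return 1 + count_nodes(root.left) + count_nodes(root.right)
```

On the six-node example, the root's depths differ (3 vs. 2), so we recurse: the left subtree is perfect (formula gives 3), and only the right subtree needs further recursion.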
The magic lies in the structure of complete binary trees. At every level of recursion, we compare the leftmost and rightmost depths. When they match, we know the subtree is perfect and we compute its size with a formula. When they do not match, we recurse into both children. But here is the critical part: when we recurse into both children of a complete tree, one of them is guaranteed to be perfect (its left and right depths will match on the very next call), so it returns in O(log n) time. Only the other child requires further recursion.
This means at each level of recursion, we do O(log n) work (computing depths) and only one branch continues recursing. There are O(log n) levels of recursion, so the total work is O(log n) * O(log n) = O(log^2 n).
The recursive approach is elegant, but it uses O(log n) stack space. Can we reformulate this as a binary search to make it iterative with O(1) space?
Here is another way to think about the problem. A complete binary tree of height h has between 2^(h-1) and 2^h - 1 nodes (treating the root level as height 1). The first h-1 levels are fully filled with 2^(h-1) - 1 nodes. So the question boils down to: how many nodes are on the last level?
The last level can have nodes at positions 0, 1, 2, ..., 2^(h-1) - 1 (left to right). Since nodes fill from left to right, there is some cutoff index where all positions to the left exist and all positions to the right do not. This is a classic binary search setup: we binary search for the rightmost position on the last level that has a node.
To check if a node exists at a given position on the last level, we trace a path from the root. At each level, we go left or right depending on whether the target position is in the left half or right half of the remaining range. This takes O(log n) time per check.
The binary search approach works because of the left-filling property. On the last level, all nodes are packed to the left. So there is a monotonic boundary: all positions from 0 to some index k have nodes, and all positions beyond k are empty. Binary search finds this boundary in O(log n) iterations. Each iteration traces a root-to-leaf path in O(log n) time. Total: O(log^2 n).
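One way this binary search could be sketched in Python (again assuming a LeetCode-style `TreeNode`; the helper name `exists` is ours, not from the original):

```python
class TreeNode:
    """Minimal LeetCode-style binary tree node."""
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def count_nodes(root):
    """O(log^2 n) count via binary search on the last level."""
    if root is None:
        return 0
    # Height = number of levels, counting the root level as 1.
    h, node = 0, root
    while node:
        h += 1
        node = node.left
    if h == 1:
        return 1

    def exists(idx):
        # Trace a root-to-leaf path to the last-level position idx
        # (0-based), halving the candidate range at each level.
        lo, hi = 0, (1 << (h - 1)) - 1
        node = root
        for _ in range(h - 1):
            mid = (lo + hi) // 2
            if idx <= mid:
                node, hi = node.left, mid
            else:
                node, lo = node.right, mid + 1
        return node is not None

    # Binary search for the rightmost existing index on the last level.
    # Position 0 always exists (we reached height h by going down-left).
    lo, hi = 0, (1 << (h - 1)) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if exists(mid):
            lo = mid
        else:
            hi = mid - 1
    # First h-1 levels are full (2^(h-1) - 1 nodes), plus lo+1 leaves.
    return (1 << (h - 1)) - 1 + (lo + 1)
```

The outer loop runs O(log n) times and each `exists` check walks one root-to-leaf path in O(log n), matching the stated O(log^2 n) total with O(1) extra space beyond the call to `exists`.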
This approach makes the binary search aspect more explicit compared to Approach 2, which implicitly does the same thing through recursive height comparisons. Both achieve the same complexity.