Last Updated: March 20, 2026
We need to build an array where each index i holds the popcount (population count) of i, which is the number of 1-bits in its binary form. For example, 5 in binary is 101, so it has two 1-bits.
The brute force is straightforward: for each number, count its bits. But the follow-up hints at something smarter. The key observation is that the bit count of a number is closely related to the bit count of a smaller number we've already computed. That's dynamic programming hiding in plain sight.
0 <= n <= 10^5 → With n up to 100,000, O(n log n) is fast enough, but the follow-up explicitly asks for O(n), so we should aim for a single-pass DP approach.

The most natural approach: for each number from 0 to n, count how many 1-bits it has. To count the bits of a single number, we can repeatedly check the last bit (using & 1) and right-shift until the number becomes 0.
This is exactly what built-in functions like Integer.bitCount() or __builtin_popcount do under the hood (though they use clever parallel tricks to do it faster). Since we're told to avoid built-ins, we'll write our own bit counter.
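A hand-rolled bit counter might look like the following Python sketch (the function names are my own, not from the problem statement):

```python
def popcount(num):
    """Count 1-bits by checking the last bit and right-shifting until 0."""
    count = 0
    while num:
        count += num & 1  # is the last bit set?
        num >>= 1         # drop the last bit
    return count

def count_bits_brute_force(n):
    """Brute force: count every number's bits independently, O(n log n)."""
    return [popcount(i) for i in range(n + 1)]
```

For example, `count_bits_brute_force(5)` yields `[0, 1, 1, 2, 1, 2]`.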
- Create an answer array ans of size n + 1.
- For each i from 0 to n: repeatedly add the last bit (num & 1) to the counter and right-shift the number until it becomes 0.
- Store the counter in ans[i].

The bottleneck here is that we're counting every bit of every number independently, throwing away information we've already computed. But the binary representations of i and i >> 1 differ by just one bit position. What if we could reuse the bit count of a smaller number to instantly get the bit count of a larger one?
Here's the key insight. Take any number, say 13 (1101 in binary). If you right-shift it by 1, you get 6 (110). The only difference is that right-shifting drops the last bit. So the bit count of 13 equals the bit count of 6, plus whatever that dropped last bit was.
In other words: popcount(i) = popcount(i >> 1) + (i & 1).
Since i >> 1 is always less than i, by the time we're computing ans[i], we've already computed ans[i >> 1]. That's a DP recurrence that gives us each answer in O(1) time.
Right-shifting a number by 1 is equivalent to dividing by 2 and discarding the remainder. The binary digits of i >> 1 are identical to the upper digits of i. So i has the same 1-bits as i >> 1, except possibly one more if i is odd (its last bit is 1). This is why ans[i] = ans[i >> 1] + (i & 1) is always correct.
- Create an array ans of size n + 1, with ans[0] = 0.
- For each i from 1 to n: ans[i] = ans[i >> 1] + (i & 1).
  - i >> 1 gives us the same number with its last bit removed (already computed).
  - i & 1 tells us if the last bit is 1 or 0.

We can't do better than O(n) since we need to fill n + 1 entries. But there's an alternative DP recurrence worth knowing. Instead of relating i to i / 2, what if we related i to the number you get by clearing its lowest set bit?
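A minimal Python sketch of this recurrence (the function name is my own):

```python
def count_bits(n):
    """O(n) DP: each entry reuses the already-computed answer for i >> 1."""
    ans = [0] * (n + 1)  # ans[0] = 0: zero has no 1-bits
    for i in range(1, n + 1):
        # popcount(i) = popcount(i >> 1) + last bit of i
        ans[i] = ans[i >> 1] + (i & 1)
    return ans
```

Each entry is filled in O(1), so the whole table takes O(n) time.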
There's a well-known bit trick: i & (i - 1) clears the lowest set bit of i. For example:
- 6 & 5 = 110 & 101 = 100 = 4 (cleared the lowest 1-bit at position 1)
- 12 & 11 = 1100 & 1011 = 1000 = 8 (cleared the lowest 1-bit at position 2)

So i & (i - 1) is a number that has exactly one fewer 1-bit than i, and it's always smaller than i. That means: ans[i] = ans[i & (i - 1)] + 1.
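The examples above can be checked directly in Python (a quick sketch, not library code):

```python
# Verify that i & (i - 1) clears exactly one 1-bit and shrinks the number.
for i in (6, 12):
    cleared = i & (i - 1)
    assert cleared < i
    assert bin(cleared).count("1") == bin(i).count("1") - 1
    print(f"{i} ({i:b}) -> {cleared} ({cleared:b})")  # e.g. 6 (110) -> 4 (100)
```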
This is Brian Kernighan's bit-counting trick applied as a DP recurrence. It's elegant because the dependency jumps directly to the most relevant smaller number, not just i / 2.
The operation i & (i - 1) always produces a number strictly smaller than i (it clears exactly one set bit, the lowest one), and that number therefore has exactly one fewer 1-bit. So ans[i & (i - 1)] is always ready before we need it, and adding 1 accounts for the removed bit. The recurrence is correct by induction: it holds for 0 (base case), and if it holds for all numbers less than i, it holds for i.
- Create an array ans of size n + 1, with ans[0] = 0.
- For each i from 1 to n: ans[i] = ans[i & (i - 1)] + 1.
  - i & (i - 1) is i with its lowest set bit cleared (always a smaller number we've already computed).
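The Kernighan-style recurrence, as a Python sketch (function name my own):

```python
def count_bits_kernighan(n):
    """O(n) DP: ans[i] = ans[i & (i - 1)] + 1, clearing the lowest set bit."""
    ans = [0] * (n + 1)  # ans[0] = 0
    for i in range(1, n + 1):
        # i has exactly one more 1-bit than i with its lowest set bit cleared
        ans[i] = ans[i & (i - 1)] + 1
    return ans
```

Both recurrences are O(n) time and O(1) extra space beyond the output array; which to use is largely a matter of taste.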