Last Updated: March 29, 2026
We have an array of n distinct integers, and these integers come from the range [0, n]. Since the range has n + 1 numbers but the array only holds n of them, exactly one number is missing. Our job is to find it.
The key observation is that we know what the complete set should look like. If n = 3, the full set is {0, 1, 2, 3}. The array gives us 3 of those 4 numbers, and we need to figure out which one got left out. This "known universe" property is what makes the problem tractable. We're not searching for an arbitrary missing value. We know the exact range, and that gives us powerful mathematical tools.
n == nums.length with 1 <= n <= 10^4 → With n up to 10,000, even O(n^2) would run in about 100 million operations, which is borderline. O(n log n) and O(n) are both comfortable.

0 <= nums[i] <= n → All values fit in the range [0, n], so we could use an auxiliary boolean array of size n + 1 if we wanted.

The most direct way to find a missing element: put everything you have into a set, then check which number from the expected range isn't there. It's the approach you'd use in everyday programming without thinking twice.
We know the full range is [0, n]. We have n numbers. Dump them into a hash set, then loop from 0 to n and return the first number that's not in the set.
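The steps above can be sketched in Python (a minimal sketch; the function name is my own, not from the original):

```python
def missing_number_set(nums):
    """Find the missing number in [0, n] via a hash set."""
    seen = set(nums)              # O(n) space: store every element
    n = len(nums)
    for candidate in range(n + 1):  # scan the full expected range [0, n]
        if candidate not in seen:   # O(1) average-case membership test
            return candidate
```

For example, `missing_number_set([3, 0, 1])` returns 2, the one value in {0, 1, 2, 3} absent from the array.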
Building the set and scanning the range both take O(n) time, but the hash set costs O(n) extra space to store every element of nums. What if we could exploit the mathematical properties of the range [0, n] to avoid storing anything at all?
Here's the key insight: we know exactly what the sum of [0, n] should be. The famous formula n * (n + 1) / 2 gives us the sum of the first n natural numbers. If we compute the actual sum of the array and subtract it from the expected sum, the difference must be the missing number.
Think of it like counting money. If you know a cash register should have $45 (the sum of bills numbered 0 through 9) and you count $37, then $8 worth of bills is missing. Same principle here.
The sum formula leverages the fact that we know the complete set of numbers. If every number in [0, n] were present, the sum would be exactly n * (n + 1) / 2. Removing one number reduces the sum by exactly that number's value. So the difference between the expected and actual sums is precisely the missing number.
This is an application of the invariant principle. The sum is a single scalar that captures aggregate information about the set. Removing one element changes it in a predictable, reversible way.
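The sum invariant translates directly into code (a minimal sketch; the function name is mine):

```python
def missing_number_sum(nums):
    """Find the missing number as (expected sum of [0, n]) - (actual sum)."""
    n = len(nums)
    expected = n * (n + 1) // 2   # sum of 0 + 1 + ... + n
    return expected - sum(nums)   # the difference is exactly the missing value
```

Note that Python integers never overflow; in a fixed-width language the same two-line idea applies, but the intermediate sums deserve a wider integer type.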
Compute the expected sum n * (n + 1) / 2, where n is the length of the array. Compute the actual sum of nums. Return expectedSum - actualSum.

The sum approach is already O(n) time and O(1) space. But there's a subtle concern: in languages with fixed-width integers, the intermediate sums can overflow for large n. Is there a bitwise technique that avoids arithmetic altogether?
XOR has a beautiful property: a ^ a = 0 and a ^ 0 = a. In other words, XOR-ing a number with itself cancels it out, and XOR-ing with zero leaves it unchanged. XOR is also commutative and associative, so the order doesn't matter.
Here's the trick: XOR all the indices from 0 to n together with all the values in the array. Every number that appears in both the index range and the array will cancel out. The only number left standing is the one that exists in the range but not in the array, which is the missing number.
XOR is its own inverse: applying XOR twice with the same value returns you to the original. When we XOR all numbers from 0 to n and all elements of the array, every number that appears in both gets XOR-ed twice and cancels to zero. The missing number only appears in the index range (not the array), so it's XOR-ed exactly once and survives.
More formally: let M be the missing number. We compute (0 ^ 1 ^ 2 ^ ... ^ n) ^ (nums[0] ^ nums[1] ^ ... ^ nums[n-1]). Every number except M appears exactly twice (once from the range, once from the array), contributing x ^ x = 0. M appears only once, so the result is 0 ^ M = M.
Initialize xor to 0. XOR it with each number from 0 to n. XOR it with each element of nums. When the loops finish, xor holds the missing number. Return it.
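Those steps can be sketched as follows (the function name is my own):

```python
def missing_number_xor(nums):
    """Find the missing number by XOR-ing the range [0, n] against nums."""
    xor = 0
    n = len(nums)
    for i in range(n + 1):  # fold in every number in the full range [0, n]
        xor ^= i
    for v in nums:          # fold in every array value; pairs cancel to 0
        xor ^= v
    return xor              # only the missing number is XOR-ed an odd number of times
```

This version uses no arithmetic that can overflow: the running value never exceeds the bit width of n itself, since XOR never carries.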