Last Updated: March 28, 2026
For each element in the array, we need to count how many elements to its right are strictly smaller. The result is an array of the same length where each position holds that count.
The naive thought is: for each element, scan everything to its right and count. That works, but with up to 100,000 elements, an O(n^2) approach means up to 10 billion operations. We need something smarter.
The key insight is that this problem is really about tracking relative order. As we process elements from right to left, we need to efficiently answer: "How many elements have I already seen that are smaller than this one?" That question points us toward data structures that support efficient rank queries, like Binary Indexed Trees, or algorithmic techniques like merge sort that naturally count inversions.
1 <= nums.length <= 10^5 --> With n up to 100,000, an O(n^2) brute force would mean up to 10 billion operations. That's too slow. We need O(n log n) or better.

-10^4 <= nums[i] <= 10^4 --> The value range is bounded: at most 20,001 distinct values. This is important because it means we can use value-indexed structures (like a BIT of size 20,001) without blowing up memory.

The most direct approach: for each element, scan every element to its right and count how many are smaller. This is exactly what the problem asks for, translated line by line into code.
There's no trick here. For position i, iterate from i + 1 to the end, incrementing a counter each time you find a value strictly less than nums[i].
Create a result array the same length as nums, initialized to 0. For each index i from 0 to n - 2, iterate through indices j from i + 1 to n - 1. Whenever nums[j] < nums[i], increment result[i].

For each element we're scanning every element to its right, and we never reuse information from previous scans. What if we processed elements from right to left and maintained a structure that could answer "how many elements smaller than X have I seen?" in O(log n)?
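The brute force described above can be sketched as follows (function name is my own):

```python
def count_smaller_brute_force(nums):
    """O(n^2) brute force: for each index i, count elements at
    indices j > i that are strictly smaller than nums[i]."""
    n = len(nums)
    result = [0] * n
    for i in range(n - 1):
        for j in range(i + 1, n):
            if nums[j] < nums[i]:
                result[i] += 1
    return result
```

Simple and obviously correct, but quadratic: at n = 100,000 this is far too slow.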
Here's a really elegant observation: counting smaller elements to the right is essentially counting inversions. An inversion is a pair (i, j) where i < j but nums[i] > nums[j]. For each index i, the count of smaller elements to its right equals the number of inversions involving i as the left element.
Merge sort naturally counts inversions during its merge step. When we merge two sorted halves, every time we pick an element from the left half and place it into the merged result, we know that all the already-placed elements from the right half were smaller. That count is exactly what we need.
The catch is that we need to track which original index each element came from, since sorting rearranges the elements. So we sort pairs of (value, original index) and accumulate counts into the result array using the original indices.
During the merge step, both halves are sorted. When we pick an element from the right half (because it's smaller than the current left element), we increment rightCount. When we then pick an element from the left half, all those rightCount elements from the right half were originally to the right of it in the array and are smaller. Merge sort preserves relative order within each half, so the counts we accumulate at each level of recursion are additive and correct.
Create a pair (value, original_index) for each element. Run merge sort on the pairs; during each merge step, accumulate the counts into result[original_index].

The merge sort approach requires careful bookkeeping of original indices through the sorting process. What if instead of sorting, we used a data structure that lets us insert elements one by one and query "how many elements smaller than X?" in O(log n)?
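The merge-sort inversion counting can be sketched as below (a minimal version; names are my own). Each pair carries its original index so counts land in the right slot of the result:

```python
def count_smaller_merge_sort(nums):
    """Count smaller-to-the-right via merge sort inversion counting."""
    n = len(nums)
    result = [0] * n
    pairs = list(enumerate(nums))  # (original_index, value)

    def merge_sort(arr):
        if len(arr) <= 1:
            return arr
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])
        right = merge_sort(arr[mid:])
        merged = []
        i = j = 0
        right_count = 0  # right-half elements already placed
        while i < len(left) and j < len(right):
            if right[j][1] < left[i][1]:
                # Strictly smaller element from the right half.
                right_count += 1
                merged.append(right[j])
                j += 1
            else:
                # Every right-half element placed so far was smaller
                # and originally to the right of left[i].
                result[left[i][0]] += right_count
                merged.append(left[i])
                i += 1
        while i < len(left):
            result[left[i][0]] += right_count
            merged.append(left[i])
            i += 1
        merged.extend(right[j:])
        return merged

    merge_sort(pairs)
    return result
```

Note the strict `<` comparison: when values tie, we take from the left half without counting, so equal elements are never counted as "smaller."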
The idea is beautifully simple: traverse the array from right to left. For each element, ask: "How many elements have I already inserted that are smaller than this one?" Then insert the current element.
A Binary Indexed Tree (BIT) is perfect for this. A BIT supports two operations in O(log n): point update (add 1 at a position) and prefix sum query (sum of all values from index 1 to k). If we index the BIT by value, then inserting an element means updating position value by +1, and counting smaller elements means querying the prefix sum from 1 to value - 1.
Since values can be negative (down to -10,000), we shift all values by adding 10,001 so they map to the range [1, 20,001]. This offset ensures every value maps to a valid BIT index.
The BIT maintains frequency counts indexed by value. When we insert a value, we increment its position. When we query for "how many values less than X," we compute the prefix sum up to X - 1. Since we process right to left, every element already in the BIT was originally to the right of the current element. So the prefix sum gives us exactly the count of smaller elements to the right.
Traverse nums from right to left. For each element, query the prefix sum up to shifted_value - 1. This gives the count of smaller elements already inserted. Then update position shifted_value by +1.
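Putting the pieces together, the BIT solution might look like this (a sketch; the helper names are my own):

```python
def count_smaller_bit(nums):
    """Count smaller-to-the-right using a frequency BIT indexed by value.
    Values in [-10^4, 10^4] are shifted into [1, 20001]."""
    OFFSET = 10001   # maps -10000..10000 to 1..20001
    SIZE = 20001
    tree = [0] * (SIZE + 1)  # 1-indexed Fenwick tree

    def update(i):
        # Add 1 at position i.
        while i <= SIZE:
            tree[i] += 1
            i += i & (-i)

    def query(i):
        # Prefix sum of counts over positions 1..i.
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & (-i)
        return s

    result = [0] * len(nums)
    for idx in range(len(nums) - 1, -1, -1):
        v = nums[idx] + OFFSET
        result[idx] = query(v - 1)  # strictly smaller elements seen so far
        update(v)
    return result
```

Each element costs one query and one update, both O(log 20001), giving O(n log V) overall with O(V) extra memory for the value range V.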