Last Updated: March 29, 2026
We need to count pairs of indices (i, j) where i comes before j in the array and the element at i is strictly greater than twice the element at j. This is a modified inversion counting problem. In a standard inversion, the condition is nums[i] > nums[j]. Here, the condition is asymmetric: nums[i] > 2 * nums[j].
The twist compared to simpler inversion problems is that the counting condition and the sorting condition are different. In regular inversion counting with merge sort, you count and sort using the same comparison. Here, you need to count pairs satisfying nums[i] > 2 * nums[j] but still sort normally. This decoupling is the key challenge.
One more thing to watch out for: the values can be as large as 2^31 - 1, so 2 * nums[j] can overflow a 32-bit integer. You'll need to use long (64-bit) arithmetic for the comparison.
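A tiny demonstration of the overflow hazard (class name is illustrative): doubling `Integer.MAX_VALUE` in 32-bit arithmetic wraps around, while promoting to `long` first gives the true value.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int v = Integer.MAX_VALUE; // 2^31 - 1 = 2147483647
        // 32-bit multiplication wraps around: 2 * v overflows to -2.
        System.out.println(2 * v);  // -2
        // Promoting to long first (2L) gives the true value.
        System.out.println(2L * v); // 4294967294
    }
}
```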
1 <= nums.length <= 5 * 10^4 --> With n up to 50,000, an O(n^2) brute force means roughly 2.5 billion operations. That's too slow. We need O(n log n) or better.

-2^31 <= nums[i] <= 2^31 - 1 --> The full 32-bit integer range. This means 2 * nums[j] can overflow int. We must use long for the multiplication. Also, the value range is too large for a direct BIT indexed by value, so we'd need coordinate compression if going the BIT route.

The most direct approach: check every pair (i, j) with i < j and count the ones where nums[i] > 2 * nums[j]. There's no cleverness here. Just two nested loops, testing the condition for each pair.
This gives us a baseline to understand the problem and verify correctness before optimizing.
1. For each index i from 0 to n - 2, iterate through indices j from i + 1 to n - 1.
2. If nums[i] > 2 * (long) nums[j], increment the counter. Use long to avoid overflow.

This works but is too slow for n up to 50,000. What if we could sort the array in a way that lets us count valid pairs during the sorting process itself?
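The brute force baseline can be sketched as follows (class and method names are our own):

```java
public class BruteForce {
    // O(n^2) baseline: check every pair (i, j) with i < j.
    public static int reversePairsBruteForce(int[] nums) {
        int count = 0;
        for (int i = 0; i < nums.length - 1; i++) {
            for (int j = i + 1; j < nums.length; j++) {
                // Multiply as long (2L) to avoid 32-bit overflow.
                if (nums[i] > 2L * nums[j]) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(reversePairsBruteForce(new int[]{1, 3, 2, 3, 1})); // 2
    }
}
```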
Merge sort has a beautiful property: during the merge step, you have two sorted halves, and every element in the left half originally appeared before every element in the right half. This positional guarantee is exactly what we need for counting pairs where i < j.
But here's the twist. In standard inversion counting, the counting and sorting use the same comparison (nums[i] > nums[j]). In Reverse Pairs, the counting condition is nums[i] > 2 * nums[j], while sorting still uses the standard nums[i] > nums[j]. So we can't count and merge in one pass.
The solution is to split the merge step into two phases:

1. Count: with both halves sorted, use two forward-moving pointers to count the cross pairs satisfying left[i] > 2 * right[j].
2. Merge: perform the standard merge of the two sorted halves using the normal comparison.
Both phases run in O(n) time because both halves are sorted, so the overall complexity stays O(n log n).
Here's why the count phase works with two pointers: the left half is sorted, so as we move to the next (larger) element in the left half, the pointer in the right half only needs to move forward, never backward. For each left element, we find how far into the right half we can go while left[i] > 2 * right[j]. Since both pointers only move forward, this is a single O(n) pass.
The key insight is that merge sort preserves relative ordering within each half while establishing a clean boundary between elements that originally came from the left vs. right side of the split. At each merge step, every element in the left half had a smaller original index than every element in the right half. So any pair we count between the two halves automatically satisfies the i < j condition.
The two-pointer counting works because both halves are sorted. As we move to the next (larger) element in the left half, the pointer j into the right half never needs to go backward. If left[i] > 2 * right[j], then left[i+1] >= left[i] > 2 * right[j], so left[i+1] will also form a valid pair with right[j]. The j pointer only moves forward, giving us an O(n) counting pass.
The merge sort approach is already optimal at O(n log n). But there's an alternative angle using a Binary Indexed Tree that avoids modifying the input array.
Instead of sorting the array, we can process elements from right to left and maintain a data structure that tells us how many elements we've already seen. For each element nums[i], we need to count how many previously seen elements nums[j] (where j > i) satisfy nums[i] > 2 * nums[j], which is equivalent to counting elements with value at most floor((nums[i] - 1) / 2).
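One Java-specific caveat when computing this threshold: the `/` operator truncates toward zero, but the formula requires mathematical floor, and the two disagree for negative values. `Math.floorDiv` gives the intended result (names below are illustrative):

```java
public class ThresholdDemo {
    // Threshold T such that nums[i] > 2 * x  <=>  x <= T.
    static long threshold(long v) {
        return Math.floorDiv(v - 1, 2L); // floor division, not truncation
    }

    public static void main(String[] args) {
        System.out.println(threshold(5));  // 2  (2*2=4 < 5, but 2*3=6 is not)
        System.out.println(threshold(-4)); // -3 (2*-3=-6 < -4, but 2*-2=-4 is not)
        // Truncating division would be off by one for the negative case:
        System.out.println((-4L - 1) / 2); // -2
    }
}
```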
A Binary Indexed Tree (BIT) is perfect for this: it supports point updates and prefix sum queries in O(log n) time. But the value range spans the full 32-bit integer range, which is far too large to index directly. So we use coordinate compression: collect all values we'll ever query or insert, sort and deduplicate them, and map each to a compact index.
By processing from right to left, every element we've already inserted into the BIT has a larger index in the original array. So the positional constraint i < j is automatically satisfied. The BIT answers the value question: "how many inserted elements have value at most T?" where T is the threshold for the current element. Coordinate compression ensures the BIT only uses O(n) space regardless of the value range.
1. For each element nums[i], add both nums[i] and floor((nums[i] - 1) / 2) (the threshold) to a list.
2. Sort and deduplicate that list, mapping each value to a compact index.
3. Process elements from right to left. For each nums[i]: a. Look up the compressed index of the threshold floor((nums[i] - 1) / 2).
b. Query the BIT for the prefix sum up to that index.
c. Insert nums[i] by updating the BIT at the compressed index of nums[i].
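The steps above can be sketched as follows (all names are our own; `Arrays.binarySearch` serves as the compression lookup, and `Math.floorDiv` supplies the mathematical floor the threshold formula requires):

```java
import java.util.Arrays;

public class ReversePairsBIT {
    // O(n log n) via a Binary Indexed Tree with coordinate compression.
    public static int reversePairs(int[] nums) {
        int n = nums.length;
        // Collect every value we will ever insert or query (thresholds included).
        long[] all = new long[2 * n];
        for (int i = 0; i < n; i++) {
            all[2 * i] = nums[i];
            all[2 * i + 1] = Math.floorDiv((long) nums[i] - 1, 2L); // threshold
        }
        long[] sorted = Arrays.stream(all).distinct().sorted().toArray();

        int[] tree = new int[sorted.length + 1]; // 1-indexed BIT
        int count = 0;
        // Right to left: every element already in the BIT has a larger index.
        for (int i = n - 1; i >= 0; i--) {
            long threshold = Math.floorDiv((long) nums[i] - 1, 2L);
            count += query(tree, index(sorted, threshold)); // values <= threshold
            update(tree, index(sorted, nums[i]));           // insert nums[i]
        }
        return count;
    }

    // 1-based compressed index of v; v is always present in sorted.
    private static int index(long[] sorted, long v) {
        return Arrays.binarySearch(sorted, v) + 1;
    }

    private static void update(int[] tree, int i) {
        for (; i < tree.length; i += i & (-i)) tree[i]++;
    }

    private static int query(int[] tree, int i) {
        int sum = 0;
        for (; i > 0; i -= i & (-i)) sum += tree[i];
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(reversePairs(new int[]{1, 3, 2, 3, 1})); // 2
    }
}
```

Because both the queried thresholds and the inserted values are added to the compression list up front, every `binarySearch` lookup is guaranteed to find its target.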