Last Updated: April 2, 2026
We have a sorted array and need to find two numbers that add up to a given target. The key constraints that shape our approach: the array is sorted, we must return 1-indexed positions, and we can only use constant extra space.
That last constraint is the interesting one. It rules out the classic hash map approach from the original Two Sum. The problem is telling us to use the sorted property to our advantage rather than relying on extra data structures.
Since the array is sorted and there's guaranteed to be exactly one solution, we can leverage the ordering to efficiently narrow down where the pair must be.
With numbers.length up to 3 * 10^4, an O(n^2) brute force would perform about 9 * 10^8 operations, which is borderline too slow. We should aim for O(n log n) or O(n).

The most straightforward approach: try every pair of elements and check if they add up to the target. For each element at index i, check every element at index j > i. If their sum equals the target, return the 1-indexed positions.
This doesn't use the sorted property at all, which is a red flag that we're leaving performance on the table. But it's a valid starting point for building intuition.
The steps:

1. Loop through each index i from 0 to n-2.
2. For each i, loop through each index j from i+1 to n-1.
3. If numbers[i] + numbers[j] equals the target, return [i+1, j+1].

The brute force completely ignores the sorted property. Since the array is sorted, what if we used binary search to jump straight to where the complement would be, instead of scanning linearly?
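A minimal sketch of the brute force described above (the function name two_sum_brute is my own, for illustration):

```python
def two_sum_brute(numbers, target):
    """O(n^2) brute force: check every pair (i, j) with i < j."""
    n = len(numbers)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if numbers[i] + numbers[j] == target:
                return [i + 1, j + 1]  # 1-indexed positions
    return []  # unreachable: the problem guarantees exactly one solution
```

For example, two_sum_brute([2, 7, 11, 15], 9) returns [1, 2].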
Since the array is sorted, we can do something smarter for each element. Instead of linearly scanning for its complement, we can binary search for it. For each numbers[i], the complement we need is target - numbers[i]. A binary search on the remaining portion of the array finds this complement in O(log n) instead of O(n).
This is a natural application of the sorted property. Whenever you see a sorted array and need to find a specific value, binary search should come to mind.
The steps:

1. For each index i from 0 to n-2, compute complement = target - numbers[i].
2. Binary search for complement in the subarray from index i+1 to n-1.
3. If found at some index j, return [i+1, j+1].

Binary search cut the inner loop from O(n) to O(log n), but we're still treating each element independently. What if two pointers could move simultaneously, each step eliminating a candidate and making progress toward the answer?
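A sketch of the binary-search approach, leaning on Python's bisect module for the search itself (two_sum_bsearch is an illustrative name, not from the problem):

```python
from bisect import bisect_left

def two_sum_bsearch(numbers, target):
    """For each i, binary search for target - numbers[i] in numbers[i+1:]."""
    n = len(numbers)
    for i in range(n - 1):
        complement = target - numbers[i]
        # lo=i+1 restricts the search to indices strictly after i
        j = bisect_left(numbers, complement, i + 1)
        if j < n and numbers[j] == complement:
            return [i + 1, j + 1]  # 1-indexed positions
    return []  # unreachable given the problem's guarantee
```

This runs in O(n log n) time while still using only O(1) extra space.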
Here's the key insight: place one pointer at the beginning (smallest element) and one at the end (largest element). Their sum gives us information about which direction to move.
This works because the array is sorted. Each pointer move eliminates exactly one candidate from consideration, and we never need to backtrack. The two pointers converge toward each other, and since a solution is guaranteed, they'll meet at the answer.
The correctness relies on a simple invariant: the answer pair is always between the two pointers (inclusive). Initially, the pointers span the entire array, so the answer is obviously included. Each step, we move one pointer inward.
Say the answer is at indices (a, b) where a < b, and suppose the invariant holds: left <= a and right >= b. Could left ever move past a? That would require left == a with sum < target. But if left == a, then sum = numbers[a] + numbers[right] >= numbers[a] + numbers[b] = target, since right >= b and the array is sorted, so the sum cannot be too small. Symmetrically, right moving past b would require right == b with sum > target, but then sum = numbers[left] + numbers[b] <= numbers[a] + numbers[b] = target. Every move is forced by the sum being on the wrong side of the target, and no forced move can skip the answer.
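To watch the invariant hold on a concrete, made-up input: with numbers = [1, 3, 4, 6, 8, 11] and target = 10, the answer pair sits at 0-indexed (2, 3), and this instrumented walk shows neither pointer ever steps past it:

```python
def trace_two_sum(numbers, target):
    """Two-pointer walk that prints each state before moving."""
    left, right = 0, len(numbers) - 1
    while left < right:
        s = numbers[left] + numbers[right]
        print(f"left={left}, right={right}, sum={s}")
        if s == target:
            return [left + 1, right + 1]
        if s < target:
            left += 1
        else:
            right -= 1

print(trace_two_sum([1, 3, 4, 6, 8, 11], 10))  # prints [3, 4]
```

Throughout the run, left never exceeds 2 and right never drops below 3: the pointers close in on the answer from both sides.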
The steps:

1. Initialize left = 0 and right = n - 1.
2. While left < right:
   - Compute sum = numbers[left] + numbers[right].
   - If sum == target, return [left + 1, right + 1].
   - If sum < target, increment left (we need a larger sum).
   - If sum > target, decrement right (we need a smaller sum).
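Putting the steps together as a sketch (the function name is illustrative), we get O(n) time with O(1) extra space:

```python
def two_sum_sorted(numbers, target):
    """Two pointers converging from both ends of a sorted array."""
    left, right = 0, len(numbers) - 1
    while left < right:
        s = numbers[left] + numbers[right]
        if s == target:
            return [left + 1, right + 1]  # 1-indexed, as required
        if s < target:
            left += 1   # smallest candidate too small with current right
        else:
            right -= 1  # largest candidate too large with current left
    return []  # unreachable: exactly one solution is guaranteed
```

For example, two_sum_sorted([2, 7, 11, 15], 9) returns [1, 2].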