Last Updated: March 29, 2026
We're given a sorted array of unique integers and need to find the index of a specific target value. If the target isn't in the array, we return -1.
The problem explicitly tells us the array is sorted, which is a massive hint. A sorted array means we don't have to look at every element. If we pick any element and compare it to our target, we immediately know whether the target lies to the left or to the right. That single comparison eliminates half the search space. This is the fundamental idea behind binary search.
A few constraints are worth reading closely:

- 1 <= nums.length <= 10^4 → With n up to 10,000, even an O(n) linear scan would be fast enough in practice. But the problem explicitly requires O(log n), so we must use binary search.
- -10^4 < nums[i], target < 10^4 → Small value range. No integer overflow concerns when computing the middle index, since n is at most 10^4.
- nums are unique → No duplicates to worry about. Once we find the target, there's exactly one match.
- nums is sorted in ascending order → This is the key constraint that makes binary search possible.

The simplest approach is to ignore the sorted property entirely and just walk through the array element by element. If we find the target, return its index. If we reach the end without finding it, return -1.
This is what you'd do with an unsorted array, and it works perfectly fine here too. It just doesn't take advantage of the structure we've been given.
For each index i, if nums[i] == target, return i. If we reach the end of the array without a match, return -1.

This works, but we're checking every element sequentially even though the array is sorted. What if each comparison could eliminate half the remaining elements instead of just one?
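The linear scan can be sketched in a few lines (the function name here is my own choice):

```python
def linear_search(nums, target):
    """O(n) baseline: check each element in order until we find the target."""
    for i, value in enumerate(nums):
        if value == target:
            return i
    return -1  # reached the end without a match

print(linear_search([-1, 0, 3, 5, 9, 12], 9))   # 4
print(linear_search([-1, 0, 3, 5, 9, 12], 2))   # -1
```

Note that nothing here uses the sorted order; the same code works on any array.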
Since the array is sorted, we can do something much smarter than checking one element at a time. Pick the middle element. If it equals the target, we're done. If the target is smaller, it must be in the left half. If the target is larger, it must be in the right half. Either way, we've just eliminated half the array with a single comparison.
Think of it like guessing a number between 1 and 100. You wouldn't start at 1 and count up. You'd guess 50, ask "higher or lower?", then guess 25 or 75, and keep halving. That's binary search.
Binary search relies on a simple invariant: after every iteration, the target (if it exists) is still within the range [left, right]. When we find nums[mid] < target, we know the target can't be at mid or anywhere to its left (because the array is sorted), so we safely move left past mid. The same logic applies in reverse when nums[mid] > target.
The loop terminates when left > right, which means the search range is empty. At that point, we've eliminated every possible position for the target, so it doesn't exist in the array.
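To see the invariant and the termination condition concretely, here is an instrumented sketch (the `traced_search` helper is my own) that records every [left, right] window while searching for a value that isn't in the array:

```python
def traced_search(nums, target):
    """Binary search that records each [left, right] window it examines."""
    windows = []
    left, right = 0, len(nums) - 1
    while left <= right:
        windows.append((left, right))  # the target, if present, is in this window
        mid = left + (right - left) // 2
        if nums[mid] == target:
            return mid, windows
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    # left > right: the window is empty, so the target cannot exist
    return -1, windows

idx, windows = traced_search([-1, 0, 3, 5, 9, 12], 2)
print(idx)      # -1
print(windows)  # [(0, 5), (0, 1), (1, 1)]
```

The window shrinks from 6 candidates to 2 to 1, then becomes empty and the loop exits with -1.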
Each step halves the search range. Starting from n elements, after one step we have n/2, then n/4, then n/8, and so on. After log2(n) steps, the range shrinks to zero. That's why the time complexity is O(log n).
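For this problem's upper bound of n = 10,000, that works out to a very small number of steps. A quick sanity check of the arithmetic:

```python
import math

# With n = 10,000 elements, the window is empty after at most
# ceil(log2(n)) halvings, versus up to n probes for a linear scan.
n = 10_000
max_steps = math.ceil(math.log2(n))
print(max_steps)  # 14
```

Roughly 14 comparisons in the worst case, compared to up to 10,000 for the linear scan.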
The algorithm in full:

1. Initialize left = 0 and right = nums.length - 1.
2. While left <= right:
   - Compute mid = left + (right - left) / 2.
   - If nums[mid] == target, return mid.
   - If nums[mid] < target, set left = mid + 1 (target is in the right half).
   - If nums[mid] > target, set right = mid - 1 (target is in the left half).
3. If the loop ends without a match, return -1.

Space complexity is O(1): we only store left, right, and mid. No extra data structures.
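The steps above translate directly into a short iterative implementation:

```python
def search(nums, target):
    """Iterative binary search over a sorted array of unique integers.

    Returns the index of target, or -1 if it is absent.
    O(log n) time, O(1) space: just the pointers left, right, and mid.
    """
    left, right = 0, len(nums) - 1
    while left <= right:
        # Written as left + (right - left) // 2 to avoid overflow in
        # fixed-width languages; Python ints are arbitrary precision,
        # so (left + right) // 2 would also be safe here.
        mid = left + (right - left) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1   # target, if present, is in the right half
        else:
            right = mid - 1  # target, if present, is in the left half
    return -1  # search window is empty: target is not in the array

print(search([-1, 0, 3, 5, 9, 12], 9))  # 4
print(search([-1, 0, 3, 5, 9, 12], 2))  # -1
```

In production Python you would typically reach for the standard library's `bisect` module instead of hand-rolling the loop, but the explicit version above is what interviewers expect.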