Last Updated: March 29, 2026
We have a list of 2D points and need to find the k points that are closest to the origin (0, 0). The Euclidean distance from a point (x, y) to the origin is sqrt(x^2 + y^2). But here is a useful observation: since we only care about relative ordering of distances (not the actual distance values), we can compare x^2 + y^2 directly and skip the square root entirely. Squaring preserves the ordering because the square root function is monotonically increasing.
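To make the observation concrete, here is a tiny sketch (the sample points are arbitrary) showing that comparing squared distances gives the same ordering as comparing true Euclidean distances:

```python
import math

# Two sample points: squaring preserves the distance ordering,
# so comparing x^2 + y^2 gives the same result as comparing
# the true Euclidean distances.
p, q = (1, 3), (-2, 2)

dist_p = math.sqrt(1**2 + 3**2)      # sqrt(10) ~ 3.162
dist_q = math.sqrt((-2)**2 + 2**2)   # sqrt(8)  ~ 2.828

sq_p = 1**2 + 3**2     # 10
sq_q = (-2)**2 + 2**2  # 8

# Same ordering either way, no sqrt required.
assert (dist_q < dist_p) == (sq_q < sq_p)
```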
The problem says the answer can be returned in any order, which is a strong hint. We do not need a fully sorted result, just any k elements that happen to be the smallest. This opens the door to approaches that are faster than a full sort.
1 <= k <= points.length <= 10^4 -> With n up to 10,000, O(n log n) sorting is comfortable. But the "any order" hint suggests we can do better.

-10^4 <= xi, yi <= 10^4 -> Squared distance x^2 + y^2 is at most 2 * 10^8, which fits in a 32-bit signed integer. No overflow concerns.

k <= points.length -> k is always valid, so no edge-case handling is needed.

The most direct approach: compute the squared distance for every point, sort the entire array by that distance, and take the first k elements. Since we only need relative ordering, we can sort by x^2 + y^2 without computing the actual square root.
This is clean and easy to get right. Sorting gives us a fully ordered result, which is more than we need (we only need the k smallest, not all n in order), but it works.
Sort the points array using a custom comparator that compares x^2 + y^2 for each point, then return the first k elements of the sorted array. This approach works but sorts all n points when we only need the k smallest. What if we maintained just the top k closest points as we scanned the array?
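The sort-based approach can be sketched in a few lines (the function name `k_closest_sorted` and the sample input are illustrative, not from the original):

```python
from typing import List

def k_closest_sorted(points: List[List[int]], k: int) -> List[List[int]]:
    # Sort by squared distance to the origin; no sqrt needed, since
    # squaring preserves the ordering of non-negative distances.
    return sorted(points, key=lambda p: p[0] ** 2 + p[1] ** 2)[:k]

# Squared distances: [0,1] -> 1, [-2,2] -> 8, [1,3] -> 10, [5,8] -> 89
print(k_closest_sorted([[1, 3], [-2, 2], [5, 8], [0, 1]], 2))
# prints [[0, 1], [-2, 2]]
```

This is O(n log n) time and O(n) space (for the sorted copy), which the constraints comfortably allow.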
Instead of sorting the entire array, we can maintain a max-heap (priority queue) of size k. The heap holds the k closest points we have seen so far, with the farthest of those k at the top.
As we scan through the points, we compare each new point's distance against the heap's maximum. If the new point is closer, we remove the farthest point and insert the new one. If the heap has fewer than k elements, we just add the point directly.
The max-heap acts as a running filter over the k closest points. At any moment, the heap contains the k smallest distances seen so far. The max-heap property ensures the largest among those k is always at the top, making it O(1) to compare against new candidates. When a closer point arrives, it displaces the farthest point in our current set.
Once a point is kicked out of the heap, it can never come back. If point P was removed because point Q was closer, then P is farther than at least k other points and cannot be in the answer.
The heap approach is better when k is small, but we still pay O(log k) per element. What if we could partition the array around the k-th smallest distance instead?
The problem boils down to finding the k-th smallest distance and returning everything at or below that threshold. This is exactly what the quickselect algorithm does: it partially sorts an array so that the element at position k is in its final sorted position, with all smaller elements to its left and all larger elements to its right.
Quickselect works by choosing a pivot, partitioning the array around it (all elements with smaller distance go left, larger go right), and then checking where the pivot landed. If the pivot is at index k, we are done. If it is past k, recurse on the left half. If it is before k, recurse on the right half.
The efficiency comes from only recursing on one side. Unlike quicksort, which recurses on both halves (total work: O(n log n)), quickselect recurses on a single half, cutting the problem size roughly in half each time: n + n/2 + n/4 + ... < 2n, which is O(n) on average.
The implementation centers on a helper quickSelect(points, left, right, k) that repeatedly partitions the subarray between left and right, narrowing to one side until the pivot lands at index k.
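A sketch of quickselect under those assumptions, using an in-place Lomuto partition and a random pivot (the function name and partition choice are illustrative; the random pivot guards against the O(n^2) worst case on adversarial inputs):

```python
import random
from typing import List

def k_closest_quickselect(points: List[List[int]], k: int) -> List[List[int]]:
    def dist(i: int) -> int:
        x, y = points[i]
        return x * x + y * y

    def partition(left: int, right: int, pivot: int) -> int:
        # Move the pivot to the end, sweep closer points to the front
        # of the range, then drop the pivot into its final position.
        pivot_dist = dist(pivot)
        points[pivot], points[right] = points[right], points[pivot]
        store = left
        for i in range(left, right):
            if dist(i) < pivot_dist:
                points[store], points[i] = points[i], points[store]
                store += 1
        points[right], points[store] = points[store], points[right]
        return store

    left, right = 0, len(points) - 1
    while left < right:
        # Random pivot keeps the expected running time at O(n).
        mid = partition(left, right, random.randint(left, right))
        if mid == k:
            break        # first k slots now hold the k closest points
        elif mid < k:
            left = mid + 1   # k-th element is to the right of the pivot
        else:
            right = mid - 1  # k-th element is to the left of the pivot
    return points[:k]
```

After the loop, everything at index < k has a squared distance no greater than everything at index >= k, so slicing off the first k elements gives the answer in arbitrary order, O(n) average time and O(1) extra space.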