Last Updated: March 31, 2026
Quick Sort is one of the most efficient and widely used sorting algorithms in practice. It follows a divide-and-conquer approach, but instead of splitting the array evenly, it selects a pivot and partitions the array into elements smaller and larger than the pivot.
This partitioning step places the pivot in its correct position, and the same process is then applied recursively to the left and right subarrays. When implemented well, Quick Sort achieves an average time complexity of O(n log n) with excellent performance in real-world scenarios.
However, its performance depends on pivot selection, and poor choices can lead to O(n²) time in the worst case.
In this chapter, you will learn how Quick Sort works step by step, how partitioning is performed, how to choose pivots effectively, and how to implement it efficiently.
Quick Sort is a divide-and-conquer sorting algorithm. The core idea revolves around a single operation called partitioning.
Here is the strategy in plain English:

1. Pick an element from the array. This is the pivot.
2. Partition: rearrange the array so that every element smaller than the pivot ends up to its left and every element larger ends up to its right.
3. Recurse: apply the same steps to the sub-array on the left of the pivot and the sub-array on the right.
The beauty of this approach is that after the partition step, the pivot is in its final position. It will never move again. Each recursive call places more pivots in their correct positions until the entire array is sorted.
The diagram above captures the entire algorithm. Pick a pivot, partition, then recurse on each half. That is Quick Sort.
Compare this with Merge Sort, which does the heavy lifting during the merge step (combining two sorted halves). Quick Sort does the heavy lifting during the partition step. By the time you reach the base case (sub-arrays of size 0 or 1), the array is already sorted. There is nothing to combine.
Let us break down the algorithm into its key components.
Partitioning is the heart of Quick Sort. There are two classic partition schemes:
Lomuto Partition Scheme (simpler, easier to understand): the last element serves as the pivot, and a single index `i` tracks where the "smaller than pivot" region ends. One left-to-right pass moves smaller elements into that region, and a final swap drops the pivot into place at `i + 1`.

Hoare Partition Scheme (more efficient): two indices start at opposite ends of the range and move toward each other, swapping pairs of elements that sit on the wrong side of the pivot. It performs fewer swaps on average, but the index it returns is not necessarily the pivot's final position, which makes the recursion bounds easier to get wrong.
We will focus on Lomuto for the code implementations because it is easier to follow and less error-prone during interviews. Hoare's scheme is more efficient in practice but trickier to implement correctly.
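As a concrete reference, Lomuto's scheme can be sketched as follows. The chapter does not fix a language, so Python is assumed, and `lomuto_partition` is an illustrative name:

```python
def lomuto_partition(arr, low, high):
    """Partition arr[low..high] around arr[high]; return the pivot's final index."""
    pivot = arr[high]
    i = low - 1  # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Drop the pivot into its final position, just past the smaller region
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

data = [10, 7, 8, 9, 1, 5]
p = lomuto_partition(data, 0, len(data) - 1)
print(p, data)  # 1 [1, 5, 8, 9, 10, 7]
```

Note that after one call the pivot (5) is final, but neither side is sorted yet; that is the recursion's job.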
Here is the full Quick Sort algorithm using Lomuto partition:
1. Choose the last element (`arr[high]`) as the pivot.
2. Initialize an index `i` to `low - 1`.
3. Scan `j` from `low` to `high - 1`: whenever `arr[j]` is smaller than the pivot, increment `i` and swap the current element with the element at index `i`.
4. Finally, swap the pivot (at `high`) with the element at `i + 1`.
5. Return `i + 1`, which is its final sorted position.

The recursion tree fans out like a binary tree. In the best case (balanced partitions), the tree has log n levels, and each level does O(n) work during partitioning. That gives us O(n log n) total.
The partition function does the real work: it places the pivot in its correct position and returns that index. The quickSort function handles the recursion.
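Putting the pieces together, here is a minimal sketch of that structure. Python is assumed since the chapter does not fix a language; the `quickSort` and `partition` names follow the text:

```python
def partition(arr, low, high):
    pivot = arr[high]          # Lomuto: last element is the pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1               # pivot's final sorted position

def quickSort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        quickSort(arr, low, p - 1)   # sort the left of the pivot
        quickSort(arr, p + 1, high)  # sort the right of the pivot

data = [10, 7, 8, 9, 1, 5]
quickSort(data, 0, len(data) - 1)
print(data)  # [1, 5, 7, 8, 9, 10]
```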
Let us trace through Quick Sort step by step using the array [10, 7, 8, 9, 1, 5].
The pivot is arr[5] = 5. We run the Lomuto partition with i = -1.
After partition, 5 is in its correct sorted position at index 1. Everything to its left ([1]) is smaller, and everything to its right ([8, 9, 10, 7]) is larger.
The left sub-array is [1]. A single element is already sorted. Base case, return.
The sub-array is [8, 9, 10, 7]. The pivot is arr[5] = 7.
Now 7 is in its correct position at index 2.
The sub-array is [9, 10, 8]. The pivot is arr[5] = 8.
Now 8 is at index 3.
The sub-array is [10, 9]. The pivot is arr[5] = 9.
Both remaining sub-arrays ([] and [10]) are base cases.
Final sorted array: `[1, 5, 7, 8, 9, 10]`
Notice how each partition places exactly one element (the pivot) in its final position. After enough partitions, every element has been placed correctly.
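The pivot placements in this walkthrough can be checked mechanically. The sketch below (Python assumed; `quick_sort_traced` and `placed` are illustrative names) records each pivot value and its final index as the recursion proceeds:

```python
def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort_traced(arr, low, high, placed):
    if low < high:
        p = partition(arr, low, high)
        placed.append((arr[p], p))  # record each pivot and its final index
        quick_sort_traced(arr, low, p - 1, placed)
        quick_sort_traced(arr, p + 1, high, placed)

data = [10, 7, 8, 9, 1, 5]
placed = []
quick_sort_traced(data, 0, len(data) - 1, placed)
print(placed)  # [(5, 1), (7, 2), (8, 3), (9, 4)]
print(data)    # [1, 5, 7, 8, 9, 10]
```

The output matches the walkthrough: 5 lands at index 1, then 7 at 2, 8 at 3, and 9 at 4; the sub-arrays of size 0 or 1 never trigger a partition.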
The diagram shows the recursion tree. Orange nodes are pivots that have been placed in their final positions. Green nodes are base cases. Blue nodes are sub-arrays that still need sorting.
| Case | Time Complexity | Space Complexity | Explanation |
|---|---|---|---|
| Best | O(n log n) | O(log n) | Pivot always splits array into two equal halves |
| Average | O(n log n) | O(log n) | Expected performance with random input |
| Worst | O(n²) | O(n) | Array is already sorted and pivot is always the smallest or largest |
Stability: Quick Sort is not stable. Equal elements may change their relative order during swaps.
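To see the instability concretely, here is a small experiment (Python assumed; sorting records by a key so that equal keys carry distinguishable tags):

```python
def partition(arr, low, high, key):
    pivot = key(arr[high])
    i = low - 1
    for j in range(low, high):
        if key(arr[j]) < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high, key):
    if low < high:
        p = partition(arr, low, high, key)
        quick_sort(arr, low, p - 1, key)
        quick_sort(arr, p + 1, high, key)

# Records that compare equal on the key can end up reordered
records = [(2, 'a'), (2, 'b'), (1, 'c'), (2, 'd')]
quick_sort(records, 0, len(records) - 1, key=lambda r: r[0])
print(records)  # [(1, 'c'), (2, 'd'), (2, 'b'), (2, 'a')]: 'a', 'b', 'd' lost their input order
```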
Each partition step does O(n) work, scanning through the sub-array once. The key question is: how many levels of recursion are there?
When the pivot splits the array roughly in half each time, the recursion depth is log n. Each level does a total of O(n) work across all partitions at that level. So the total work is O(n) * O(log n) = O(n log n).
The worst case happens when the partition is maximally unbalanced. If the pivot is always the smallest or largest element, one partition gets n-1 elements and the other gets 0 elements. This creates n levels of recursion instead of log n, and each level still does O(n) work. The total becomes O(n) * O(n) = O(n²).
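The two cases can be stated as recurrences (standard notation, with c the per-element partition cost):

```latex
% Balanced split (best/average case):
T(n) = 2\,T(n/2) + cn \;\Rightarrow\; T(n) = O(n \log n)
% Maximally unbalanced split (worst case):
T(n) = T(n-1) + cn \;\Rightarrow\; T(n) = \sum_{k=1}^{n} ck = O(n^2)
```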
When does this happen? The classic scenario: the array is already sorted (ascending or descending), and you always pick the first or last element as the pivot. Every partition produces the worst possible split.
Quick Sort sorts in-place, so it does not allocate extra arrays. However, it uses the call stack for recursion. In the best case, the recursion depth is O(log n). In the worst case, it is O(n), which can cause a stack overflow on very large sorted arrays.
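A standard mitigation for the O(n) stack depth, not covered above, is to recurse only into the smaller partition and loop on the larger one. A minimal Python sketch (names illustrative):

```python
def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort_bounded(arr, low, high):
    # Recurse into the smaller partition, loop on the larger one:
    # each recursive call handles at most half the range, so the
    # stack depth is O(log n) even when the splits are lopsided.
    while low < high:
        p = partition(arr, low, high)
        if p - low < high - p:
            quick_sort_bounded(arr, low, p - 1)
            low = p + 1
        else:
            quick_sort_bounded(arr, p + 1, high)
            high = p - 1

# A sorted input of this size would need ~2000 stack frames with plain
# recursion (past CPython's default limit of 1000); here the depth stays
# tiny because the smaller side of every split is empty.
data = list(range(2000))
quick_sort_bounded(data, 0, len(data) - 1)
```

The time is still O(n²) on this input; only the space is rescued. Pivot selection, discussed next, is what fixes the time.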
The left tree shows balanced partitions (best case), where the depth is log n. The right tree shows unbalanced partitions (worst case), where the depth is n. This visual makes it clear why pivot selection matters so much.
The pivot strategy is the single most important decision in a Quick Sort implementation. A bad pivot leads to O(n²). A good pivot keeps things at O(n log n).
1. **Last or First Element (Naive):** Simple but dangerous. Degrades to O(n²) on sorted or nearly sorted data, which is surprisingly common in real applications.
2. **Random Pivot:** Pick a random element and swap it with the last element before partitioning. This makes the worst case extremely unlikely, though not impossible. The expected time complexity is O(n log n) regardless of input order.
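A sketch of the random-pivot strategy (Python assumed; `randomized_partition` is an illustrative name):

```python
import random

def randomized_partition(arr, low, high):
    # Pick a random index and move it to the end, then run Lomuto as usual
    r = random.randint(low, high)
    arr[r], arr[high] = arr[high], arr[r]
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high):
    if low < high:
        p = randomized_partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)

data = [9, 4, 7, 1, 8, 2]
quick_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 4, 7, 8, 9]
```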
3. **Median-of-Three:** Pick the median of the first, middle, and last elements. This avoids the worst case on sorted data and performs well in practice. Many production implementations use this strategy.
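A sketch of median-of-three selection layered on Lomuto partition (Python assumed; helper names are illustrative). On an already sorted input the median of first, middle, and last is the middle element, so the splits stay balanced:

```python
def median_of_three_index(arr, low, high):
    # Index of the median of arr[low], arr[mid], arr[high]
    mid = (low + high) // 2
    candidates = sorted([(arr[low], low), (arr[mid], mid), (arr[high], high)])
    return candidates[1][1]

def partition(arr, low, high):
    m = median_of_three_index(arr, low, high)
    arr[m], arr[high] = arr[high], arr[m]  # move the chosen pivot to the end
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)

# Already sorted: worst case for a naive last-element pivot,
# handled in ~log n recursion depth with median-of-three.
data = list(range(1, 2001))
quick_sort(data, 0, len(data) - 1)
```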
The table below summarizes the trade-offs:
| Strategy | Worst-Case Risk | Implementation Effort | Used In Practice |
|---|---|---|---|
| First/Last element | High (triggered by sorted arrays) | Trivial | Rarely |
| Random pivot | Extremely unlikely | Easy | Often |
| Median-of-three | Very unlikely | Moderate | Standard libraries |