
Quick Sort

Last Updated: March 31, 2026


Ashish Pratap Singh

Quick Sort is one of the most efficient and widely used sorting algorithms in practice. It follows a divide-and-conquer approach, but instead of splitting the array evenly, it selects a pivot and partitions the array into elements smaller and larger than the pivot.

This partitioning step places the pivot in its correct position, and the same process is then applied recursively to the left and right subarrays. When implemented well, Quick Sort achieves an average time complexity of O(n log n) with excellent performance in real-world scenarios.

However, its performance depends on pivot selection, and poor choices can lead to O(n²) time in the worst case.

In this chapter, you will learn how Quick Sort works step by step, how partitioning is performed, how to choose pivots effectively, and how to implement it efficiently.

What Is Quick Sort?

Quick Sort is a divide-and-conquer sorting algorithm. The core idea revolves around a single operation called partitioning.

Here is the strategy in plain English:

  1. Pick an element from the array. Call it the pivot.
  2. Rearrange the array so that all elements smaller than the pivot are on the left, and all elements greater than the pivot are on the right. The pivot ends up in its correct sorted position.
  3. Recursively apply the same process to the left and right sub-arrays.

The beauty of this approach is that after the partition step, the pivot is in its final position. It will never move again. Each recursive call places more pivots in their correct positions until the entire array is sorted.

The diagram above captures the entire algorithm. Pick a pivot, partition, then recurse on each half. That is Quick Sort.

Compare this with Merge Sort, which does the heavy lifting during the merge step (combining two sorted halves). Quick Sort does the heavy lifting during the partition step. By the time you reach the base case (sub-arrays of size 0 or 1), the array is already sorted. There is nothing to combine.

How It Works

Let us break down the algorithm into its key components.

The Partition Operation

Partitioning is the heart of Quick Sort. There are two classic partition schemes:

Lomuto Partition Scheme (simpler, easier to understand):

  • Uses the last element as the pivot
  • Maintains a boundary index i that tracks where the "smaller than pivot" region ends
  • Scans from left to right, swapping elements smaller than the pivot to the front
  • At the end, swaps the pivot into its correct position

Hoare Partition Scheme (more efficient):

  • Uses the first element as the pivot
  • Uses two pointers starting from opposite ends
  • Pointers move toward each other, swapping elements that are on the wrong side
  • Performs fewer swaps on average (about three times fewer than Lomuto)

We will focus on Lomuto for the code implementations because it is easier to follow and less error-prone during interviews. Hoare's scheme is more efficient in practice but trickier to implement correctly.
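For reference, here is a sketch of Hoare's scheme in Python (my own illustration, following the classic formulation). Note one subtlety: Hoare's partition returns a split index, but the pivot is not necessarily at that index afterward, so the left recursive call includes the returned index.

```python
def hoare_partition(arr, low, high):
    pivot = arr[low]          # first element as the pivot (Hoare)
    i, j = low - 1, high + 1
    while True:
        i += 1
        while arr[i] < pivot:  # move right past elements smaller than pivot
            i += 1
        j -= 1
        while arr[j] > pivot:  # move left past elements larger than pivot
            j -= 1
        if i >= j:             # pointers crossed: arr[low..j] <= arr[j+1..high]
            return j
        arr[i], arr[j] = arr[j], arr[i]

def quick_sort_hoare(arr, low, high):
    if low < high:
        p = hoare_partition(arr, low, high)
        # p is NOT the pivot's final index, so p stays in the left call
        quick_sort_hoare(arr, low, p)
        quick_sort_hoare(arr, p + 1, high)

nums = [10, 7, 8, 9, 1, 5]
quick_sort_hoare(nums, 0, len(nums) - 1)
print(nums)  # [1, 5, 7, 8, 9, 10]
```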

Step-by-Step Process

Here is the full Quick Sort algorithm using Lomuto partition:

  1. If the sub-array has fewer than 2 elements, return (base case)
  2. Select the last element as the pivot
  3. Initialize a boundary index i to low - 1
  4. Scan elements from low to high - 1:
  • If the current element is less than or equal to the pivot, increment i and swap the current element with the element at index i
  5. After the scan, swap the pivot (at high) with the element at i + 1
  6. The pivot is now at index i + 1, which is its final sorted position
  7. Recursively sort the sub-array to the left of the pivot
  8. Recursively sort the sub-array to the right of the pivot

The recursion tree fans out like a binary tree. In the best case (balanced partitions), the tree has log n levels, and each level does O(n) work during partitioning. That gives us O(n log n) total.

Code Implementation

The partition function does the real work: it places the pivot in its correct position and returns that index. The quickSort function handles the recursion.
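A minimal Python sketch of the two functions as described in the steps above (the original listing did not survive in this copy, so this is an illustrative reconstruction using the Lomuto scheme):

```python
def partition(arr, low, high):
    pivot = arr[high]              # last element as the pivot (Lomuto)
    i = low - 1                    # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place the pivot just after the "smaller" region: its final position
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1                   # the pivot's final index

def quick_sort(arr, low, high):
    if low < high:                 # base case: fewer than 2 elements
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # sort the left sub-array
        quick_sort(arr, p + 1, high)  # sort the right sub-array

nums = [10, 7, 8, 9, 1, 5]
quick_sort(nums, 0, len(nums) - 1)
print(nums)  # [1, 5, 7, 8, 9, 10]
```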

Example Walkthrough

Let us trace through Quick Sort step by step using the array [10, 7, 8, 9, 1, 5].

First Call: quickSort(arr, 0, 5)

The pivot is arr[5] = 5. We run the Lomuto partition with i = -1.

After partition, 5 is in its correct sorted position at index 1. Everything to its left ([1]) is smaller, and everything to its right ([8, 9, 10, 7]) is larger.

Second Call: quickSort(arr, 0, 0)

The left sub-array is [1]. A single element is already sorted. Base case, return.

Third Call: quickSort(arr, 2, 5)

The sub-array is [8, 9, 10, 7]. The pivot is arr[5] = 7.

Now 7 is in its correct position at index 2.

Fourth Call: quickSort(arr, 3, 5)

The sub-array is [9, 10, 8]. The pivot is arr[5] = 8.

Now 8 is at index 3.

Fifth Call: quickSort(arr, 4, 5)

The sub-array is [10, 9]. The pivot is arr[5] = 9.

Now 9 is at index 4.

Both remaining sub-arrays ([] and [10]) are base cases.

Final sorted array: `[1, 5, 7, 8, 9, 10]`

Notice how each partition places exactly one element (the pivot) in its final position. After enough partitions, every element has been placed correctly.
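The walkthrough above can be reproduced with a traced version of the algorithm (a sketch using the Lomuto steps described earlier, printing the array after each partition):

```python
def partition(arr, low, high):
    # Lomuto partition: last element as pivot
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort_traced(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        print(f"pivot {arr[p]} placed at index {p}: {arr}")
        quick_sort_traced(arr, low, p - 1)
        quick_sort_traced(arr, p + 1, high)

quick_sort_traced([10, 7, 8, 9, 1, 5], 0, 5)
# pivot 5 placed at index 1: [1, 5, 8, 9, 10, 7]
# pivot 7 placed at index 2: [1, 5, 7, 9, 10, 8]
# pivot 8 placed at index 3: [1, 5, 7, 8, 10, 9]
# pivot 9 placed at index 4: [1, 5, 7, 8, 9, 10]
```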

The diagram shows the recursion tree. Orange nodes are pivots that have been placed in their final positions. Green nodes are base cases. Blue nodes are sub-arrays that still need sorting.

Complexity Analysis

| Case | Time Complexity | Space Complexity | Explanation |
| --- | --- | --- | --- |
| Best | O(n log n) | O(log n) | Pivot always splits array into two equal halves |
| Average | O(n log n) | O(log n) | Expected performance with random input |
| Worst | O(n^2) | O(n) | Array is already sorted and pivot is always the smallest or largest |

Stability: Quick Sort is not stable. Equal elements may change their relative order during swaps.

Why O(n log n) on Average?

Each partition step does O(n) work, scanning through the sub-array once. The key question is: how many levels of recursion are there?

When the pivot splits the array roughly in half each time, the recursion depth is log n. Each level does a total of O(n) work across all partitions at that level. So the total work is O(n) * O(log n) = O(n log n).
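In recurrence form (a standard textbook formulation, with c the per-element partition cost):

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + cn
\quad\Longrightarrow\quad
T(n) = cn \log_2 n + n\,T(1) = O(n \log n)
```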

Why O(n^2) in the Worst Case?

The worst case happens when the partition is maximally unbalanced. If the pivot is always the smallest or largest element, one partition gets n-1 elements and the other gets 0 elements. This creates n levels of recursion instead of log n, and each level still does O(n) work. The total becomes O(n) * O(n) = O(n^2).
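The same reasoning as a recurrence: each call peels off only one element, so

```latex
T(n) = T(n-1) + cn
     = c\bigl(n + (n-1) + \dots + 1\bigr)
     = c\,\frac{n(n+1)}{2} = O(n^2)
```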

When does this happen? The classic scenario: the array is already sorted (ascending or descending), and you always pick the first or last element as the pivot. Every partition produces the worst possible split.

Space Complexity

Quick Sort sorts in-place, so it does not allocate extra arrays. However, it uses the call stack for recursion. In the best case, the recursion depth is O(log n). In the worst case, it is O(n), which can cause a stack overflow on very large sorted arrays.
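A common mitigation (a sketch, not a specific library's implementation): recurse only into the smaller partition and handle the larger one iteratively. Each recursive call then covers at most half the current range, which caps the stack depth at O(log n) even for worst-case splits.

```python
def partition(arr, low, high):
    # Lomuto partition: last element as pivot
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort_bounded(arr, low, high):
    # Recurse into the smaller side only; loop on the larger side by
    # shrinking [low, high]. Stack depth is O(log n) regardless of splits.
    while low < high:
        p = partition(arr, low, high)
        if p - low < high - p:
            quick_sort_bounded(arr, low, p - 1)
            low = p + 1
        else:
            quick_sort_bounded(arr, p + 1, high)
            high = p - 1
```

On an already-sorted input this still performs O(n^2) comparisons, but the stack stays shallow instead of growing to depth n.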

The left tree shows balanced partitions (best case), where the depth is log n. The right tree shows unbalanced partitions (worst case), where the depth is n. This visual makes it clear why pivot selection matters so much.

When to Use Quick Sort

Good Fit

  • General-purpose sorting: When you need to sort data and have no specific constraints, Quick Sort is a strong default choice
  • Memory-constrained environments: Quick Sort works in-place with O(log n) stack space, unlike Merge Sort which needs O(n) extra memory
  • Cache-friendly workloads: Since Quick Sort operates on contiguous memory, it benefits from CPU cache locality. This is a major reason it outperforms Merge Sort in practice despite both being O(n log n)
  • Average case performance matters more than worst case: If your data is not adversarial and you use randomized pivot selection, the average case dominates

Not Ideal For

  • Worst case sensitive applications: If you absolutely cannot tolerate O(n^2) (real-time systems, adversarial inputs), consider Merge Sort or Heap Sort, which guarantee O(n log n)
  • Stability required: Quick Sort is not stable. If you need equal elements to preserve their original order, use Merge Sort
  • Linked lists: Quick Sort's advantage comes from in-place array operations and cache locality. On linked lists, these advantages vanish. Merge Sort is the better choice for linked lists because it does not need random access
  • Small arrays: For very small arrays (less than 10-20 elements), Insertion Sort is faster due to lower overhead. In practice, optimized Quick Sort implementations switch to Insertion Sort for small sub-arrays
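The hybrid mentioned in the last bullet can be sketched as follows (the cutoff of 16 is illustrative; tuned thresholds vary between implementations):

```python
CUTOFF = 16  # illustrative threshold; real implementations tune this value

def insertion_sort(arr, low, high):
    # Sort arr[low..high] in place; fast for tiny ranges
    for i in range(low + 1, high + 1):
        key = arr[i]
        j = i - 1
        while j >= low and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

def partition(arr, low, high):
    # Lomuto partition: last element as pivot
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def hybrid_quick_sort(arr, low, high):
    if high - low + 1 <= CUTOFF:
        insertion_sort(arr, low, high)   # small sub-array: cheaper overhead
    else:
        p = partition(arr, low, high)
        hybrid_quick_sort(arr, low, p - 1)
        hybrid_quick_sort(arr, p + 1, high)
```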

Pivot Selection Strategies

The pivot strategy is the single most important decision in a Quick Sort implementation. A bad pivot leads to O(n^2). A good pivot keeps things at O(n log n).

1. Last or First Element (Naive): Simple but dangerous. Degrades to O(n^2) on sorted or nearly sorted data, which is surprisingly common in real applications.

2. Random Pivot: Pick a random element and swap it with the last element before partitioning. This makes the worst case extremely unlikely, though not impossible. The expected time complexity is O(n log n) regardless of input order.

3. Median-of-Three: Pick the median of the first, middle, and last elements. This avoids the worst case on sorted data and performs well in practice. Many production implementations use this strategy.
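Both strategies layer neatly on top of a Lomuto partition: choose the pivot, swap it into the last slot, then partition as usual. A sketch (helper names are my own):

```python
import random

def lomuto_partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def random_pivot_partition(arr, low, high):
    # Swap a uniformly random element into the pivot slot first
    r = random.randint(low, high)
    arr[r], arr[high] = arr[high], arr[r]
    return lomuto_partition(arr, low, high)

def median_of_three_partition(arr, low, high):
    # Take the median of first, middle, and last as the pivot
    mid = (low + high) // 2
    samples = sorted([(arr[low], low), (arr[mid], mid), (arr[high], high)])
    _, m = samples[1]                    # index of the median value
    arr[m], arr[high] = arr[high], arr[m]
    return lomuto_partition(arr, low, high)

def quick_sort_random(arr, low, high):
    if low < high:
        p = random_pivot_partition(arr, low, high)
        quick_sort_random(arr, low, p - 1)
        quick_sort_random(arr, p + 1, high)
```

With the randomized version, even a fully sorted input no longer triggers the O(n^2) pattern except with vanishing probability.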

The table below summarizes the trade-offs:

| Strategy | Worst Case Input | Implementation Effort | Used In Practice |
| --- | --- | --- | --- |
| First/Last element | Sorted arrays | Trivial | Rarely |
| Random pivot | Extremely unlikely | Easy | Often |
| Median-of-three | Very unlikely | Moderate | Standard libraries |