Last Updated: March 31, 2026
Merge Sort is a classic divide-and-conquer algorithm that breaks a problem into smaller, manageable pieces. It repeatedly splits the array into halves until each part contains a single element, then merges those parts back together in sorted order.
The key idea lies in the merge step, where two sorted arrays are combined efficiently to produce a fully sorted result. This guarantees a consistent time complexity of O(n log n), regardless of the input.
Merge Sort is stable, predictable, and widely used in practice, especially when working with linked lists or large datasets that do not fit entirely in memory.
In this chapter, you will learn how Merge Sort works step by step, how to implement it, and why it is one of the most important sorting algorithms to understand.
Merge sort is a divide-and-conquer algorithm. It breaks a problem into smaller subproblems, solves each one independently, and then combines the results. For sorting, that means three steps:

1. Divide: split the array into two halves.
2. Conquer: recursively sort each half.
3. Combine: merge the two sorted halves into one fully sorted array.
The recursion bottoms out when a subarray has zero or one element, since a single element is already sorted. Then the magic happens during the merge step, where two sorted subarrays are combined into one.
Here is what the full split-and-merge process looks like for a small array:
The top half of the diagram shows the divide phase, where we keep splitting until every subarray has one element. The bottom half shows the combine phase, where sorted subarrays merge back together, growing larger at each level until the full array is sorted.
This is what makes merge sort special. Unlike quicksort, which can degrade to O(n^2) on bad inputs, merge sort always splits the array exactly in half. That means:
Best case and worst case are identical. The input order does not matter: merge sort always does the same amount of work because the splitting and merging structure is fixed.
Let us break down the algorithm into its two core pieces: the recursive splitting logic and the merge procedure.
The high-level logic is straightforward:
1. Compute the midpoint: `mid = left + (right - left) / 2` (written this way rather than `(left + right) / 2` to avoid integer overflow in languages with fixed-width integers).
2. Recursively sort the left half: `mergeSort(arr, left, mid)`.
3. Recursively sort the right half: `mergeSort(arr, mid + 1, right)`.
4. Merge the two sorted halves: `merge(arr, left, mid, right)`.

The key insight is that by the time we call `merge()`, both halves are already sorted. The left half `arr[left..mid]` is sorted, and the right half `arr[mid+1..right]` is sorted. Our only job is to combine them.
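Put together, the logic above can be sketched in Python (a minimal sketch: the snake_case names `merge_sort` and `merge` mirror the `mergeSort`/`merge` calls above, with the same inclusive `left..right` bounds):

```python
def merge_sort(arr, left, right):
    """Sort arr[left..right] (inclusive bounds) in place."""
    if left >= right:                    # zero or one element: already sorted
        return
    mid = left + (right - left) // 2     # midpoint of the current range
    merge_sort(arr, left, mid)           # sort the left half
    merge_sort(arr, mid + 1, right)      # sort the right half
    merge(arr, left, mid, right)         # combine the two sorted halves

def merge(arr, left, mid, right):
    """Merge the sorted runs arr[left..mid] and arr[mid+1..right]."""
    left_half = arr[left:mid + 1]
    right_half = arr[mid + 1:right + 1]
    i = j = 0
    k = left
    while i < len(left_half) and j < len(right_half):
        if left_half[i] <= right_half[j]:   # <= keeps the sort stable
            arr[k] = left_half[i]
            i += 1
        else:
            arr[k] = right_half[j]
            j += 1
        k += 1
    # Exactly one half still has elements; copy them over unchanged.
    arr[k:right + 1] = left_half[i:] + right_half[j:]

data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data, 0, len(data) - 1)
print(data)  # [3, 9, 10, 27, 38, 43, 82]
```

In Python, `//` performs the integer division that the pseudocode writes as `/`.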
Merging two sorted arrays is where the real work happens. The idea uses a two-pointer technique: keep an index at the front of each half, repeatedly copy the smaller of the two front elements to the output and advance that index, then copy over whatever remains once one half is exhausted.
This merge step is what gives merge sort its name, and it runs in O(n) time because each element is visited exactly once.
At each step of the merge, we compare the front elements of both halves and pick the smaller one. Because both halves are sorted, this guarantees the merged result is also sorted.
Notice that the merge comparison uses <= (not <) when taking elements from the left and right halves. This is what makes merge sort stable: on a tie, the element from the left half is taken first, so equal elements preserve their original relative order.
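To see the stability rule in isolation, here is a standalone two-pointer merge over (key, label) records; the records and the `merge_stable` name are illustrative, not from the chapter:

```python
def merge_stable(left, right, key=lambda x: x):
    """Merge two sorted lists; on ties, take from `left` first (stable)."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):   # <= : the left half wins ties
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                    # one of these two is empty
    out.extend(right[j:])
    return out

# Records that compare equal on their numeric key keep their original order:
left  = [(1, "a"), (3, "b")]
right = [(1, "c"), (2, "d")]
print(merge_stable(left, right, key=lambda r: r[0]))
# [(1, 'a'), (1, 'c'), (2, 'd'), (3, 'b')]
```

Swapping `<=` for `<` would emit `(1, 'c')` before `(1, 'a')`, silently breaking stability.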
Let us trace through merge sort step by step with the array [38, 27, 43, 3, 9, 82, 10]. Following the recursion carefully helps you understand exactly when splits and merges happen.
The algorithm splits the array recursively until every subarray has at most one element.
At Level 3, every subarray is a single element. Single elements are sorted by definition, so the recursion starts unwinding.
Now we merge back up, combining sorted subarrays at each level.
Merge [27] and [43]: [27, 43]
Merge [38] and [27, 43]: [27, 38, 43]
Merge [3] and [9]: [3, 9]
Merge [82] and [10]: [10, 82]
Merge [3, 9] and [10, 82]: [3, 9, 10, 82]
Final merge [27, 38, 43] and [3, 9, 10, 82]: [3, 9, 10, 27, 38, 43, 82]
This is the most interesting merge because both halves have multiple elements and the pointers alternate between sides: 3, 9, and 10 come from the right half, then 27, 38, and 43 from the left half, and finally 82 from the right.
The final sorted array is [3, 9, 10, 27, 38, 43, 82].
Notice how the merge step never needs to "go back" or re-examine elements. Each element is compared and placed exactly once per merge, which is why each merge level takes O(n) total work.
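The whole trace can be reproduced with an instrumented top-down version that reports each merge as it happens (a sketch; it returns new lists rather than sorting in place, and it puts `len(arr) // 2` elements in the left half, matching the [38, 27, 43] / [3, 9, 82, 10] split above):

```python
def merge_sort_traced(arr):
    """Top-down merge sort that prints every merge it performs."""
    if len(arr) <= 1:
        return arr                      # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort_traced(arr[:mid])
    right = merge_sort_traced(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]      # leftovers from the unexhausted half
    print(f"merge {left} + {right} -> {merged}")
    return merged

# The six merges print in the same order as the trace above.
print(merge_sort_traced([38, 27, 43, 3, 9, 82, 10]))
```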
| Aspect | Value | Explanation |
|---|---|---|
| Best | O(n log n) | Always splits in half, always merges. No shortcuts. |
| Average | O(n log n) | Same structure regardless of input order. |
| Worst | O(n log n) | Same. The split-merge structure is input-independent. |
| Space | O(n) | Temporary arrays used during merge. |
| Stable | Yes | Equal elements maintain their relative order. |
Each merge step creates temporary arrays to hold the two halves being merged. At the top level, the temporary arrays hold all n elements. While deeper recursion levels use smaller arrays, those arrays are freed before the top-level merge begins (since recursion unwinds bottom-up). So the peak additional memory usage is O(n).
The call stack also uses O(log n) space due to the recursion depth, but this is dominated by the O(n) temporary arrays.
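The O(log n) recursion depth is easy to confirm empirically; `merge_sort_depth` below is a hypothetical helper, not from the text, that returns both the sorted result and the deepest recursion level reached:

```python
import math

def merge_sort_depth(arr, depth=0):
    """Merge sort on a copy of arr; also return the deepest level reached."""
    if len(arr) <= 1:
        return list(arr), depth
    mid = len(arr) // 2
    left, d_left = merge_sort_depth(arr[:mid], depth + 1)
    right, d_right = merge_sort_depth(arr[mid:], depth + 1)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, max(d_left, d_right)

_, depth = merge_sort_depth(list(range(1024, 0, -1)))
print(depth, math.ceil(math.log2(1024)))  # both are 10: halving 1024 ten times reaches 1
```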
| Property | Merge Sort | Quick Sort | Heap Sort |
|---|---|---|---|
| Worst-case time | O(n log n) | O(n^2) | O(n log n) |
| Average time | O(n log n) | O(n log n) | O(n log n) |
| Space | O(n) | O(log n) | O(1) |
| Stable | Yes | No | No |
| Cache-friendly | Moderate | Excellent | Poor |
| In-place | No | Yes | Yes |
Merge sort's trade-off is clear: guaranteed performance and stability at the cost of extra memory. Quick sort is typically faster in practice due to better cache locality, but it cannot guarantee O(n log n) without careful pivot selection.