Last Updated: March 29, 2026
We need to sort a singly linked list in ascending order. Unlike arrays, we cannot access elements by index, so algorithms like quicksort (which rely on random access for efficient partitioning) don't translate well. We need a sorting approach that works naturally with sequential access.
The key observation is that merge sort is a perfect fit for linked lists. The most expensive part of merge sort on arrays is the O(n) extra space needed during merging. But with linked lists, merging two sorted halves is essentially free in terms of space. We just rearrange pointers. The challenge is finding the middle of the list to split it, but the slow-fast pointer trick handles that in O(n).
Constraints:
- 0 <= n <= 5 * 10^4 → With up to 50,000 nodes, an O(n^2) approach like insertion sort or bubble sort would be around 2.5 billion operations in the worst case. That's too slow. We need O(n log n).
- -10^5 <= Node.val <= 10^5 → Values fit comfortably in a 32-bit integer. No overflow concerns.

The simplest idea: sidestep the linked list limitations entirely. Copy all values into an array, sort the array using a built-in sort, then write the sorted values back into the linked list nodes.
This works and it is easy to implement, but it uses O(n) extra space for the array, which the follow-up specifically asks us to avoid.
Walking through an example: after collecting the node values into an array and sorting them, we get [1, 2, 3, 4]. We then write the sorted values back into the nodes in order.
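The array-copy approach can be sketched as follows. This is a minimal illustration, assuming the usual singly linked list node definition (the names `ListNode` and `sort_list` are chosen here for clarity, not taken from the problem statement):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def sort_list(head):
    # Pass 1: collect every value into an array.
    vals = []
    node = head
    while node:
        vals.append(node.val)
        node = node.next
    # Sort with the built-in O(n log n) sort.
    vals.sort()
    # Pass 2: write the sorted values back into the existing nodes.
    node = head
    for v in vals:
        node.val = v
        node = node.next
    return head
```

Note that this rewrites node values in place rather than relinking nodes, which is why it is so simple, and also why it costs O(n) extra space for the array.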
This approach works correctly but uses O(n) extra space. Can we sort the linked list by rearranging pointers directly, using merge sort's divide-and-conquer structure?
Merge sort is the ideal sorting algorithm for linked lists. The merge step, which is the expensive O(n) space step for arrays, costs O(1) extra space with linked lists because we just rearrange pointers.
The plan is classic divide and conquer: find the middle of the list using the slow-fast pointer technique, split the list into two halves, recursively sort each half, and merge the two sorted halves back together.
Merge sort guarantees O(n log n) because each level of recursion does O(n) work (the merge step touches every node exactly once), and there are log(n) levels of recursion (each level halves the list size). The slow-fast pointer trick reliably finds the midpoint in O(n), adding only a constant factor to each level's work.
With linked lists, the merge step is particularly elegant. We do not need to allocate a temporary buffer. We just compare the heads of two sorted sublists and redirect pointers. Each node ends up in its correct position without any copying.
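The top-down approach described above can be sketched like this. It is one reasonable implementation under the usual `ListNode` definition, not the only way to write it:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def sort_list(head):
    # Base case: empty or single-node lists are already sorted.
    if head is None or head.next is None:
        return head
    # Slow-fast pointers: slow moves one step, fast moves two.
    # Starting fast at head.next leaves slow at the end of the
    # first half when fast runs off the end.
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None  # cut the list into two halves
    # Recursively sort each half, then merge by pointer surgery.
    return merge(sort_list(head), sort_list(mid))

def merge(a, b):
    # Merge two sorted lists in O(1) extra space: no copying,
    # just redirecting next pointers.
    dummy = tail = ListNode()
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b  # attach whichever sublist remains
    return dummy.next
```

The dummy node in `merge` is a common trick that avoids special-casing which sublist contributes the first node.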
The O(log n) space comes from the recursion stack. The follow-up asks for O(1) space. What if we skip splitting entirely and merge from the bottom up, starting with individual nodes and doubling the merge size each round?
The top-down approach splits the list in half recursively, which requires O(log n) stack space. The bottom-up approach flips the strategy: instead of splitting first and merging later, we start by treating every single node as a sorted sublist of size 1, merge adjacent pairs into sorted sublists of size 2, then merge pairs of size 2 into size 4, and so on.
Each pass walks through all n nodes, and there are log(n) passes since the merge size doubles each time. Because there is no recursion, the space is O(1).
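A sketch of the bottom-up version follows, again assuming the usual `ListNode` definition; the helper names `split` and `merge` are illustrative:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def sort_list(head):
    # Count the nodes so we know when the doubling passes can stop.
    n, node = 0, head
    while node:
        n += 1
        node = node.next

    dummy = ListNode(0, head)
    size = 1
    while size < n:
        prev, cur = dummy, dummy.next
        while cur:
            # Detach the next two runs of `size` nodes each...
            left = cur
            right = split(left, size)
            cur = split(right, size)
            # ...merge them, and splice the result back in.
            merged, tail = merge(left, right)
            prev.next = merged
            prev = tail
        size *= 2
    return dummy.next

def split(head, size):
    # Advance size - 1 nodes, cut the list there, return the remainder.
    for _ in range(size - 1):
        if head is None:
            break
        head = head.next
    if head is None:
        return None
    rest, head.next = head.next, None
    return rest

def merge(a, b):
    # Standard sorted merge, also returning the tail so the caller
    # can keep stitching the merged runs together.
    dummy = tail = ListNode()
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b
    while tail.next:
        tail = tail.next
    return dummy.next, tail
```

The extra bookkeeping (counting nodes, tracking the tail of each merged run, re-splicing with `prev`) is the price of removing recursion: all state lives in a handful of pointers, so the space is O(1).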