Last Updated: March 29, 2026
We have k linked lists, each already sorted, and we need to merge them into a single sorted linked list. This is a generalization of the classic "merge two sorted lists" problem. If you know how to merge two sorted lists, the question becomes: what is the most efficient way to extend that to k lists?
The naive approach is to compare all k list heads every time we need the next smallest element. But that comparison cost adds up quickly. The key observation is that we only ever care about the current head of each list, because each list is sorted. So the problem reduces to: among the k current heads, which one is the smallest? That's a question a min-heap answers very efficiently.
There's also a divide-and-conquer angle. Instead of merging all k lists at once, we can pair them up and merge each pair, then pair up the results, and repeat. This mirrors how merge sort works, and it turns out to be just as efficient as the heap approach.
Constraints worth noting:

- 0 <= k <= 10^4 → k can be quite large. Any approach that does O(k) work per element needs to account for this.
- The sum of lists[i].length will not exceed 10^4 → the total number of nodes N is at most 10,000. This is the key constraint for time complexity.
- -10^4 <= lists[i][j] <= 10^4 → node values fit in a standard integer. No overflow concerns.

The simplest way to think about this: forget the linked list structure entirely. Walk through every node in every list, collect all the values into a single array, sort that array, and then build a new linked list from the sorted values.
This completely ignores the fact that the input lists are already sorted, but it gives us a correct answer with minimal thinking.
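A minimal sketch of that brute force, assuming the usual LeetCode-style ListNode definition (the function name merge_k_lists_brute is our own):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_k_lists_brute(lists):
    # Collect every value from every list, ignoring the sorted structure.
    values = []
    for head in lists:
        node = head
        while node:
            values.append(node.val)
            node = node.next
    # Sort once, then rebuild a fresh linked list from the sorted values.
    dummy = ListNode()
    tail = dummy
    for v in sorted(values):
        tail.next = ListNode(v)
        tail = tail.next
    return dummy.next
```

With N total nodes this is O(N log N) time and O(N) extra space: correct, but it pays for a full sort even though most of the ordering work is already done.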
The brute force works, but it throws away the one structural advantage we have: each list is already sorted. What if we could exploit that property to pick the smallest element more efficiently?
Since each list is sorted, the next element in our merged result must be the smallest among the current heads of all k lists. Instead of scanning all k heads every time (which would be O(k) per element), we can use a min-heap to track these heads. The heap gives us the smallest in O(log k) time and lets us insert the next node from that list in O(log k) time.
The process is straightforward: push all k initial heads into the heap. Then repeatedly pop the smallest node, append it to the result, and push the next node from that list (if one exists) into the heap. When the heap is empty, we're done.
The min-heap always contains at most k nodes (one from each list). Since each list is sorted, the global minimum must be one of these k heads. By always popping the minimum and replacing it with the next node from the same list, we build the result in sorted order. The heap never contains more than k elements, so each push and pop is O(log k), not O(log N).
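The heap loop above can be sketched with Python's heapq, again assuming the standard ListNode definition; the list index i in each heap entry is a tie-breaker so that equal values never force a comparison between two ListNode objects, which are not orderable:

```python
import heapq

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_k_lists_heap(lists):
    # Seed the heap with the head of each non-empty list.
    # Entries are (value, list index, node); the index breaks value ties.
    heap = []
    for i, head in enumerate(lists):
        if head:
            heapq.heappush(heap, (head.val, i, head))

    dummy = ListNode()
    tail = dummy
    while heap:
        # Pop the global minimum among the current heads...
        _, i, node = heapq.heappop(heap)
        tail.next = node
        tail = tail.next
        # ...and replace it with the next node from the same list.
        if node.next:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next
```

Each of the N nodes is pushed and popped exactly once against a heap of size at most k, giving O(N log k) time and O(k) extra space.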
The heap approach is efficient, but can we achieve the same time complexity with a simpler structure? What if we just merged lists in pairs, like merge sort does?
Instead of using a heap, we can think about this problem the way merge sort works. Pair up the k lists, merge each pair (using the standard "merge two sorted lists" technique), and repeat until only one list remains.
In the first round, we merge k lists into k/2 lists. In the second round, k/2 becomes k/4. After log(k) rounds, we have a single merged list. Each round processes all N nodes exactly once (since every node appears in exactly one of the lists being merged), so the total work is O(N log k), same as the heap approach.
The beauty of this approach is that it only uses a simple merge-two-lists function as a building block, which many candidates have already implemented.
Each round of pairing halves the number of lists, so we need log(k) rounds. In each round, every node is visited exactly once during the merging of its pair. So the total work per round is O(N), and with log(k) rounds, the total time is O(N log k), matching the heap approach. This is the exact same structure as merge sort's merge phase.
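The pairwise merging above can be sketched as follows, with merge_two being the standard "merge two sorted lists" routine and the outer loop halving the number of lists each round (names and the ListNode definition are assumptions, not fixed by the problem):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_two(a, b):
    # Standard merge of two sorted linked lists.
    dummy = ListNode()
    tail = dummy
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b  # attach whichever list still has nodes
    return dummy.next

def merge_k_lists_divide(lists):
    if not lists:
        return None
    # Each round merges adjacent pairs, halving the number of lists.
    while len(lists) > 1:
        merged = []
        for i in range(0, len(lists), 2):
            if i + 1 < len(lists):
                merged.append(merge_two(lists[i], lists[i + 1]))
            else:
                merged.append(lists[i])  # odd one out carries over
        lists = merged
    return lists[0]
```

Note that this iterative bottom-up pairing uses O(1) extra space beyond the output (ignoring the temporary round list of at most k/2 references), whereas a recursive divide-and-conquer version would add O(log k) stack frames.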