Last Updated: March 29, 2026
This problem is trickier than the basic "remove duplicates" variant (LeetCode #83). In that easier version, you keep one copy of each duplicate. Here, you remove all copies. If a value appears two or more times, every node with that value is gone.
Because the list is sorted, duplicates are always adjacent. That's a huge advantage. We don't need a hash map to track counts. We just need to detect when consecutive nodes share the same value, and then skip the entire run of that value.
The real challenge is pointer management. When we delete nodes, we need to rewire the next pointer of the node before the duplicate group. But what if the duplicate group starts at the head? Then there is no "node before" it. This is where a dummy node (also called a sentinel) becomes essential. It gives us a stable anchor before the head so we can treat all positions uniformly.
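To make the sentinel idea concrete, here is a minimal sketch. The `ListNode` class and `build` helper are illustrative stand-ins (LeetCode provides an equivalent node definition); the point is that the dummy lets us delete a duplicate run at the head with the same rewiring we would use anywhere else:

```python
class ListNode:
    """Minimal singly linked list node (hypothetical; mirrors LeetCode's)."""
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build(values):
    """Helper (assumed, not part of the problem): Python list -> linked list."""
    dummy = ListNode()
    tail = dummy
    for v in values:
        tail.next = ListNode(v)
        tail = tail.next
    return dummy.next

# Duplicates start at the head: 1 -> 1 -> 2.
head = build([1, 1, 2])

# The dummy sits before the head, so there is always a "node before"
# the run we want to delete -- no special case for the head.
dummy = ListNode(0, head)
prev = dummy
prev.next = prev.next.next.next  # skip both 1s in one rewiring
print(dummy.next.val)  # prints 2
```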
- Number of nodes in range [0, 300] → The list can be empty (0 nodes). With at most 300 nodes, even O(n^2) would be fast, but O(n) is the natural fit for linked list traversal.
- -100 <= Node.val <= 100 → Small value range. We could theoretically count occurrences with an array, but that's overkill since the list is sorted and duplicates are adjacent.
- Sorted in ascending order → This is the key constraint. Duplicates are guaranteed to be consecutive, so a single scan can identify and skip all duplicates.

The most straightforward approach: first, count how many times each value appears. Then, build a new list containing only the values that appeared exactly once.
Since the list is sorted, we could also count by scanning for runs of equal values. But using a hash map keeps the logic simple and works even if the list weren't sorted.
1. First pass: traverse the list and record each value's frequency in a hash map.
2. Second pass: create a dummy node pointing to the head, with a pointer prev starting at the dummy node. For each node at prev.next, check its frequency in the map. If the count is greater than 1, skip it by setting prev.next = prev.next.next. Otherwise, advance prev to the next node.
3. Return dummy.next as the new head.

This approach works but uses extra space for the hash map. Since the list is sorted, we can detect duplicates just by comparing adjacent nodes, eliminating the need for any extra data structure.
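The two-pass idea can be sketched as follows. The function and helper names (`delete_duplicates_bruteforce`, `build`, `to_list`) are illustrative, not part of any stated API:

```python
from collections import Counter

class ListNode:
    """Minimal singly linked list node (hypothetical; mirrors LeetCode's)."""
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build(values):
    """Helper (assumed): Python list -> linked list."""
    dummy = ListNode()
    tail = dummy
    for v in values:
        tail.next = ListNode(v)
        tail = tail.next
    return dummy.next

def to_list(head):
    """Helper (assumed): linked list -> Python list, for checking results."""
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def delete_duplicates_bruteforce(head):
    """Two-pass approach: count frequencies, then splice out duplicated values."""
    # Pass 1: count every value's frequency.
    counts = Counter()
    node = head
    while node:
        counts[node.val] += 1
        node = node.next

    # Pass 2: keep only values that appeared exactly once.
    dummy = ListNode(0, head)
    prev = dummy
    while prev.next:
        if counts[prev.next.val] > 1:
            prev.next = prev.next.next  # duplicated value: drop the node
        else:
            prev = prev.next            # unique value: keep the node
    return dummy.next

print(to_list(delete_duplicates_bruteforce(build([1, 2, 3, 3, 4, 4, 5]))))  # [1, 2, 5]
```

This is O(n) time but O(n) extra space in the worst case, since the map may hold an entry per distinct value.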
Since the list is sorted, duplicates cluster together. We can detect a duplicate run by checking if current.val == current.next.val. When we find one, we advance current until we pass all nodes with that value. Then we rewire the previous unique node to skip the entire run.
The dummy node trick is essential here. Without it, we'd need special handling when duplicates start at the head. The dummy gives us a stable node before the head, so every deletion follows the same logic: prev.next = (node after the duplicate run).
Here's the key distinction from the brute force: we don't need to know the exact count. We just need to know whether a value is duplicated, which we can determine by comparing adjacent nodes.
The algorithm maintains a simple invariant: prev always points to the last node we're confident is unique. When we encounter a duplicate run, we don't know if the nodes after the run are unique yet, so prev stays put. We only advance prev after confirming that the node at prev.next has a different value from its successor.
The dummy node ensures this works even when duplicates start at the very beginning of the list. Without it, we'd need a separate check like "is the head a duplicate?" which adds complexity and is easy to get wrong.
1. Create a dummy node with dummy.next = head.
2. Initialize prev = dummy. This pointer always points to the last confirmed unique node.
3. Set current = head and traverse the list:
   - If current.next exists and current.val == current.next.val, we've found a duplicate run. Advance current until the value changes, then set prev.next = current.next to skip the entire run.
   - Otherwise, advance prev to current.
   - Either way, move current forward: current = current.next (or prev.next after a skip).
4. Return dummy.next.

Time complexity is O(n): each node is visited at most twice, once by current and once by the inner while loop, so the combined steps across the entire execution are proportional to n. Space complexity is O(1): we use only three pointers (dummy, prev, current) and no extra data structures.
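The steps above can be sketched as a single pass in Python. The function and helper names (`delete_duplicates`, `build`, `to_list`) are illustrative:

```python
class ListNode:
    """Minimal singly linked list node (hypothetical; mirrors LeetCode's)."""
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build(values):
    """Helper (assumed): Python list -> linked list."""
    dummy = ListNode()
    tail = dummy
    for v in values:
        tail.next = ListNode(v)
        tail = tail.next
    return dummy.next

def to_list(head):
    """Helper (assumed): linked list -> Python list, for checking results."""
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def delete_duplicates(head):
    """One pass, O(1) extra space: skip every run of equal adjacent values."""
    dummy = ListNode(0, head)
    prev = dummy                      # last confirmed-unique node
    current = head
    while current:
        if current.next and current.val == current.next.val:
            # Duplicate run: advance current to the run's last node...
            while current.next and current.val == current.next.val:
                current = current.next
            # ...then splice the whole run out. prev stays put, because
            # we can't yet vouch for the node now sitting at prev.next.
            prev.next = current.next
        else:
            prev = current            # current is unique; prev may advance
        current = current.next
    return dummy.next

print(to_list(delete_duplicates(build([1, 2, 3, 3, 4, 4, 5]))))  # [1, 2, 5]
print(to_list(delete_duplicates(build([1, 1, 1, 2, 3]))))        # [2, 3]
```

Note how the second example exercises the head case: the dummy node absorbs the deletion of the leading run of 1s with no special-case code.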