Last Updated: March 29, 2026
We're given an array of numbers that may contain duplicates, and we need to find all unique combinations that add up to a target sum. Each element in the array can be used at most once (based on its position), and we can't have duplicate combinations in the result.
This problem sits at the intersection of two challenges. First, it's a combination-sum problem, so we need to explore subsets that hit a specific target. Second, the input contains duplicates, which means naive backtracking will produce the same combination multiple times through different paths. For example, if candidates = [1, 1, 2] and target = 3, picking the first 1 with 2 and picking the second 1 with 2 both give [1, 2], but we should only include it once.
The key observation is that sorting the array groups duplicates together, which lets us skip over duplicate values at the same decision level during backtracking. This is the same deduplication technique used in Subsets II, applied to a combination-sum context.
1 <= candidates.length <= 100 → With n up to 100, we can't enumerate all 2^100 subsets. But the target is at most 30, and each candidate is at least 1, so any valid combination has at most 30 elements. This keeps the recursion tree shallow.

1 <= candidates[i] <= 50 → All candidates are positive. This means the running sum only increases as we add elements, allowing us to prune when the sum exceeds the target.

1 <= target <= 30 → The small target severely limits the number of valid combinations. Even though the array can have 100 elements, we'll never go deeper than 30 levels in the recursion.

The simplest idea is to ignore duplicates during generation and deal with them afterward. Sort the array, generate all possible subsets that sum to the target using standard backtracking (where each element is either included or excluded), and then remove duplicate combinations from the result using a set.
This is what you'd do if you already had a working combination-sum solution and wanted to quickly adapt it for an input with duplicates. It works, but it's wasteful because we generate combinations we know we'll throw away.
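A minimal Python sketch of this generate-then-filter idea (the function name is illustrative). Sorting first makes equal combinations identical tuples, so a set can deduplicate them:

```python
def combination_sum2_bruteforce(candidates, target):
    """Generate every subset that sums to target, then deduplicate with a set."""
    candidates.sort()  # equal combinations become identical tuples after sorting
    results = set()

    def backtrack(i, remaining, current):
        if remaining == 0:
            results.add(tuple(current))  # set absorbs duplicate combinations
            return
        if i == len(candidates) or remaining < 0:
            return
        # Include candidates[i]
        current.append(candidates[i])
        backtrack(i + 1, remaining - candidates[i], current)
        current.pop()
        # Exclude candidates[i]
        backtrack(i + 1, remaining, current)

    backtrack(0, target, [])
    return [list(c) for c in results]
```

Note that with candidates = [1, 1, 2] and target = 3, the backtracking reaches [1, 2] twice (once through each 1), and the set collapses the two copies into one.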
This approach works but generates duplicate combinations and filters them afterward. What if we avoided generating duplicates in the first place by skipping over duplicate values during the search?
Instead of generating duplicates and filtering them out, we can prevent duplicates from being generated in the first place. The trick is to sort the array and add one rule during backtracking: at any given decision level, if we've already tried a particular value, skip all subsequent occurrences of that same value.
After sorting, duplicates sit next to each other. When we're at some position in our recursion and choosing which candidate to pick next, we iterate through the remaining candidates from left to right. If candidates[i] has the same value as candidates[i-1], and we're past the starting position for this decision level, then picking candidates[i] would lead to the exact same subtree we already explored when we picked candidates[i-1]. So we skip it.
The duplicate-skipping logic hinges on one insight: after sorting, if two adjacent elements have the same value, choosing the second one at the same recursion level would produce the exact same set of combinations as choosing the first one. By only using the first occurrence at each level, we guarantee uniqueness.
But we don't lose any valid combinations. The second copy of a value is still available as a deeper recursive choice. For example, with [1, 1, 6] and target = 8, we skip the second 1 at the root level, but when we pick the first 1 and recurse, the second 1 is now at a deeper level and gets picked normally. That's how we find [1, 1, 6].
Sort the array, then define a recursive helper that takes a start index and the remaining target. Inside the helper, loop from start to the end of the array. For each index i: a. If candidates[i] exceeds the remaining target, break (all subsequent candidates are larger due to sorting).
b. If i > start and candidates[i] == candidates[i-1], skip this candidate (duplicate at the same level).
c. Add candidates[i] to the current combination.
d. Recurse with i + 1 as the new start and remaining - candidates[i] as the new target.
e. Remove the last element (backtrack).
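The steps above can be sketched in Python (a sketch of the sort-and-skip approach; names are illustrative):

```python
def combination_sum2(candidates, target):
    """Sorted backtracking that skips duplicate values at the same level."""
    candidates.sort()
    results = []
    combination = []

    def backtrack(start, remaining):
        if remaining == 0:
            results.append(combination[:])
            return
        for i in range(start, len(candidates)):
            if candidates[i] > remaining:
                break  # sorted, so every later candidate is at least this large
            if i > start and candidates[i] == candidates[i - 1]:
                continue  # duplicate value at the same decision level
            combination.append(candidates[i])
            backtrack(i + 1, remaining - candidates[i])
            combination.pop()

    backtrack(0, target)
    return results
```

The `i > start` guard is the crucial part: it only skips a repeated value when the previous copy was a sibling choice at the same level, never when it was chosen one level up, which is why [1, 1, 6] is still found for target 8.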
In the worst case the recursion tree has O(2^n) nodes, but sorting and the candidates[i] > remaining pruning dramatically reduce the actual nodes visited. The small target (at most 30) also limits recursion depth.

Space is O(target): the recursion depth is at most target since each level adds at least 1 to the sum, and the current combination list holds at most target elements.

Approach 2 is the standard optimal solution. But there's an alternative perspective: instead of sorting and skipping, what if we counted frequencies upfront and decided how many copies of each unique value to use?
Instead of sorting and skipping adjacent duplicates, we first count how many times each value appears, then backtrack over the unique values. For each unique value, we decide how many copies to use (from 0 up to the available count), then move to the next unique value.
This approach completely sidesteps the duplicate-skipping logic because we never encounter duplicates during the search. Each unique value is considered exactly once, and the "how many copies" decision naturally handles the multiplicity.
By grouping duplicates into (value, count) pairs, we transform the problem from "which indices to pick" into "how many of each value to use." This is a cleaner abstraction when duplicates are present. Each unique value is visited exactly once in the recursion, and the inner loop over copy counts handles the multiplicity naturally.
The pruning value * copies > remaining stops us from using too many copies of a value. Since values are sorted in ascending order, this prune is effective early when small values are being considered.
Count the occurrences of each value and sort the unique (value, count) pairs in ascending order of value. Then recurse over the pairs. For the current pair: a. Try using 0 copies, 1 copy, 2 copies, ..., up to count copies (or until the sum exceeds the target).
b. After choosing how many copies to use, recurse to the next pair.
c. Remove the added copies (backtrack).
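A sketch of this frequency-counting variant in Python, using `collections.Counter` to build the (value, count) pairs (function name is illustrative):

```python
from collections import Counter

def combination_sum2_counts(candidates, target):
    """Backtrack over unique (value, count) pairs, choosing a copy count each."""
    pairs = sorted(Counter(candidates).items())  # unique values with counts
    results = []
    combination = []

    def backtrack(idx, remaining):
        if remaining == 0:
            results.append(combination[:])
            return
        if idx == len(pairs):
            return
        value, count = pairs[idx]
        # Cap copies so value * copies never exceeds the remaining target.
        max_copies = min(count, remaining // value)
        for copies in range(max_copies + 1):
            combination.extend([value] * copies)
            backtrack(idx + 1, remaining - value * copies)
            for _ in range(copies):
                combination.pop()

    backtrack(0, target)
    return results
```

Because each unique value appears exactly once in `pairs`, no duplicate-skipping check is needed; the `max_copies` cap implements the value * copies > remaining prune directly.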