Last Updated: March 28, 2026
We need to count the number of distinct combinations (not permutations) of coins that add up to a target amount. The word "combinations" is doing a lot of heavy lifting here. Using coins [1, 2] to make amount 3, the combination 1+2 is the same as 2+1. We only count it once.
This distinction between combinations and permutations is exactly what makes this problem tricky. If we were counting permutations (where order matters), a simple 1D DP would work by iterating over amounts and trying all coins at each amount. But since order does not matter, we need a way to avoid counting the same set of coins in different orders.
The key observation is: if we process coins one at a time and decide how many of each coin to use before moving to the next, we naturally avoid duplicates. We never go back to a coin we have already passed, so we never generate both "1+2" and "2+1."
- 1 <= coins.length <= 300 --> Up to 300 different coin denominations. This is one dimension of our DP.
- 0 <= amount <= 5000 --> The target amount goes up to 5000. This is the other dimension.
- 1 <= coins[i] <= 5000 --> Individual coin values can be as large as the amount itself.

With coins.length up to 300 and amount up to 5000, a 2D DP of size 300 * 5000 = 1,500,000 states is very comfortable. O(coins.length * amount) runs in milliseconds. Recursive brute force without memoization would explore an exponential number of paths, so we definitely need DP.
The most natural starting point is to think recursively. We have a set of coins and need to count combinations that sum to amount. To avoid counting duplicates like 1+2 and 2+1 as separate combinations, we process coins in order. We pick a coin index i and decide: how many of coin i do we use? Then we move to coin i+1 with whatever amount remains.
Define count(i, remaining) as the number of ways to make remaining using coins from index i onward. At each step:
- If remaining == 0, we found one valid combination. Return 1.
- If remaining < 0 or i == coins.length, there is no valid combination. Return 0.
- Otherwise, either skip coin i entirely (move to i+1), or use coin i once (subtract its value, stay at index i since we can reuse it).

The sum of these two choices gives the total combinations.
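A minimal sketch of this brute-force recursion (function and variable names are my own, not from the problem statement):

```python
from typing import List

def change_brute_force(amount: int, coins: List[int]) -> int:
    """Count coin combinations summing to `amount`, processing coins in order."""
    def count(i: int, remaining: int) -> int:
        if remaining == 0:
            return 1                      # found one valid combination
        if remaining < 0 or i == len(coins):
            return 0                      # overshot, or ran out of coin types
        # Skip coin i entirely, or use it once and stay at index i (reuse allowed).
        return count(i + 1, remaining) + count(i, remaining - coins[i])

    return count(0, amount)

# change_brute_force(5, [1, 2, 5]) → 4  (5, 2+2+1, 2+1+1+1, 1+1+1+1+1)
```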
- State: count(i, remaining), where i is the current coin index and remaining is the amount left to fill.
- Base cases: if remaining == 0, return 1 (found a valid combination); if remaining < 0 or i >= coins.length, return 0 (invalid path).
- Recurrence: count(i, remaining) = count(i + 1, remaining) + count(i, remaining - coins[i]).
- Answer: count(0, amount).

Let's trace through amount = 5, coins = [1, 2, 5]:
The recursion recomputes the same (coin_index, remaining) pairs many times. What if we cached each result the first time we computed it?
The recursive solution has overlapping subproblems. The state of each subproblem is fully described by two values: the current coin index i and the remaining amount. There are at most n * (amount + 1) unique states, where n is the number of coins. If we cache the result of each state the first time we compute it, every subsequent call with the same state returns immediately.
This is the classic memoization technique. We keep the exact same recursive logic from Approach 1 but add a 2D memo table. Before computing a state, check if it is already in the memo. After computing, store the result.
Memoization guarantees that each unique (coin_index, remaining) pair is computed exactly once. Since there are n coin indices and amount + 1 possible remaining values, we compute at most n * (amount + 1) states. Each state does O(1) work (two recursive calls that either hit the cache or recurse further). This transforms the exponential brute force into a polynomial-time solution.
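To make this concrete, here is a sketch of the memoized recursion (the explicit -1-initialized table mirrors the description below; names are my own):

```python
from typing import List

def change_memo(amount: int, coins: List[int]) -> int:
    """Top-down DP: each (coin index, remaining) state is computed once."""
    n = len(coins)
    # memo[i][remaining] = -1 means "not computed yet"
    memo = [[-1] * (amount + 1) for _ in range(n)]

    def count(i: int, remaining: int) -> int:
        if remaining == 0:
            return 1
        if i == n:
            return 0
        if memo[i][remaining] != -1:
            return memo[i][remaining]          # cache hit
        result = count(i + 1, remaining)       # skip coin i
        if coins[i] <= remaining:              # guard keeps `remaining` non-negative
            result += count(i, remaining - coins[i])  # use coin i once, stay at i
        memo[i][remaining] = result
        return result

    return count(0, amount)
```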
- Create a memo table of size coins.length x (amount + 1), initialized to -1 (meaning "not computed yet").
- Define count(i, remaining) with the same logic as Approach 1.
- Before computing a state, check memo[i][remaining]. If it is not -1, return the cached value.
- After computing, store the result in memo[i][remaining].
- Return count(0, amount).

The memoization uses O(n * amount) space for the 2D table plus recursion overhead. Can we fill the table iteratively and reduce to a single 1D array?
Let's flip the approach from top-down recursion to bottom-up iteration. We define dp[j] as the number of combinations that make up amount j. The key to avoiding duplicate counting is the iteration order: we loop over coins in the outer loop and amounts in the inner loop.
Why does this order matter? When we process coin c and update dp[j], we are asking: "How many new combinations can we form that use coin c at least once to reach amount j?" The answer is dp[j - c], because dp[j - c] already contains all combinations using the coins we have processed so far (including c itself, since we are iterating forward through amounts). By processing one coin at a time, we ensure that [1, 2] and [2, 1] are never counted separately.
If we swapped the loops (amounts outer, coins inner), each amount would consider all coins at each step, and we would count permutations instead of combinations. That would solve a different problem (LeetCode #377, Combination Sum IV).
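To see the difference concretely, here is a sketch of the swapped-loop version (amounts outer, coins inner). For coins [1, 2] and amount 3 it counts 3 ordered sequences (1+1+1, 1+2, 2+1), whereas the combination count is 2:

```python
from typing import List

def count_permutations(amount: int, coins: List[int]) -> int:
    """Swapped loop order: counts ordered coin sequences, as in LeetCode #377."""
    dp = [0] * (amount + 1)
    dp[0] = 1
    for j in range(1, amount + 1):   # amounts in the outer loop
        for c in coins:              # every coin is reconsidered at every amount,
            if c <= j:               # so 1+2 and 2+1 are counted separately
                dp[j] += dp[j - c]
    return dp[amount]

# count_permutations(3, [1, 2]) → 3
```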
The outer loop over coins guarantees that we build combinations in a fixed order. When we process coin c, dp[j - c] already includes all ways to make j - c using coins processed so far, including coin c (since we iterate j forward from c). This allows unlimited reuse of coin c. But it never considers coins that come later in the array, so we never count the same multiset of coins in a different order.
The space optimization works because when computing dp[j] for a given coin, we only need the current state of the dp array. The forward iteration through amounts naturally layers in the "use this coin" option on top of the "skip this coin" option (which is simply the existing value of dp[j]).
- Create an array dp of size amount + 1, initialized to 0.
- Set dp[0] = 1 (there is one way to make amount 0: use no coins).
- For each coin c in coins:
  - For each j from c to amount: add dp[j - c] to dp[j].
- Return dp[amount].
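The steps above come together in a short sketch (the function name follows the common LeetCode signature for this problem):

```python
from typing import List

def change(amount: int, coins: List[int]) -> int:
    """Bottom-up 1D DP: coins outer, amounts inner, counting combinations."""
    dp = [0] * (amount + 1)
    dp[0] = 1                        # one way to make 0: use no coins
    for c in coins:                  # fixing the coin order prevents duplicates
        for j in range(c, amount + 1):
            dp[j] += dp[j - c]       # combinations of j that use coin c at least once
    return dp[amount]

# change(5, [1, 2, 5]) → 4
```

The forward inner loop is what permits unlimited reuse of each coin: by the time we update dp[j], dp[j - c] may already include combinations that use coin c.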