Last Updated: March 28, 2026
We need to find the minimum number of coins from a given set of denominations that sum up to a target amount. Each coin can be used unlimited times. If it is impossible to reach the target, we return -1.
This is a classic optimization problem. A greedy approach (always pick the largest coin first) seems tempting, but it does not always work. For example, with coins [1, 3, 4] and amount 6, greedy picks 4 + 1 + 1 = 3 coins, but the optimal answer is 3 + 3 = 2 coins. The problem has overlapping subproblems and optimal substructure, which are the two hallmarks of dynamic programming.
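The greedy failure can be verified directly. This is a minimal sketch (the class and method names are my own for illustration): it repeatedly takes the largest coin that still fits, which gives 3 coins for [1, 3, 4] and amount 6, even though 3 + 3 needs only 2.

```java
import java.util.Arrays;

public class GreedyDemo {
    // Greedy: repeatedly take the largest coin that fits.
    // Correct for some coin systems, but not in general.
    static int greedy(int[] coins, int amount) {
        int[] sorted = coins.clone();
        Arrays.sort(sorted);
        int count = 0;
        for (int i = sorted.length - 1; i >= 0 && amount > 0; i--) {
            count += amount / sorted[i]; // take as many of this coin as fit
            amount %= sorted[i];
        }
        return amount == 0 ? count : -1;
    }

    public static void main(String[] args) {
        // Greedy picks 4 + 1 + 1 = 3 coins; the optimum is 3 + 3 = 2 coins.
        System.out.println(greedy(new int[]{1, 3, 4}, 6)); // prints 3
    }
}
```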
The key observation: if we knew the minimum coins needed for every amount smaller than our target, we could figure out our target by trying each coin and taking the best option. That recursive structure is the foundation of every approach below.
The constraints tell us a lot about what is feasible:

- 1 <= coins.length <= 12 --> Very few coin denominations. The number of coin types is tiny, so iterating over all coins at each step is cheap.
- 0 <= amount <= 10^4 --> The target amount is at most 10,000. This means a DP table of size 10,001 is perfectly feasible.
- 1 <= coins[i] <= 2^31 - 1 --> Coin values can be very large. A coin larger than amount is useless, so we can ignore it.

The most natural way to think about this problem is recursively. To make amount A, we can pick any coin c from our set and then need to solve the smaller subproblem of making amount A - c. We try every coin and take whichever choice leads to the fewest total coins.
The base cases are straightforward: if A == 0, we need 0 coins (we are done). If A < 0, this path is invalid, so we return a large value or signal failure.
This approach explores every possible combination of coins, which is correct but extremely slow. It generates an exponential number of recursive calls because the same subproblems get solved over and over again.
- Define solve(remaining), which returns the minimum coins needed to make remaining.
- If remaining == 0, return 0.
- If remaining < 0, return -1 (invalid).
- Initialize minCoins to a large value.
- For each coin, recurse on remaining - coin. If the result is valid and less than minCoins, update minCoins.
- Return minCoins + 1 if a valid solution was found, otherwise -1.

The recursion tries all combinations. With coins [1, 5, 10] and amount 12, solve(12) calls solve(11), solve(7), and solve(2). Each of those spawns further calls. Eventually, solve(12) discovers the best path is 10 + 1 + 1 = 3 coins.
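The steps above can be sketched directly in Java (the class name is my own for illustration):

```java
public class CoinChangeBruteForce {
    // Try every coin at every step: correct, but exponential time,
    // because the same remaining amounts are re-solved many times.
    static int solve(int[] coins, int remaining) {
        if (remaining == 0) return 0;  // exact amount reached
        if (remaining < 0) return -1;  // overshot: this path is invalid
        int minCoins = Integer.MAX_VALUE;
        for (int coin : coins) {
            int sub = solve(coins, remaining - coin);
            if (sub >= 0 && sub < minCoins) {
                minCoins = sub;
            }
        }
        // +1 accounts for the coin chosen at this level.
        return minCoins == Integer.MAX_VALUE ? -1 : minCoins + 1;
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[]{1, 5, 10}, 12)); // 10 + 1 + 1 -> 3
    }
}
```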
The recursion re-solves the same subproblems many times. What if we stored the result of each subproblem the first time we computed it?
The recursive approach above is correct, but it recomputes the same subproblems over and over. The fix is memoization: store the result of solve(k) in a cache the first time we compute it, and return the cached value on subsequent calls.
Since the only parameter that changes between calls is the remaining amount, and it ranges from 0 to amount, we need a cache of size amount + 1. This transforms the exponential recursion into O(amount * coins.length) work, because each subproblem is solved exactly once and each solve does one pass over the coins.
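A memoized version is a small change to the recursion above. This sketch uses -2 as the "not computed yet" sentinel, since both 0 and -1 are legitimate answers (class name is my own for illustration):

```java
import java.util.Arrays;

public class CoinChangeMemo {
    static final int NOT_COMPUTED = -2; // sentinel: subproblem not solved yet

    static int coinChange(int[] coins, int amount) {
        int[] memo = new int[amount + 1];
        Arrays.fill(memo, NOT_COMPUTED);
        return solve(coins, amount, memo);
    }

    // Same recursion as before, but each remaining value is solved only once.
    static int solve(int[] coins, int remaining, int[] memo) {
        if (remaining == 0) return 0;
        if (remaining < 0) return -1;
        if (memo[remaining] != NOT_COMPUTED) return memo[remaining]; // cache hit
        int minCoins = Integer.MAX_VALUE;
        for (int coin : coins) {
            int sub = solve(coins, remaining - coin, memo);
            if (sub >= 0 && sub < minCoins) minCoins = sub;
        }
        memo[remaining] = (minCoins == Integer.MAX_VALUE) ? -1 : minCoins + 1;
        return memo[remaining];
    }

    public static void main(String[] args) {
        System.out.println(coinChange(new int[]{1, 3, 4}, 6)); // 3 + 3 -> 2
    }
}
```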
- Create a memo array of size amount + 1, initialized to a sentinel value (e.g., -2) meaning "not computed yet."
- Define solve(remaining) with the same logic as before.
- On entry, check whether memo[remaining] is already set. If so, return it immediately.
- Otherwise, compute the result and store it in memo[remaining] before returning.

The memoized solution has the right time complexity, but it carries recursion overhead and risks stack overflow for large amounts. What if we built the solution iteratively from the bottom up?
Instead of starting from the target amount and recursing down, we can build the solution from the ground up. We create a table dp where dp[i] represents the minimum number of coins needed to make amount i. We start from dp[0] = 0 (zero coins for zero amount) and fill the table up to dp[amount].
For each amount i from 1 to amount, we try every coin. If a coin c is less than or equal to i, then we could potentially reach amount i by using one coin c plus the optimal solution for amount i - c. We take the minimum across all such coin choices.
This is the classic unbounded knapsack pattern. The "unbounded" part means we can use each coin as many times as we want, which is why dp[i - c] (not some previous row) is the right subproblem to reference.
The recurrence dp[i] = min(dp[i - c] + 1) over all valid coins c captures the optimal substructure of the problem: the best way to make amount i must use some last coin c, and before that last coin, we need the fewest coins to make i - c. Since we fill the table from left to right, dp[i - c] is always computed before dp[i].
The initialization to amount + 1 is a clever trick. The maximum number of coins we could ever need is amount (using all 1-cent coins). So amount + 1 is a safe "infinity" that avoids integer overflow issues that would arise from using Integer.MAX_VALUE with the + 1 operation.
- Create a table dp of size amount + 1, initialized to amount + 1 (a value larger than any valid answer, acting as "infinity").
- Set dp[0] = 0 (base case: zero coins for zero amount).
- For each i from 1 to amount:
  - For each coin c in coins:
    - If c <= i, set dp[i] = min(dp[i], dp[i - c] + 1).
- If dp[amount] is still amount + 1, return -1 (unreachable). Otherwise return dp[amount].
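The bottom-up steps translate into a compact iterative solution. This sketch follows the table-filling order described above (class name is my own for illustration):

```java
import java.util.Arrays;

public class CoinChangeDP {
    static int coinChange(int[] coins, int amount) {
        int INF = amount + 1; // safe "infinity": larger than any valid answer
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, INF);
        dp[0] = 0; // base case: zero coins make amount zero
        for (int i = 1; i <= amount; i++) {
            for (int c : coins) {
                // dp[i - c] is already final because we fill left to right.
                if (c <= i && dp[i - c] + 1 < dp[i]) {
                    dp[i] = dp[i - c] + 1;
                }
            }
        }
        return dp[amount] == INF ? -1 : dp[amount];
    }

    public static void main(String[] args) {
        System.out.println(coinChange(new int[]{1, 5, 10}, 12)); // 3
        System.out.println(coinChange(new int[]{2}, 3));         // -1
    }
}
```

Note that using amount + 1 instead of Integer.MAX_VALUE as "infinity" means dp[i - c] + 1 can never overflow.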