Last Updated: March 28, 2026
We need to express a given number n as a sum of perfect square numbers (1, 4, 9, 16, 25, ...) and find the minimum number of terms in that sum. We can reuse the same perfect square as many times as we want.
For example, 12 can be expressed in several ways: 1+1+1+1+1+1+1+1+1+1+1+1 (twelve 1s), or 4+4+4 (three 4s), or 9+1+1+1 (four terms). The best is 4+4+4 using only 3 squares.
If you think about it, this is fundamentally a "minimum coins" problem in disguise. The "coins" are perfect squares (1, 4, 9, 16, ...), the "amount" is n, and we want the fewest coins to make that amount. That connection to the Coin Change problem is the key observation that guides every approach below.
The constraint is 1 <= n <= 10^4. With n up to 10,000, an O(n sqrt(n)) algorithm performs roughly 10,000 × 100 = 1,000,000 operations, which is very comfortable.

The most direct approach is recursion. For a given number n, we try subtracting every possible perfect square and recursively solve the smaller subproblem. If we subtract j*j from n, we need 1 + numSquares(n - j*j) squares. We try all valid perfect squares and take the minimum.
The base case is simple: numSquares(0) = 0 (zero needs zero squares). For any positive n, we iterate through 1, 4, 9, 16, ... up to n, and pick the choice that gives the smallest count.
1. If n == 0, return 0.
2. Initialize result to n (worst case: using all 1s gives exactly n terms).
3. For each j from 1 while j * j <= n, recursively compute numSquares(n - j * j) and update result = min(result, 1 + numSquares(n - j * j)).
4. Return result.

The recursion recomputes the same subproblems many times. What if we built solutions bottom-up, storing each result so we never recompute?
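The plain top-down recursion can be sketched as follows (the function name num_squares_rec is illustrative; without memoization this is exponential, so it is only practical for small n):

```python
def num_squares_rec(n: int) -> int:
    # Base case: zero needs zero squares.
    if n == 0:
        return 0
    # Worst case: using all 1s gives exactly n terms.
    result = n
    j = 1
    while j * j <= n:
        # Try j*j as one term, recurse on the remainder.
        result = min(result, 1 + num_squares_rec(n - j * j))
        j += 1
    return result
```

Adding memoization (e.g. Python's functools.lru_cache) turns this into the top-down equivalent of the DP below.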
Since the recursive solution has overlapping subproblems, we can use dynamic programming. We build a table dp where dp[i] is the minimum number of perfect squares that sum to i, filling it from dp[0] up to dp[n].
For each value i, we try every perfect square j*j that fits (where j*j <= i), and check: dp[i] = min(dp[i], dp[i - j*j] + 1). This is exactly the Coin Change DP, where the "coins" are perfect squares.
This is classic bottom-up DP. We build solutions for smaller values first, then use them to solve larger values. Every dp[i] is computed using only values dp[k] where k < i, so by the time we need them, they are already finalized. The inner loop tries every perfect square as a "last move" and picks the one that leads to the fewest total squares.
1. Create an array dp of size n + 1, initialized to n + 1 (a safe upper bound).
2. Set dp[0] = 0 (base case: zero needs zero squares).
3. For each i from 1 to n, and for each j from 1 while j * j <= i: dp[i] = min(dp[i], dp[i - j * j] + 1).
4. Return dp[n].

The DP solution builds the full table even when the answer is small. Can we model this as a shortest-path problem instead?
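A minimal Python sketch of the bottom-up table fill (the name num_squares_dp is illustrative):

```python
def num_squares_dp(n: int) -> int:
    # dp[i] = minimum number of perfect squares summing to i.
    # n + 1 acts as "infinity": no answer ever exceeds n terms.
    dp = [n + 1] * (n + 1)
    dp[0] = 0  # zero needs zero squares
    for i in range(1, n + 1):
        j = 1
        while j * j <= i:
            # Use j*j as the "last move" on top of the best answer for i - j*j.
            dp[i] = min(dp[i], dp[i - j * j] + 1)
            j += 1
    return dp[n]
```

Time complexity is O(n sqrt(n)): n table entries, each trying about sqrt(i) squares.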
Imagine a graph where each node is a number from 0 to n, and from any node i there is an edge to i - j*j for every perfect square j*j <= i. Finding the minimum number of perfect squares that sum to n is the same as finding the shortest path from n to 0. BFS naturally finds shortest paths in unweighted graphs.
This approach can terminate early. If the answer is 1 or 2, BFS finds it after exploring very few nodes.
1. Start BFS from node n. Create a visited array to avoid reprocessing.
2. Initialize depth = 0.
3. At each depth, process all nodes at the current level, and for each node subtract every possible j*j. If we reach 0, return depth. Otherwise enqueue unvisited results.

Both DP and BFS are O(n * sqrt(n)). Can number theory give us a faster answer?
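The level-by-level search can be sketched like this in Python (num_squares_bfs is an illustrative name; the early return fires as soon as any node reaches 0):

```python
from collections import deque

def num_squares_bfs(n: int) -> int:
    if n == 0:
        return 0
    visited = [False] * (n + 1)
    queue = deque([n])
    visited[n] = True
    depth = 0
    while queue:
        depth += 1
        # Process one full level before incrementing depth again.
        for _ in range(len(queue)):
            cur = queue.popleft()
            j = 1
            while j * j <= cur:
                nxt = cur - j * j
                if nxt == 0:
                    return depth  # shortest path found
                if not visited[nxt]:
                    visited[nxt] = True
                    queue.append(nxt)
                j += 1
    return depth
```

Because BFS explores by level, the first time 0 is reached the current depth is guaranteed to be the minimum number of squares.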
Lagrange's Four-Square Theorem states that every positive integer is a sum of at most four perfect squares, so the answer is always 1, 2, 3, or 4. Legendre's Three-Square Theorem tells us exactly when the answer is 4: when n has the form 4^a * (8b + 7). With these two results, we check for 1 (is n a perfect square?), check for 4 (Legendre's form), check for 2 (enumerate pairs), and default to 3.
1. Check if n is a perfect square. If yes, return 1.
2. While n is divisible by 4, divide by 4. If the result mod 8 equals 7, return 4.
3. For each j from 1 while j*j <= n, check if n - j*j is a perfect square. If any pair works, return 2.
4. Otherwise, return 3.
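A sketch of the number-theoretic check in Python (num_squares_math and is_square are illustrative names; math.isqrt gives an exact integer square root):

```python
from math import isqrt

def is_square(x: int) -> bool:
    r = isqrt(x)
    return r * r == x

def num_squares_math(n: int) -> int:
    # Answer 1: n is itself a perfect square.
    if is_square(n):
        return 1
    # Answer 4: Legendre's three-square theorem, n = 4^a * (8b + 7).
    m = n
    while m % 4 == 0:
        m //= 4
    if m % 8 == 7:
        return 4
    # Answer 2: n is a sum of two squares; enumerate the smaller one.
    j = 1
    while j * j <= n:
        if is_square(n - j * j):
            return 2
        j += 1
    # Otherwise, by Lagrange's four-square theorem, the answer is 3.
    return 3
```

The perfect-square and Legendre checks are O(log n) and O(sqrt n) respectively, so the pair enumeration for answer 2 dominates at O(sqrt n) iterations.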