Last Updated: March 22, 2026
We need to scan a matrix, find every cell that contains a zero, and then set that cell's entire row and column to zero. The tricky part is that we can't do it on the fly. If we see a zero at position (1,2) and immediately zero out row 1 and column 2, those new zeroes might fool us into thinking there were original zeroes in those positions. That would cascade incorrectly across the entire matrix.
So the core challenge is: how do we remember which rows and columns need to be zeroed, without confusing "original zeroes" with "zeroes we just wrote"?
The follow-up hints push us from O(mn) extra space down to O(m+n), and finally to O(1). Each step requires a cleverer way to store the "this row/column needs zeroing" information.
- m, n <= 200 means the matrix has at most 40,000 cells. Even an O(m * n) time solution is fast enough. The real optimization axis is space, not time.
- -2^31 <= matrix[i][j] <= 2^31 - 1 means values span the full integer range, so we can't use a sentinel value like Integer.MIN_VALUE to mark cells (it could be a legitimate value).

The simplest way to avoid the cascading-zero problem is to just make a copy of the entire matrix. Scan the copy for zeroes, and write zeroes into the original. Since we're reading from an untouched copy, there's no risk of confusion between original and newly-written zeroes.
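A minimal Java sketch of this copy-based approach (the class and method names here are my own, not from the original):

```java
import java.util.Arrays;

class SetZeroesCopy {
    static void setZeroes(int[][] matrix) {
        int m = matrix.length, n = matrix[0].length;

        // Untouched snapshot: we only ever read zeroes from the copy,
        // so zeroes we write into the original can't fool us.
        int[][] copy = new int[m][];
        for (int i = 0; i < m; i++) copy[i] = Arrays.copyOf(matrix[i], n);

        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                if (copy[i][j] == 0) {
                    for (int k = 0; k < n; k++) matrix[i][k] = 0; // zero row i
                    for (int k = 0; k < m; k++) matrix[k][j] = 0; // zero column j
                }
            }
        }
    }
}
```

This is O(m * n) extra space and O(m * n * (m + n)) time in the worst case, but with at most 40,000 cells it still runs comfortably fast.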
The bottleneck here isn't really time, it's space. We're copying the entire matrix just to remember where the zeroes were. But notice: we don't need to remember every cell's value. We only need to know which rows and which columns contain at least one zero. That's just m + n booleans instead of m * n integers. Can we reduce our bookkeeping to just that?
Instead of copying the whole matrix, we can make one pass to record which rows and which columns contain a zero, then make a second pass to actually zero them out. Two sets, one for zero-rows and one for zero-columns, is all we need.
This cleanly separates the "find" phase from the "write" phase, and uses O(m + n) space instead of O(m * n).
1. Create two sets, zeroRows and zeroCols.
2. First pass: whenever matrix[i][j] == 0, add i to zeroRows and j to zeroCols.
3. Second pass: whenever i is in zeroRows or j is in zeroCols, set matrix[i][j] = 0.

The two sets store at most m + n values, which is a big improvement over copying the whole matrix. But the problem's follow-up explicitly asks for O(1) space. The sets are external storage. Is there anywhere inside the matrix itself where we could store this same row/column information without corrupting the data we still need to read?
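The two-pass, two-set version can be sketched in Java as follows (again, the class name is my own):

```java
import java.util.HashSet;
import java.util.Set;

class SetZeroesSets {
    static void setZeroes(int[][] matrix) {
        int m = matrix.length, n = matrix[0].length;
        Set<Integer> zeroRows = new HashSet<>();
        Set<Integer> zeroCols = new HashSet<>();

        // Find phase: record which rows and columns contain an original zero.
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                if (matrix[i][j] == 0) { zeroRows.add(i); zeroCols.add(j); }

        // Write phase: zero every cell whose row or column was flagged.
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                if (zeroRows.contains(i) || zeroCols.contains(j)) matrix[i][j] = 0;
    }
}
```

This runs in O(m * n) time with O(m + n) extra space for the two sets.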
Here's the key insight: the first row has n cells, one per column. The first column has m cells, one per row. That's exactly the information we need, m row flags and n column flags, already sitting inside the matrix. Instead of external sets, we can use the first row to mark which columns should be zeroed, and the first column to mark which rows should be zeroed.
But there's a catch. Cell (0,0) belongs to both the first row and the first column. It can't serve double duty as both a row flag and a column flag. So we use matrix[0][0] to track whether the first row should be zeroed, and a separate boolean variable firstCol to track whether the first column should be zeroed.
The order of operations matters. We mark first, then zero the inner cells, and only at the end do we handle the first row and first column. If we zeroed the first row/column too early, we'd destroy our markers.
1. Initialize firstCol = false.
2. First pass: for each cell where matrix[i][j] == 0:
   - Set matrix[i][0] = 0 (mark the row).
   - If j == 0, set firstCol = true. Otherwise, set matrix[0][j] = 0 (mark the column).
3. Second pass: for each inner cell (i >= 1, j >= 1), if matrix[i][0] == 0 or matrix[0][j] == 0, set matrix[i][j] = 0.
4. If matrix[0][0] == 0, zero out the entire first row.
5. If firstCol is true, zero out the entire first column.

The first row and first column serve as "summary" flags. After phase 1, matrix[i][0] == 0 means "row i had an original zero somewhere," and matrix[0][j] == 0 means "column j had an original zero somewhere." We process the inner matrix first (phase 2) so that we read the markers before potentially overwriting them. Then we handle the first row and first column last, using matrix[0][0] and the firstCol boolean respectively.
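Putting the phases together, a Java sketch of the in-place O(1)-space approach (class name mine) looks like this:

```java
class SetZeroesInPlace {
    static void setZeroes(int[][] matrix) {
        int m = matrix.length, n = matrix[0].length;
        boolean firstCol = false;

        // Phase 1: mark. matrix[i][0] flags row i; matrix[0][j] flags column j.
        // Column 0 gets its own boolean so matrix[0][0] only means "zero first row".
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                if (matrix[i][j] == 0) {
                    matrix[i][0] = 0;
                    if (j == 0) firstCol = true;
                    else matrix[0][j] = 0;
                }
            }
        }

        // Phase 2: zero the inner cells, reading markers before they can be overwritten.
        for (int i = 1; i < m; i++)
            for (int j = 1; j < n; j++)
                if (matrix[i][0] == 0 || matrix[0][j] == 0) matrix[i][j] = 0;

        // Phase 3: handle the first row and first column last.
        if (matrix[0][0] == 0)
            for (int j = 0; j < n; j++) matrix[0][j] = 0;
        if (firstCol)
            for (int i = 0; i < m; i++) matrix[i][0] = 0;
    }
}
```

Total: O(m * n) time, and the only extra storage is the single firstCol boolean, so space is O(1).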
The reason we need a separate firstCol variable is that matrix[0][0] sits at the intersection of both the first row and the first column. If we used it for both, a zero in column 0 (say at position (2,0)) would set matrix[0][0] = 0, which would incorrectly signal that the first row should also be zeroed. The separate boolean breaks this ambiguity.