Last Updated: March 28, 2026
We have a grid of 0s and 1s, and we need to compute a new grid where each cell contains the shortest Manhattan distance to the nearest 0. Cells that are already 0 get distance 0. Cells that are 1 need to find how far away the closest 0 is, counting only horizontal and vertical moves (not diagonal).
The key insight here is about the direction of the search. The naive way is to start from each 1-cell and search outward for the nearest 0. But a much smarter approach is to flip the perspective: start from all the 0-cells and expand outward simultaneously. This way, the first time we reach any 1-cell, the distance at which we reach it is guaranteed to be its shortest distance. This "reverse the search direction" idea is the core observation that unlocks the efficient solutions.
The constraints:

- 1 <= m, n <= 10^4 and 1 <= m * n <= 10^4 -- The total number of cells is at most 10,000. A brute force BFS from every cell would be O((m*n)^2) = 10^8, which is borderline.
- mat[i][j] is either 0 or 1 -- Binary values only. We do not need to handle weighted distances.
- There is at least one 0 in mat -- We are guaranteed a solution exists. No need to handle the impossible case.

The most straightforward approach: for every cell that contains a 1, start a BFS and expand outward until we hit a 0. The number of steps taken is the distance. Cells that are already 0 get distance 0 immediately.
This works correctly because BFS explores cells layer by layer, so the first 0 we encounter is guaranteed to be the nearest one. The problem is that we are running a separate BFS for every 1-cell, and each BFS could visit the entire grid in the worst case.
The algorithm:

1. Initialize a result matrix the same size as mat.
2. If mat[i][j] == 0, set result[i][j] = 0.
3. Otherwise, run a BFS outward from (i, j); the first 0 encountered gives the distance.

For this brute force approach, the walkthrough is straightforward: for each 1-cell, we run a BFS outward until we find a 0. The BFS for cell (1,1) in mat = [[0,0,0],[0,1,0],[0,0,0]] immediately finds a 0 at distance 1 in all four directions.
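A minimal Python sketch of this brute force (the function name `update_matrix_brute_force` is my own, not from the original):

```python
from collections import deque

def update_matrix_brute_force(mat):
    """For each 1-cell, run an independent BFS until the nearest 0 is found."""
    m, n = len(mat), len(mat[0])
    result = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if mat[i][j] == 0:
                continue  # already distance 0
            # BFS outward from (i, j); stop at the first 0 encountered
            queue = deque([(i, j, 0)])
            seen = {(i, j)}
            while queue:
                r, c, d = queue.popleft()
                if mat[r][c] == 0:
                    result[i][j] = d
                    break
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < m and 0 <= nc < n and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append((nr, nc, d + 1))
    return result
```

Each inner BFS can scan the whole grid in the worst case, which is exactly the O((m*n)^2) cost discussed above.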
For every 1-cell, we launch an independent BFS that potentially scans the entire grid. Many of these BFS runs overlap, doing the same work repeatedly. What if we flipped the direction and searched from all 0-cells outward simultaneously?
Instead of running a separate BFS from each 1-cell, run a single BFS starting from all 0-cells simultaneously. Add every 0-cell to the queue at the start with distance 0. Then expand outward layer by layer. The first time we reach any 1-cell, the distance recorded is guaranteed to be the shortest, because BFS explores in order of increasing distance.
Think of it like dropping a stone into water at every 0-cell at the same time. The ripples expand outward, and wherever two ripples meet, the closer source wins.
Multi-source BFS is essentially the same as adding a virtual "super source" node connected to all 0-cells with edge weight 0, and then running a standard single-source BFS from that super source. Since BFS explores nodes in order of increasing distance, the first time it reaches any 1-cell, that path length is guaranteed to be the shortest. Every cell is enqueued and dequeued exactly once, so the total work is proportional to the number of cells.
Add every cell with mat[i][j] == 0 to the queue and set their distance to 0 in the result matrix, then expand layer by layer.

The multi-source BFS is optimal in time but uses O(m * n) extra space for the queue. Can we avoid the queue entirely and achieve the same time complexity with O(1) extra space?
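A sketch of the multi-source BFS in Python (function name `update_matrix_multisource` is illustrative):

```python
from collections import deque

def update_matrix_multisource(mat):
    """Single BFS seeded with every 0-cell; -1 marks unvisited cells."""
    m, n = len(mat), len(mat[0])
    dist = [[0 if v == 0 else -1 for v in row] for row in mat]
    # Seed the queue with all 0-cells at once (the "super source")
    queue = deque((i, j) for i in range(m) for j in range(n) if mat[i][j] == 0)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < m and 0 <= nc < n and dist[nr][nc] == -1:
                # First visit is guaranteed to be the shortest distance
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

Every cell is enqueued and dequeued at most once, giving O(m * n) time and O(m * n) space for the queue and distance matrix.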
The distance from any cell to the nearest 0 is determined by its neighbors. If a cell is not itself 0, its distance is 1 plus the minimum distance among its four neighbors. The challenge is that we cannot compute all four directions in a single pass because some neighbors have not been computed yet.
The trick is to split the computation into two passes. The first pass (top-left to bottom-right) considers the top and left neighbors. The second pass (bottom-right to top-left) considers the bottom and right neighbors. After both passes, every cell has considered all four directions, and the result is correct.
Any shortest path from a 1-cell to a 0-cell in a grid can be decomposed into horizontal and vertical moves. The first pass (top-left to bottom-right) correctly computes distances when the nearest 0 is above or to the left. The second pass (bottom-right to top-left) handles cases where the nearest 0 is below or to the right. Since these two passes together cover all four quadrants, the final result is correct. The m + n upper bound works because the maximum possible distance in an m x n grid is m - 1 + n - 1 = m + n - 2, so m + n is always safely larger than any real distance.
1. Initialize dist[i][j] = 0 for 0-cells and dist[i][j] = m + n (a safe upper bound) for 1-cells.
2. First pass (top-left to bottom-right): if i > 0, set dist[i][j] = min(dist[i][j], dist[i-1][j] + 1). If j > 0, set dist[i][j] = min(dist[i][j], dist[i][j-1] + 1).
3. Second pass (bottom-right to top-left): if i < m-1, set dist[i][j] = min(dist[i][j], dist[i+1][j] + 1). If j < n-1, set dist[i][j] = min(dist[i][j], dist[i][j+1] + 1).
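The two-pass dynamic programming approach can be sketched as follows (function name `update_matrix_dp` is my own):

```python
def update_matrix_dp(mat):
    """Two-pass DP: pull distances from top/left, then from bottom/right."""
    m, n = len(mat), len(mat[0])
    INF = m + n  # safe upper bound: the max real distance is m + n - 2
    dist = [[0 if mat[i][j] == 0 else INF for j in range(n)] for i in range(m)]
    # Pass 1: top-left to bottom-right, consider top and left neighbors
    for i in range(m):
        for j in range(n):
            if i > 0:
                dist[i][j] = min(dist[i][j], dist[i - 1][j] + 1)
            if j > 0:
                dist[i][j] = min(dist[i][j], dist[i][j - 1] + 1)
    # Pass 2: bottom-right to top-left, consider bottom and right neighbors
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if i < m - 1:
                dist[i][j] = min(dist[i][j], dist[i + 1][j] + 1)
            if j < n - 1:
                dist[i][j] = min(dist[i][j], dist[i][j + 1] + 1)
    return dist
```

Aside from the output matrix itself, this uses only a few scalar variables, which is the O(1) extra space the question above asks for.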