Last Updated: March 28, 2026
This is a shortest path problem in disguise. The signal spreads from node k to all other nodes through the directed, weighted edges. Each node receives the signal at the earliest possible time, which is the shortest distance from k to that node.
Once we compute the shortest path from k to every other node, the answer is the maximum of all those shortest distances. Why the maximum? Because the signal travels along all paths simultaneously, and we need every node to receive it. The last node to receive the signal determines the total delay time. If any node is unreachable, we return -1.
So the problem reduces to: compute single-source shortest paths from node k, then return the maximum distance. If any node has infinite distance, return -1.
Let's look at the constraints:

- 1 <= n <= 100 → The number of nodes is small. Even O(n^3) algorithms like Floyd-Warshall would work, but that's overkill since we only need single-source shortest paths.
- 1 <= times.length <= 6000 → Up to 6000 edges. This is a sparse-to-moderate graph. Edge list traversal is cheap.
- 0 <= w_i <= 100 → All edge weights are non-negative. This is critical because it means Dijkstra's algorithm is valid. If weights could be negative, we'd need Bellman-Ford.

The most straightforward shortest path algorithm that works on any graph (even with negative weights) is Bellman-Ford. The idea is simple: relax every edge, and repeat this V-1 times. After V-1 rounds, the shortest distances are guaranteed to be correct.
Why V-1 rounds? In the worst case, the shortest path from the source to any node passes through at most V-1 edges (visiting all other nodes along the way). Each round propagates shortest distances one hop further. So after V-1 rounds, even the longest shortest path has been fully resolved.
For this problem, we initialize a distance array where dist[k] = 0 (the source) and everything else is infinity. Then we repeatedly scan all edges and update distances when we find a shorter path.
1. Create a distance array of size n + 1 (nodes are 1-indexed), initialized to infinity. Set dist[k] = 0.
2. Repeat n - 1 times: for each edge (u, v, w) in times, if dist[u] + w < dist[v], update dist[v] = dist[u] + w.
3. Return the maximum of dist[1..n], or -1 if any node is still at infinity.

Bellman-Ford works but scans all edges every round, even when no distances change. What if we processed nodes in order of their shortest distance, so we only explore each node's edges once?
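The steps above can be sketched as follows; the function name and the exact shape of `times` (a list of `(u, v, w)` triples) are assumptions for illustration:

```python
import math

def network_delay_bellman_ford(times, n, k):
    """Bellman-Ford sketch: relax every edge n-1 times.

    times: directed edges as (u, v, w) triples; nodes are 1-indexed.
    Returns the time for the signal to reach all nodes, or -1.
    """
    dist = [math.inf] * (n + 1)
    dist[k] = 0
    # Each round propagates correct shortest distances one hop
    # further from the source, so n-1 rounds suffice.
    for _ in range(n - 1):
        for u, v, w in times:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    ans = max(dist[1:])  # the last node to receive the signal
    return -1 if ans == math.inf else ans
```

Note that `math.inf + w` stays infinite, so edges leaving unreached nodes never trigger a spurious update.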
Dijkstra's algorithm is the gold standard for single-source shortest paths when all edge weights are non-negative. The key insight is a greedy one: the unvisited node with the smallest tentative distance is guaranteed to have its final shortest distance. We can "lock in" that node and only explore its neighbors, rather than scanning every edge repeatedly.
Here's the mental model: imagine the signal spreading outward from node k like a wave. The wave hits the closest node first, then the next closest, and so on. A min-heap (priority queue) naturally gives us nodes in this order. When we pop a node from the heap, its distance is finalized. We then push its unvisited neighbors with updated distances.
Dijkstra's greedy choice is correct because all weights are non-negative. When we pop a node u with distance d from the heap, no future path to u can be shorter. Any alternative path would go through some node still in the heap (which has distance >= d), then traverse edges with non-negative weights, resulting in a total distance >= d. So d is final.
The "skip if d > dist[u]" check handles "lazy deletion" in the heap. Since we push new entries instead of decreasing keys, the heap may contain stale entries for a node whose distance was already improved. This check skips those efficiently.
1. Build an adjacency list from times.
2. Initialize a distance array to infinity, with dist[k] = 0.
3. Push (0, k) into a min-heap of (distance, node) pairs.
4. While the heap is non-empty, pop (d, u).
5. If d > dist[u], skip (we already found a shorter path to u).
6. For each neighbor (v, w) of u, if dist[u] + w < dist[v], update dist[v] and push (dist[v], v) into the heap.
7. Return the maximum of dist[1..n], or -1 if any node is unreachable.
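These steps, including the lazy-deletion check, can be sketched like this; the function name is an assumption, and `times` is again taken to be a list of `(u, v, w)` triples:

```python
import heapq
import math
from collections import defaultdict

def network_delay_dijkstra(times, n, k):
    """Dijkstra sketch with a min-heap and lazy deletion.

    times: directed edges as (u, v, w) triples; nodes are 1-indexed.
    Returns the time for the signal to reach all nodes, or -1.
    """
    adj = defaultdict(list)
    for u, v, w in times:
        adj[u].append((v, w))

    dist = [math.inf] * (n + 1)
    dist[k] = 0
    heap = [(0, k)]  # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry: u was already finalized earlier
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    ans = max(dist[1:])
    return -1 if ans == math.inf else ans
```

Pushing duplicate entries and skipping stale ones on pop is simpler than a decrease-key heap, and with at most one push per edge relaxation the heap stays O(E) in size.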