Last Updated: March 21, 2026
We need to find the longest contiguous chunk of characters in s where no character appears more than once. The key word is "substring", which means the characters must be adjacent in the original string. A subsequence like "pwke" from "pwwkew" doesn't count because the characters aren't contiguous.
The core challenge is: how do we efficiently track which characters are in our current window and quickly detect when a duplicate enters? A naive approach would check every possible substring, but we can do much better by recognizing that when we find a duplicate, we don't need to start over from scratch. We just need to shrink the window until the duplicate is gone.
0 <= s.length <= 5 * 10^4 → With n up to 50,000, O(n^2) might pass but will be tight. O(n) is what we should aim for. The constraints also allow the empty string (s.length = 0), so we need to handle that edge case.

The most direct approach is to check every possible substring and see if it has all unique characters. For each starting index i, we try every ending index j >= i and verify that the substring s[i..j] contains no duplicates. We track the longest valid substring we find.
To check whether a substring has all unique characters, we can use a set. Add each character one by one. If we ever try to add a character that's already in the set, that substring has a duplicate.
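As a minimal sketch in Python (the helper name `all_unique` is my own), the set-based uniqueness check looks like this:

```python
def all_unique(sub: str) -> bool:
    """Return True if no character appears more than once in sub."""
    seen = set()
    for ch in sub:
        if ch in seen:  # duplicate found
            return False
        seen.add(ch)
    return True
```

For example, `all_unique("pwke")` is True while `all_unique("pwwkew")` is False.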
Initialize maxLen = 0
For i from 0 to n-1:
    Start with an empty set
    For j from i to n-1:
        If s[j] is already in the set, break (any longer substring starting at i will also contain this duplicate)
        Add s[j] to the set
        Update maxLen = max(maxLen, j - i + 1)
Return maxLen

The bottleneck is that when we find a duplicate, we throw away everything and start fresh from the next index. But most of the characters we already verified are still valid. What if, instead of restarting, we just slid the left boundary forward to remove the duplicate and continued expanding to the right?
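Put together, the brute-force steps can be sketched in Python (the function name is my own):

```python
def longest_unique_brute(s: str) -> int:
    n = len(s)
    max_len = 0
    for i in range(n):
        seen = set()            # fresh set for each starting index
        for j in range(i, n):
            if s[j] in seen:    # any longer substring starting at i is also invalid
                break
            seen.add(s[j])
            max_len = max(max_len, j - i + 1)
    return max_len
```

For example, `longest_unique_brute("pwwkew")` returns 3 (from "wke"), and the empty string falls out naturally as 0.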
Instead of checking every substring independently, we maintain a window [left, right] that always contains unique characters. We expand the window by moving right forward. When s[right] is already in the window, we shrink the window by moving left forward and removing characters from our set until the duplicate is gone.
This is the classic sliding window pattern. Both pointers only move forward, so each character is added and removed from the set at most once. That gives us O(n) time.
Both left and right only move forward, never backward. Each character is added to the set exactly once (when right reaches it) and removed at most once (when left passes it). So even though there's a while loop inside the for loop, the total number of operations across the entire run is at most 2n, not n^2. This is the key insight that makes sliding window problems O(n).
Initialize left = 0, maxLen = 0, and an empty hash set
For right from 0 to n-1:
    While s[right] is already in the set:
        Remove s[left] from the set
        Increment left
    Add s[right] to the set
    Update maxLen = max(maxLen, right - left + 1)
Return maxLen

Each character is touched at most twice: once by right and once by left. So the total work is O(2n) = O(n).

This approach is already O(n), but there's a subtle inefficiency. When we hit a duplicate, we slide left forward one character at a time. What if we could jump left directly to the position right after the previous occurrence, skipping all those intermediate removals?
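The set-based sliding window can be sketched in Python as (function name is my own):

```python
def longest_unique_window(s: str) -> int:
    window = set()   # characters currently inside [left, right]
    left = 0
    max_len = 0
    for right in range(len(s)):
        # Shrink from the left until s[right]'s duplicate is evicted.
        while s[right] in window:
            window.remove(s[left])
            left += 1
        window.add(s[right])
        max_len = max(max_len, right - left + 1)
    return max_len
```

Despite the nested while loop, each character enters and leaves `window` at most once, which is exactly the amortized O(n) argument above.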
Instead of a set that only tells us whether a character is in the window, we use a hash map that stores the most recent index of each character. When we encounter a duplicate, we can jump the left pointer directly past the previous occurrence, with no need to slide one step at a time.
The key detail: when we see that s[right] already exists in the map at index prevIndex, we set left = max(left, prevIndex + 1). The max is important because left might already be past prevIndex (if we jumped past it due to an earlier duplicate). Without the max, we'd move left backward, which would break the algorithm.
The hash map approach works because we never need to look at a character more than once. Each character gets its index recorded, and when we see it again, we know exactly where to move the left boundary. The max(left, prevIndex + 1) check ensures we never move left backward, which preserves the invariant that everything in [left, right] is unique.
Compared to the hash set approach, we avoid the inner while loop entirely. The left pointer might jump multiple positions in one step, but it still only moves forward. The total work across all iterations remains O(n), but with a smaller constant factor since we're not doing incremental removals.
Initialize left = 0, maxLen = 0, and an empty hash map (character to index)
For right from 0 to n-1:
    If s[right] exists in the map and its stored index is >= left:
        Set left to map[s[right]] + 1 (jump past the previous occurrence)
    Set map[s[right]] = right
    Update maxLen = max(maxLen, right - left + 1)
Return maxLen

Each character is visited exactly once by the right pointer. The left pointer jumps forward but never backward, so its total movement is at most n.
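The optimized hash map version can be sketched in Python as (function name is my own):

```python
def longest_unique(s: str) -> int:
    last_index = {}  # most recent index of each character
    left = 0
    max_len = 0
    for right, ch in enumerate(s):
        # Only jump if the previous occurrence is inside the current window;
        # otherwise left would move backward and break the invariant.
        if ch in last_index and last_index[ch] >= left:
            left = last_index[ch] + 1  # jump past the previous occurrence
        last_index[ch] = right
        max_len = max(max_len, right - left + 1)
    return max_len
```

The string "abba" exercises the guard: at the final 'a', its stored index 0 is behind left (already 2), so left stays put and the answer is correctly 2 rather than 3.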