Last Updated: April 6, 2026
At first glance this looks like a system design question squeezed into a coding problem. We need two functions that are inverses of each other: encode maps a long URL to a short one, and decode maps that short URL back to the original. The short URL can be anything we want, as long as the round-trip works.
The interesting part is that there are multiple valid approaches, each with different trade-offs around collision handling, predictability, and determinism. This makes it a great problem for exploring hash maps and hashing strategies. The core challenge isn't algorithmic complexity (every reasonable approach is O(1) per call) but rather the design decisions you make and how you justify them.
- 1 <= url.length <= 10^4 → URLs can be moderately long, but we're not dealing with massive data. Any in-memory solution is fine.
- url is guaranteed to be a valid URL → No need for URL validation logic.
- decode is only called with URLs produced by encode → We don't need to handle invalid short URLs.

The simplest approach is to assign each URL a unique number. Keep a counter that starts at 0 and increments every time a new URL is encoded. The short URL becomes http://tinyurl.com/0, then http://tinyurl.com/1, and so on. To decode, just look up the number in a map.
This is the "brute force" of URL shortening. It's dead simple, guarantees uniqueness without any collision handling, and both operations run in O(1). The downside? The URLs are predictable. Anyone can guess that if http://tinyurl.com/41 exists, so does http://tinyurl.com/42. In a real system, that's a security concern, but for this problem it works perfectly.
- encode: store the long URL with the current counter as its key, build the short URL as http://tinyurl.com/{counter}, increment the counter, and return the short URL.
- decode: extract the numeric ID from the short URL, look it up in the hash map, and return the original long URL.

The counter-based approach works, but the short URLs are sequential integers, making them predictable and easy to enumerate. Can we generate keys that are unpredictable and harder to guess?
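The steps above can be sketched in Java as follows. The Codec class name and the http://tinyurl.com/ prefix follow the problem statement; the field names are illustrative choices:

```java
import java.util.HashMap;
import java.util.Map;

// Counter-based codec: each new URL is assigned the next integer ID.
class Codec {
    private static final String PREFIX = "http://tinyurl.com/";
    private final Map<Integer, String> idToUrl = new HashMap<>();
    private int counter = 0;

    // Store the URL under the current counter, then increment.
    public String encode(String longUrl) {
        idToUrl.put(counter, longUrl);
        return PREFIX + counter++;
    }

    // Parse the numeric ID back out of the short URL and look it up.
    public String decode(String shortUrl) {
        int id = Integer.parseInt(shortUrl.substring(PREFIX.length()));
        return idToUrl.get(id);
    }
}
```

Note that encoding the same URL twice produces two different short URLs here; the counter has no dedup map, which is acceptable since the round-trip still works.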
Instead of predictable sequential IDs, generate a random alphanumeric string for each URL. Pick 6 characters from [a-zA-Z0-9] (62 possible characters per position), giving us 62^6 ≈ 56.8 billion possible keys. The chance of collision is astronomically low for any reasonable number of URLs.
We maintain two hash maps: one from short key to long URL (for decoding), and one from long URL to short key (to avoid encoding the same URL twice). This gives us the best of both worlds: unpredictable URLs and deduplication.
Keys are drawn from the 62-character alphabet a-z, A-Z, 0-9.

- encode: check whether the long URL was already encoded. If yes, return the existing short URL. Otherwise, generate a random 6-character key, regenerating on collision, store the mapping in both directions, and return the short URL.
- decode: extract the key from the short URL and look it up in the key-to-URL map.

The random approach is excellent for most purposes, but it's non-deterministic: two different instances of the class will produce different short URLs for the same input. What if we want a deterministic approach where the same URL always maps to the same short URL, without needing the url2code dedup map?
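A sketch of the random-key approach with the two maps described above (the names code2url and url2code are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Random-key codec: 6 characters drawn from [a-zA-Z0-9].
class Codec {
    private static final String ALPHABET =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    private static final String PREFIX = "http://tinyurl.com/";
    private final Map<String, String> code2url = new HashMap<>();
    private final Map<String, String> url2code = new HashMap<>();
    private final Random random = new Random();

    // Build a random 6-character key from the 62-character alphabet.
    private String randomKey() {
        StringBuilder sb = new StringBuilder(6);
        for (int i = 0; i < 6; i++) {
            sb.append(ALPHABET.charAt(random.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    public String encode(String longUrl) {
        // Deduplicate: reuse the existing key if we've seen this URL.
        if (url2code.containsKey(longUrl)) {
            return PREFIX + url2code.get(longUrl);
        }
        String key = randomKey();
        // Regenerate on the (astronomically rare) collision.
        while (code2url.containsKey(key)) {
            key = randomKey();
        }
        code2url.put(key, longUrl);
        url2code.put(longUrl, key);
        return PREFIX + key;
    }

    public String decode(String shortUrl) {
        return code2url.get(shortUrl.substring(PREFIX.length()));
    }
}
```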
Instead of random keys, use a hash function on the long URL to produce a fixed-length key. Java's built-in hashCode() or any hash function maps the URL to an integer, which we then convert to a compact string. This approach is deterministic: the same URL always produces the same short URL, eliminating the need for a reverse map.
The trade-off is that hash collisions are possible (different URLs could produce the same hash). We handle this by checking for collisions and incrementing the key when they occur. But for the scale of this problem, collisions are extremely rare.
- encode: compute the hash code of the long URL, convert it to a positive integer, use it as the key, store the mapping, and return the short URL.
- decode: extract the key from the short URL and look it up in the map.
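A sketch of the hash-based approach using Java's built-in hashCode(), with increment-on-collision as described; the sign-bit mask is one possible way to make the key non-negative:

```java
import java.util.HashMap;
import java.util.Map;

// Hash-based codec: deterministic key from String.hashCode(),
// probing forward (key++) on the rare collision.
class Codec {
    private static final String PREFIX = "http://tinyurl.com/";
    private final Map<Integer, String> map = new HashMap<>();

    public String encode(String longUrl) {
        // Mask off the sign bit so the key is a non-negative integer.
        int key = longUrl.hashCode() & 0x7fffffff;
        // A different URL may already hold this key; increment until we
        // find a free slot or the slot already holding this same URL.
        while (map.containsKey(key) && !map.get(key).equals(longUrl)) {
            key++;
        }
        map.put(key, longUrl);
        return PREFIX + key;
    }

    public String decode(String shortUrl) {
        return map.get(Integer.parseInt(shortUrl.substring(PREFIX.length())));
    }
}
```

Because the key is derived from the URL itself, encoding the same URL twice returns the same short URL even without a reverse map, which is the determinism the previous approaches lacked.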