Last Updated: March 29, 2026
We need to build a cache that remembers key-value pairs, but has a fixed size limit. When the cache is full and we insert a new key, we evict the entry that hasn't been used for the longest time. Both "getting" and "putting" count as "using" a key.
The tricky part is the O(1) requirement. A regular hash map gives O(1) lookup and insertion, but it has no concept of ordering. A queue or list can track usage order, but finding a specific element in it takes O(n). We need a data structure that does both: fast lookup AND fast reordering.
The key observation is that we need to combine a hash map (for O(1) lookup by key) with a doubly-linked list (for O(1) removal and reinsertion to track recency). This combination, often called a "linked hash map," is the classic solution to the LRU cache problem and one of the most frequently asked design-type questions in coding interviews.
- 1 <= capacity <= 3000 -> The cache can hold at most 3000 entries. This is small enough that even an O(n) eviction per operation would pass, but the problem explicitly requires O(1).
- 0 <= key <= 10^4 -> Keys are non-negative integers up to 10,000. We could theoretically use an array of size 10,001, but that wouldn't track recency order.
- At most 2 * 10^5 calls -> With up to 200,000 operations, each must be O(1) to stay well within time limits.

The most straightforward way to think about this: keep all key-value pairs in a list, along with a timestamp or ordering number. On get, scan the list to find the key. On put, scan to check if the key exists, update or insert, and if the cache is full, scan again to find the entry with the oldest usage time and remove it.
This works correctly but every operation involves scanning the entire cache. If the cache has n entries, each get and put is O(n).
Store each entry as a tuple (key, value, lastUsed).

- get(key): scan the list for the key. If found, update its lastUsed to the current counter and return its value. Otherwise return -1.
- put(key, value): scan for the key. If found, update its value and lastUsed. If not found and the cache is full, find the entry with the smallest lastUsed, remove it, then insert the new entry.

Time complexity is O(n) per operation: get and put scan the entire list to find a key, and eviction scans again to find the oldest entry. Space is O(capacity): capacity entries, each with constant overhead.

Every operation scans the full cache, and the problem requires O(1). What if we could look up any key instantly AND know which entry is the least recently used without scanning?
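The brute-force idea can be sketched as follows. The class and field names here are illustrative, not part of the problem statement; the point is that both operations and eviction are linear scans.

```python
class BruteForceLRUCache:
    """O(n)-per-operation baseline: a flat list of [key, value, last_used]."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []   # list of [key, value, last_used]
        self.counter = 0    # global usage counter; higher = more recent

    def get(self, key):
        self.counter += 1
        for entry in self.entries:          # O(n) scan for the key
            if entry[0] == key:
                entry[2] = self.counter     # mark as most recently used
                return entry[1]
        return -1

    def put(self, key, value):
        self.counter += 1
        for entry in self.entries:          # O(n) scan for an existing key
            if entry[0] == key:
                entry[1] = value
                entry[2] = self.counter
                return
        if len(self.entries) == self.capacity:
            # O(n) scan for the entry with the smallest last_used
            oldest = min(self.entries, key=lambda e: e[2])
            self.entries.remove(oldest)
        self.entries.append([key, value, self.counter])
```

This passes the problem's small limits, but each of the 200,000 operations can touch all 3000 entries, which is exactly the scanning cost the O(1) approaches eliminate.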
We need two things simultaneously: fast key lookup and fast recency tracking. A hash map gives O(1) lookup. A doubly-linked list gives O(1) insertion and removal (if you have a pointer to the node). The trick is to combine them.
Maintain a doubly-linked list where nodes are ordered by recency. The most recently used node is right after the head, and the least recently used is right before the tail. Store a hash map from each key to its corresponding node in the linked list. When we access a key, we find its node via the map in O(1), unlink it from its current position in O(1), and move it to the front in O(1). When we need to evict, we remove the node right before the tail in O(1).
The sentinel (dummy) head and tail nodes are a small but important detail. Without them, inserting at the front and removing from the back both require null checks for empty-list edge cases. With sentinels, the list is never truly empty, so every insert and remove follows the same code path.
The hash map gives us O(1) access to any node by key. The doubly-linked list gives us O(1) removal (because each node knows its neighbors) and O(1) insertion at the front. Together, they cover every operation the LRU cache needs.
The sentinel nodes are what make this clean. Without them, removeNode would need to check "is this the head? is this the tail?" and addToFront would need to check "is the list empty?" With sentinels, the real data nodes are always sandwiched between head and tail, so the pointer surgery is always the same four lines.
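That "same four lines" of pointer surgery can be shown directly. This is a minimal sketch assuming a Node class with key, value, prev, and next fields; the helper names are illustrative.

```python
class Node:
    """Doubly-linked list node; also used for the head/tail sentinels."""
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.prev = self.next = None

def remove_node(node):
    # Unlink node from wherever it sits. No null checks: sentinels
    # guarantee every real node has both neighbors.
    node.prev.next = node.next
    node.next.prev = node.prev

def add_to_front(head, node):
    # Splice node in right after the sentinel head (most recently used).
    node.next = head.next
    node.prev = head
    head.next.prev = node
    head.next = node
```

Because the sentinels are permanent, these helpers never branch: removing the last real node or inserting into an "empty" cache follows the identical code path.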
Initialize head and tail sentinel nodes. Connect them: head.next = tail, tail.prev = head.

- get(key): if the key is not in the map, return -1. Otherwise, find the node via the map, remove it from its current position, add it right after the head (marking it as most recently used), and return its value.
- put(key, value): if the key already exists, update its value, remove it from its current position, and add it right after the head. If the key doesn't exist, create a new node, add it right after the head, and put it in the map. If the cache now exceeds capacity, remove the node right before the tail (the LRU entry), and delete its key from the map.

Time complexity is O(1) per get and put operation. Hash map lookup is O(1). Removing a node from the doubly-linked list is O(1) since we have a direct pointer to the node. Adding to the front is O(1). Eviction removes the node before tail, also O(1). Space is O(capacity): capacity nodes in the linked list and capacity entries in the hash map, plus two sentinel nodes.

Approach 2 is already O(1) per operation, which is optimal. But it requires implementing a doubly-linked list from scratch. The next approach shows how to achieve the same result using built-in language features.
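The hash map + doubly-linked list design described above can be sketched as a complete class. This is one possible implementation in Python; method names with a leading underscore are internal helpers, not part of the required interface.

```python
class Node:
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                             # key -> Node
        self.head, self.tail = Node(), Node()     # sentinel nodes
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)            # O(1): we hold a direct pointer
        self._add_to_front(node)      # mark as most recently used
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._remove(node)
            self._add_to_front(node)
            return
        if len(self.map) == self.capacity:
            lru = self.tail.prev      # least recently used real node
            self._remove(lru)
            del self.map[lru.key]     # node stores its key for this step
        node = Node(key, value)
        self.map[key] = node
        self._add_to_front(node)
```

Note that each node stores its own key: without it, evicting the node before the tail would give us a value but no way to delete the matching hash map entry in O(1).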
Most languages have a built-in data structure that combines a hash map with insertion-order tracking. Java has LinkedHashMap, Python has OrderedDict, and other languages have similar constructs. These are the same hash map + doubly-linked list combination from Approach 2, but already implemented for you.
This approach is worth knowing because interviewers sometimes ask "can you do this with a built-in?" or "how would you do this in production?" Using the standard library is the production answer. But many interviewers will then ask you to implement it from scratch, which brings you back to Approach 2.
Under the hood, these built-in structures use the exact same hash map + doubly-linked list combination from Approach 2. Java's LinkedHashMap with accessOrder=true automatically moves entries to the end on access. Python's OrderedDict.move_to_end() does it explicitly. JavaScript's Map maintains insertion order, so deleting and re-inserting effectively moves to the end.
- get(key): if the key exists, move it to the end (most recently used) and return its value. Otherwise return -1.
- put(key, value): if the key exists, remove it first. Insert/update the key-value pair (it goes to the end as the most recently used). If the cache exceeds capacity, remove the first (oldest) entry.

Time complexity is O(1) per get and put operation, the same as Approach 2 since the underlying data structure is the same hash map + doubly-linked list combination. Space is O(capacity): capacity entries.
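In Python, the steps above map almost one-to-one onto OrderedDict methods: move_to_end marks a key as most recently used, and popitem(last=False) evicts the oldest entry. A sketch:

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache built on OrderedDict, which tracks insertion order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # most recently used -> end
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)      # refresh recency before update
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict oldest (front) entry
```

This is the version to reach for in production code; in an interview, be ready to follow it up with the from-scratch doubly-linked list of Approach 2.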