
LRU Cache

Last Updated: March 29, 2026

medium
5 min read

Understanding the Problem

We need to build a cache that remembers key-value pairs, but has a fixed size limit. When the cache is full and we insert a new key, we evict the entry that hasn't been used for the longest time. Both "getting" and "putting" count as "using" a key.

The tricky part is the O(1) requirement. A regular hash map gives O(1) lookup and insertion, but it has no concept of ordering. A queue or list can track usage order, but finding a specific element in it takes O(n). We need a data structure that does both: fast lookup AND fast reordering.

The key observation is that we need to combine a hash map (for O(1) lookup by key) with a doubly-linked list (for O(1) removal and reinsertion to track recency). This combination, often called a "linked hash map," is the classic solution to the LRU cache problem and one of the most frequently asked design-type questions in coding interviews.

Key Constraints:

  • 1 <= capacity <= 3000 -> The cache can hold at most 3000 entries. This is small enough that even an O(n) eviction per operation would pass, but the problem explicitly requires O(1).
  • 0 <= key <= 10^4 -> Keys are non-negative integers up to 10,000. We could theoretically use an array of size 10,001, but that wouldn't track recency order.
  • At most 2 * 10^5 calls -> With up to 200,000 operations, each must be O(1) to stay well within time limits.

Approach 1: Brute Force (Array/List with Linear Search)

Intuition

The most straightforward way to think about this: keep all key-value pairs in a list, along with a timestamp or ordering number. On get, scan the list to find the key. On put, scan to check if the key exists, update or insert, and if the cache is full, scan again to find the entry with the oldest usage time and remove it.

This works correctly, but every operation involves scanning the entire cache. If the cache has n entries, each get and put is O(n).

Algorithm

  1. Maintain a list of entries, where each entry stores (key, value, lastUsed).
  2. Keep a global counter that increments on every operation to track recency.
  3. On get(key): scan the list for the key. If found, update its lastUsed to the current counter and return its value. Otherwise return -1.
  4. On put(key, value): scan for the key. If found, update its value and lastUsed. If not found and cache is full, find the entry with the smallest lastUsed, remove it, then insert the new entry.

Example Walkthrough

Initial state: empty cache, capacity = 2.

Code
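The article's code block isn't shown here, so the following is a minimal Python sketch of the steps above (class and variable names are my own). Each entry stores (key, value, last_used), a global counter acts as the recency clock, and every operation scans the list in O(n).

```python
class LRUCacheBruteForce:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = []   # list of [key, value, last_used]
        self.counter = 0    # global "clock" incremented on every operation

    def get(self, key: int) -> int:
        self.counter += 1
        for entry in self.entries:          # O(n) scan for the key
            if entry[0] == key:
                entry[2] = self.counter     # mark as most recently used
                return entry[1]
        return -1

    def put(self, key: int, value: int) -> None:
        self.counter += 1
        for entry in self.entries:          # O(n) scan for an existing key
            if entry[0] == key:
                entry[1] = value
                entry[2] = self.counter
                return
        if len(self.entries) == self.capacity:
            # O(n) scan for the entry with the smallest last_used (the LRU)
            lru = min(self.entries, key=lambda e: e[2])
            self.entries.remove(lru)
        self.entries.append([key, value, self.counter])
```

Note that put can scan the list twice when it triggers an eviction, which is exactly the inefficiency the next approach removes.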

Every operation scans the full cache, but the problem requires O(1). What if we could look up any key instantly AND know which entry is the least recently used without scanning?

Approach 2: HashMap + Doubly-Linked List (Optimal)

Intuition

We need two things simultaneously: fast key lookup and fast recency tracking. A hash map gives O(1) lookup. A doubly-linked list gives O(1) insertion and removal (if you have a pointer to the node). The trick is to combine them.

Maintain a doubly-linked list where nodes are ordered by recency. The most recently used node is right after the head, and the least recently used is right before the tail. Store a hash map from each key to its corresponding node in the linked list. When we access a key, we find its node via the map in O(1), unlink it from its current position in O(1), and move it to the front in O(1). When we need to evict, we remove the node right before the tail in O(1).

The sentinel (dummy) head and tail nodes are a small but important detail. Without them, inserting at the front and removing from the back both require null checks for empty-list edge cases. With sentinels, the list is never truly empty, so every insert and remove follows the same code path.

Algorithm

  1. Create a doubly-linked list with dummy head and tail sentinel nodes. Connect them: head.next = tail, tail.prev = head.
  2. Create a hash map that maps each key to its node in the linked list.
  3. On get(key): if the key is not in the map, return -1. Otherwise, find the node via the map, remove it from its current position, add it right after the head (marking it as most recently used), and return its value.
  4. On put(key, value): if the key already exists, update its value, remove it from its current position, and add it right after the head. If the key doesn't exist, create a new node, add it right after the head, and put it in the map. If the cache now exceeds capacity, remove the node right before the tail (the LRU entry), and delete its key from the map.

Example Walkthrough

Initial state: empty list (HEAD <-> TAIL sentinels) and empty map.

Code
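The article's code block isn't shown here, so here is a Python sketch of the algorithm above (names are my own). Sentinel head and tail nodes mean _remove and _add_front never need null checks, and every operation is O(1).

```python
class Node:
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}           # key -> node in the linked list
        self.head = Node()      # dummy head sentinel
        self.tail = Node()      # dummy tail sentinel
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        # unlink a node from its current position in O(1)
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_front(self, node):
        # insert right after head: the most-recently-used position
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._add_front(node)   # accessing a key makes it MRU
        return node.value

    def put(self, key: int, value: int) -> None:
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._remove(node)
            self._add_front(node)
            return
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
        if len(self.map) > self.capacity:
            lru = self.tail.prev      # node right before tail is the LRU
            self._remove(lru)
            del self.map[lru.key]
```

Storing the key inside each node is what makes eviction O(1): when we remove the node before the tail, we can delete its map entry without any reverse lookup.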

Approach 2 is already O(1) per operation, which is optimal. But it requires implementing a doubly-linked list from scratch. The next approach shows how to achieve the same result using built-in language features.

Approach 3: Using Built-in LinkedHashMap/OrderedDict

Intuition

Most languages have a built-in data structure that combines a hash map with insertion-order tracking. Java has LinkedHashMap, Python has OrderedDict, and other languages have similar constructs. These are the same hash map + doubly-linked list combination from Approach 2, but already implemented for you.

This approach is worth knowing because interviewers sometimes ask "can you do this with a built-in?" or "how would you do this in production?" Using the standard library is the production answer. But many interviewers will then ask you to implement it from scratch, which brings you back to Approach 2.

Algorithm

  1. Use a language-provided ordered dictionary or linked hash map.
  2. On get(key): if the key exists, move it to the end (most recently used) and return its value. Otherwise return -1.
  3. On put(key, value): if the key exists, remove it first. Insert/update the key-value pair (it goes to the end as the most recently used). If the cache exceeds capacity, remove the first (oldest) entry.

Example Walkthrough

Initial state: empty ordered map, capacity = 2.

Code
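The article's code block isn't shown here, so here is a Python sketch using collections.OrderedDict, which the article names for this approach. move_to_end marks a key as most recently used and popitem(last=False) evicts the oldest entry, both without any custom list code.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()   # insertion-ordered key -> value map

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value      # insert or update; key is now newest
        if len(self.cache) > self.capacity:
            # the first entry in insertion order is the LRU
            self.cache.popitem(last=False)
```

Under the hood OrderedDict is the same hash map plus doubly-linked list from Approach 2, which is why all three methods stay O(1).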