Last Updated: January 22, 2026
LRU stands for Least Recently Used. An LRU cache is a cache that uses the LRU replacement policy: when the cache reaches its capacity, it evicts the least recently accessed item.
In performance-critical systems (like web servers, databases, or OS memory management), caching helps avoid expensive computations or repeated data fetching. But cache memory is limited, so when it's full, we need a policy to decide which item to remove.
LRU chooses the least recently accessed item, based on the assumption that:
“If you haven’t used something for a while, you probably won’t need it soon.”
The LRU strategy is both intuitive and effective. It reflects real-world usage patterns. We tend to access the same small subset of items frequently, and rarely go back to older, untouched entries.
This makes LRU a popular default caching policy in many systems where speed and memory efficiency are critical.
Suppose we have an LRU cache with capacity = 3, and we perform the following operations:
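The interactive simulation that originally listed the operations isn't reproduced here; one sequence consistent with the final state shown below (an illustrative assumption) is:

```
put(1, A)   // cache: {1:A}
put(2, B)   // cache: {1:A, 2:B}
put(3, C)   // cache: {1:A, 2:B, 3:C}  -- now at capacity
get(1)      // returns A; usage order (LRU -> MRU) becomes 2, 3, 1
put(4, D)   // cache is full, so key 2 (the LRU) is evicted; order becomes 3, 1, 4
```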
Final cache state: {3:C, 1:A, 4:D} (in order of usage from least to most recent)
In this chapter, we will explore the low-level design of an LRU cache.
Let's start by clarifying the requirements:
Before starting the design, it's important to ask thoughtful questions to uncover hidden assumptions, clarify ambiguities, and define the system's scope more precisely.
Here is an example of how a discussion between the candidate and the interviewer might unfold:
Candidate: Should the LRU Cache support generic key-value pairs, or should we restrict it to specific data types?
Interviewer: The cache should be generic and support any type of key-value pair, as long as keys are hashable.
Candidate: Should the cache operations be limited to get and put, or do we also need to support deletion?
Interviewer: For now, we can limit operations to get and put.
Candidate: What should the get operation return if the key is not found in the cache?
Interviewer: You can return either null or a sentinel value like -1.
Candidate: How should we handle a put operation on an existing key? Should it be treated as a fresh access and move the key to the most recently used position?
Interviewer: Yes, an update through put should count as usage. The key should be moved to the front as the most recently used.
Candidate: Will the cache be used in a multi-threaded environment? Do we need to ensure thread safety?
Interviewer: Good question. Assume this cache will be used in a multi-threaded server environment. It must be thread-safe.
Candidate: What are the performance expectations for the get and put operations?
Interviewer: Both get and put operations must run in O(1) time on average.
After gathering the details, we can summarize the key system requirements.
- get(key) operation: returns the value if the key exists, otherwise returns null or -1
- put(key, value) operation: inserts a new key-value pair or updates the value of an existing key
- get and put operations should update the recency of the accessed or inserted item
- When the cache exceeds its capacity, the least recently used item should be evicted automatically
- The cache should be generic and support any type of key-value pair (<K, V>), provided the keys are hashable
- get and put operations must run in O(1) time on average
- The cache must be safe to use from multiple threads

Now that we understand what we're building, let's identify the building blocks of our system.
Unlike systems that model real-world concepts (such as users, products, or bookings), the design of an LRU Cache is centered around choosing the right data structures and internal abstractions to achieve the required functionality and performance.
The core challenge here is twofold:
- looking up any entry by its key in O(1)
- keeping track of usage recency so the least recently used entry can be found and evicted in O(1)
Let's walk through our requirements and identify what needs to exist in our system.
"Both
getandputoperations must run in O(1) time"
To efficiently retrieve values by key, we need a data structure that supports constant-time key access.
The natural choice is a HashMap. It provides O(1) average-case lookup and update. When someone calls get("user_123"), we need to find that entry immediately, not scan through a list.
But here's the problem: a HashMap doesn't maintain any order. It can't tell us which entry was accessed least recently. If we only used a HashMap, we'd have to scan all entries to find the LRU item during eviction, making that operation O(n).
"If the cache exceeds its capacity, automatically evict the least recently used item"
The second challenge is maintaining the recency order of cache entries so we can:
- identify the least recently used item instantly when we need to evict
- move an item to the most recently used position whenever it is accessed or updated
An array won't work because inserting or moving elements from the middle involves shifting, which is O(n). A regular linked list won't work either because finding a specific node to move requires O(n) traversal.
The key insight is using a Doubly Linked List. Each node maintains references to both its prev and next nodes, allowing us to:
- remove a node from anywhere in the list in O(1)
- insert a node at the front (most recently used position) in O(1)
- move an existing node to the front in O(1) by removing and re-inserting it
But how do we get a direct reference to a node without traversing the list? That's where the HashMap comes back into play.
The magic of achieving O(1) for both get and put lies in combining a HashMap and a Doubly Linked List: the HashMap maps each key directly to its node in the list, while the Doubly Linked List keeps those nodes ordered from most to least recently used.
This combination gives us the best of both worlds:
- the HashMap provides O(1) lookup of any entry by key
- the Doubly Linked List provides O(1) reordering on every access and O(1) eviction from the tail
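As a rough sketch of how the two structures fit together (all names here are placeholders; the real classes are defined later in the chapter):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only (placeholder names): the map finds an entry's node in O(1), while the
// doubly linked list keeps nodes ordered from most recently used to least recently used.
class LruSketch<K, V> {
    static class ListNode<K2, V2> {          // one cache entry, linked to its neighbors
        K2 key;
        V2 value;
        ListNode<K2, V2> prev, next;
    }

    int capacity;                                    // maximum number of entries
    Map<K, ListNode<K, V>> index = new HashMap<>();  // key -> node, for O(1) lookup
    ListNode<K, V> head, tail;                       // ends of the usage-ordered list
}
```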
Beyond the HashMap and Doubly Linked List, we need two more classes to encapsulate and organize our logic:
- Node: A simple internal class that represents an individual entry in the cache and a node in the linked list. It stores the key-value pair and maintains pointers to adjacent nodes.
- LRUCache: The main class that exposes the public cache API and coordinates all operations. It owns both the HashMap and the DoublyLinkedList.

Why store the key in the Node?
When we evict from the tail, we need to remove the entry from the HashMap too. The HashMap needs the key to remove an entry, so each Node must remember its key.
Here's how these entities relate to each other:
These entities form the core abstractions of our LRU Cache. They enable the system to maintain a fixed-size cache, support constant-time access and updates, and evict the least recently used entry when necessary.
With our entities identified, let's define their attributes, behaviors, and relationships.
Now that we know what entities we need, let's flesh out their details. For each class, we'll define what data it holds (attributes) and what it can do (methods). Then we'll look at how these classes connect to each other.
We'll work bottom-up: simple types first, then the classes with real logic. This order makes sense because complex classes depend on simpler ones.
Node<K, V>
The Node class represents an individual entry in the cache and serves as a node in the doubly linked list.
When we evict the LRU item from the tail, we need to remove it from the HashMap. The HashMap's remove() method requires the key. Without storing the key in the Node, we'd need to iterate through the HashMap to find which key maps to this Node, making eviction O(n).
The Node class is intentionally simple. It's a data container with minimal behavior. The prev and next pointers are mutable because the node's position in the list changes as items are accessed.
DoublyLinkedList<K, V>
A utility class that manages the MRU (Most Recently Used) to LRU (Least Recently Used) ordering of cache entries. It uses two dummy (sentinel) nodes, head and tail, that never hold real data.
Without them, we'd need special cases for:
- inserting into an empty list
- removing the first or last real node
- removing the only node in the list
The dummy nodes eliminate all edge cases. The real data always lives between head and tail, so every real node always has a valid prev and next pointer.
LRUCache<K, V>
The main class that provides the public API (get and put) and manages the overall cache logic.
Key Design Principles:
- Encapsulation: callers interact only with get and put. They don't know about nodes, linked lists, or eviction mechanics.
- Thread safety: get and put should be synchronized to prevent race conditions in multi-threaded environments.

How do these classes connect? Let's examine the relationships.
Composition means one object owns another. When the owner is destroyed, the owned objects are destroyed too.
Association means one object uses another, but doesn't control its lifecycle.
Unlike more complex systems like Tic-Tac-Toe, the LRU Cache doesn't require heavyweight design patterns like Strategy, Observer, or Singleton. The problem is fundamentally about data structure composition rather than behavioral flexibility.
You might think: "What if we want LFU (Least Frequently Used) or FIFO eviction instead?" The Strategy pattern seems appealing.
However, for this interview problem, we're explicitly building an LRU cache. The eviction policy is the identity of the system, not a pluggable behavior. Adding a Strategy interface when there's only one implementation is over-engineering.
If the interviewer asks about supporting multiple eviction policies, you could refactor then. But don't anticipate requirements that haven't been stated.
Unlike a game system where multiple components need notifications (scoreboard, analytics, replay), a cache is self-contained. There's no external entity that needs to know when items are evicted.
The lesson here is that good design isn't about applying patterns everywhere. It's about using the right tool for the job. The LRU Cache's elegance comes from smart data structure combination, not pattern complexity.
Now that we've designed our classes and relationships, let's put your understanding to the test before we reveal the solution.
Before looking at the complete implementation, try building the LRU Cache yourself. Below you'll find template stubs for all the classes we designed. Each stub includes method signatures and // TODO comments where you need to add the implementation logic.
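The interactive stubs themselves aren't reproduced here; a minimal sketch of what they might look like, following the class shapes designed above, is:

```java
import java.util.HashMap;
import java.util.Map;

class Node<K, V> {
    final K key;
    V value;
    Node<K, V> prev, next;

    Node(K key, V value) {
        this.key = key;
        this.value = value;
    }
}

class DoublyLinkedList<K, V> {
    private final Node<K, V> head = new Node<>(null, null); // dummy node at the MRU end
    private final Node<K, V> tail = new Node<>(null, null); // dummy node at the LRU end

    DoublyLinkedList() {
        head.next = tail;
        tail.prev = head;
    }

    void addFirst(Node<K, V> node) {
        // TODO: insert node right after head
    }

    void remove(Node<K, V> node) {
        // TODO: unlink node from its neighbors
    }

    void moveToFront(Node<K, V> node) {
        // TODO: remove node, then add it back at the front
    }

    Node<K, V> removeLast() {
        // TODO: detach and return the node just before tail, or null if the list is empty
        return null;
    }
}

class LRUCache<K, V> {
    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final DoublyLinkedList<K, V> list = new DoublyLinkedList<>();

    LRUCache(int capacity) {
        this.capacity = capacity;
    }

    public synchronized V get(K key) {
        // TODO: return the value if present and mark it as most recently used; otherwise return null
        return null;
    }

    public synchronized void put(K key, V value) {
        // TODO: insert or update the entry, evicting the LRU entry if capacity is exceeded
    }
}
```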
Your task: Implement all the // TODO sections based on the design we discussed. Once complete, run the demo to verify your implementation produces the expected output.
Once you've implemented all the classes and verified the output matches, compare your solution with the complete implementation in the next section.
Now let's translate our design into working code. We'll build bottom-up: the Node class first, then the DoublyLinkedList, and finally the LRUCache that ties everything together.
The Node is the fundamental building block. It stores the key-value pair and maintains links to adjacent nodes in the list.
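The original listing isn't shown here; a sketch consistent with the description (names follow the design discussion above) might look like this:

```java
// An individual cache entry and, at the same time, a node in the doubly linked list.
class Node<K, V> {
    final K key;      // kept so eviction can also remove the entry from the HashMap
    V value;          // the cached value; mutable so put() can update it in place
    Node<K, V> prev;  // previous node (towards the MRU end)
    Node<K, V> next;  // next node (towards the LRU end)

    Node(K key, V value) {
        this.key = key;
        this.value = value;
    }
}
```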
A few things to note:
- Each node keeps both prev and next pointers for O(1) pointer manipulation.
- Each node stores its key, because the HashMap's remove() requires the key, so the node must remember it.

This class manages the ordering of cache entries. The head represents the most recently used position, and the tail represents the least recently used.
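A sketch of this class, matching the method walkthrough that follows (the exact original listing isn't reproduced here, so treat this as an assumption):

```java
// Maintains nodes in usage order: head side = most recently used, tail side = least recently used.
class DoublyLinkedList<K, V> {
    private final Node<K, V> head = new Node<>(null, null); // dummy: never holds real data
    private final Node<K, V> tail = new Node<>(null, null); // dummy: never holds real data

    DoublyLinkedList() {
        head.next = tail;  // empty list: HEAD <-> TAIL
        tail.prev = head;
    }

    // Insert right after the dummy head (the MRU position).
    void addFirst(Node<K, V> node) {
        node.prev = head;
        node.next = head.next;
        head.next.prev = node;  // old first node now points back to the new node
        head.next = node;
    }

    // Unlink a node in O(1) using its own prev/next references.
    void remove(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // Mark a node as most recently used: detach it and re-insert it at the front.
    void moveToFront(Node<K, V> node) {
        remove(node);
        addFirst(node);
    }

    // Detach and return the least recently used node (just before the dummy tail), or null if empty.
    Node<K, V> removeLast() {
        if (tail.prev == head) {
            return null;  // no real nodes exist
        }
        Node<K, V> lru = tail.prev;
        remove(lru);
        return lru;
    }
}
```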
Let's walk through each method:
Constructor: Creates dummy head and tail nodes with null keys and values. These dummies never contain real data. They exist to eliminate null checks. After construction, the list looks like: HEAD <-> TAIL
addFirst(node): Inserts a node right after the head. The order of pointer updates matters: first point the new node at its neighbors (node.prev = head, node.next = head.next), then update the neighbors to point at the new node (head.next.prev = node, head.next = node).
If we updated head.next before using it to set node.next, we'd lose the reference.
remove(node): The beauty of doubly linked lists. We don't need to traverse to find the node's neighbors. We have direct references via node.prev and node.next. We simply make them point to each other, bypassing the removed node.
moveToFront(node): This is the key operation for maintaining LRU order. When an item is accessed, we move it to the front. Rather than implementing special-case logic, we just remove and re-add.
removeLast(): Returns the node just before the tail (the LRU item). We check for empty list by seeing if tail.prev == head (meaning no real nodes exist).
This is the main class that ties everything together. It coordinates the HashMap and DoublyLinkedList to provide O(1) operations.
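The original listing isn't reproduced here; a sketch consistent with the trace below, returning null on a miss as discussed earlier, might look like this:

```java
import java.util.HashMap;
import java.util.Map;

// Coordinates the HashMap (O(1) lookup) and the DoublyLinkedList (O(1) recency updates and eviction).
class LRUCache<K, V> {
    private final int capacity;
    private final Map<K, Node<K, V>> map;
    private final DoublyLinkedList<K, V> list;

    LRUCache(int capacity) {
        this.capacity = capacity;
        this.map = new HashMap<>();
        this.list = new DoublyLinkedList<>();
    }

    // Returns the value if present (and marks it most recently used), otherwise null.
    public synchronized V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) {
            return null;          // cache miss
        }
        list.moveToFront(node);   // this access makes it the most recently used entry
        return node.value;
    }

    // Inserts or updates a key, evicting the least recently used entry if capacity is exceeded.
    public synchronized void put(K key, V value) {
        Node<K, V> existing = map.get(key);
        if (existing != null) {               // Case 1: key already exists
            existing.value = value;           // update in place
            list.moveToFront(existing);       // an update counts as a use
            return;
        }
        if (map.size() >= capacity) {         // Case 2: key is new and the cache is full
            Node<K, V> lru = list.removeLast();
            if (lru != null) {
                map.remove(lru.key);          // the key stored in the node makes this O(1)
            }
        }
        Node<K, V> node = new Node<>(key, value);
        list.addFirst(node);
        map.put(key, node);
    }
}
```

As a quick sanity check, constructing new LRUCache<Integer, String>(3), putting keys 1 through 3, calling get(1), and then putting key 4 should leave get(2) returning null, mirroring the capacity-3 example from earlier in the chapter.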
Let's trace through the logic:
Constructor: Initializes the capacity, creates an empty HashMap, and creates an empty DoublyLinkedList. The cache starts with no entries.
get(key):
1. Look up the key in the HashMap. If it isn't present, return null (or -1) immediately.
2. Otherwise, move the corresponding node to the front of the list, marking it as the most recently used.
3. Return the node's value.
All operations are O(1): HashMap lookup, pointer manipulation in the list.
put(key, value):
Case 1: Key already exists. Update the node's value in place, then move the node to the front of the list, since an update through put counts as a use.
Case 2: Key is new. If the cache is already at capacity, evict first: remove the tail node (the least recently used entry) from the list and delete its key from the HashMap. Then create a new node, add it to the front of the list, and store it in the HashMap.
Thread Safety: Both methods are marked synchronized. This ensures that only one thread can execute either method at a time on the same cache instance. In a high-throughput scenario, you might use more fine-grained locking, but for interview purposes, synchronized demonstrates awareness of concurrency.
The following diagrams illustrate what happens during get and put operations:
Which core combination of data structures is most suitable to achieve O(1) time complexity for both get and put operations in an LRU Cache?