
Design LRU Cache

Ashish Pratap Singh

What is an LRU Cache?

LRU stands for Least Recently Used. LRU Cache is a type of cache replacement policy that evicts the least recently accessed item when the cache reaches its capacity.

In performance-critical systems (like web servers, databases, or OS memory management), caching helps avoid expensive computations or repeated data fetching. But cache memory is limited, so when it's full, we need a policy to decide which item to remove.

LRU chooses the least recently accessed item, based on the assumption that:

“If you haven’t used something for a while, you probably won’t need it soon.”


The LRU strategy is both intuitive and effective. It reflects real-world usage patterns. We tend to access the same small subset of items frequently, and rarely go back to older, untouched entries.

This makes LRU a popular default caching policy in many systems where speed and memory efficiency are critical.

Example

Suppose we have an LRU cache with capacity = 3, and we perform the following operations:

  • put(1, A)
  • put(2, B)
  • put(3, C)
  • get(1): returns A and marks key 1 as the most recently used entry
  • put(4, D): the cache is full, so key 2 (the least recently used) is evicted

Final cache state: {3:C, 1:A, 4:D} (in order of usage from least to most recent)

In this chapter, we will explore the low-level design of an LRU cache.

Let's start by clarifying the requirements:

1. Clarifying Requirements

Before starting the design, it's important to ask thoughtful questions to uncover hidden assumptions, clarify ambiguities, and define the system's scope more precisely.

In an interview, this usually takes the form of a short back-and-forth between the candidate and the interviewer about scope, constraints, and expected behavior.

After gathering the details, we can summarize the key system requirements.

1.1 Functional Requirements

  • Support get(key) operation: returns the value if the key exists, otherwise returns null or -1
  • Support put(key, value) operation: inserts a new key-value pair or updates the value of an existing key
  • If the cache exceeds its capacity, it should automatically evict the least recently used item.
  • Both get and put operations should update the recency of the accessed or inserted item.
  • Keys and values should be generic (e.g., <K, V>), provided the keys are hashable.
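
Taken together, these requirements suggest a small generic API. Here is a minimal sketch of what that interface might look like in Java (the interface name Cache is an assumption made for illustration; the concrete LRUCache class is designed in the sections that follow):

```java
// A minimal sketch of the public API implied by the functional requirements.
interface Cache<K, V> {
    // Returns the value for the key, or null if the key is not present.
    V get(K key);

    // Inserts a new key-value pair or updates an existing one; may trigger eviction.
    void put(K key, V value);
}
```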

1.2 Non-Functional Requirements

  1. Time Complexity: Both get and put operations must run in O(1) time on average.
  2. Thread Safety: The implementation must be thread-safe for use in concurrent environments.
  3. Modularity: The design should follow object-oriented principles with clean separation of responsibilities.
  4. Memory Efficiency: The internal data structures should be optimized for speed and space within the defined constraints.

After the requirements are clear, the next step is to identify the core entities that will form the foundation of our design.

2. Identifying Core Entities

Unlike systems that model real-world concepts (such as users, products, or bookings), the design of an LRU Cache is centered around choosing the right data structures and internal abstractions to achieve the required functionality and performance: specifically, O(1) time complexity for both get and put operations.

The core challenge here is twofold:

  1. We need fast key-based lookup for cache reads and updates.
  2. We need fast ordering to track item usage and enforce eviction based on recency.

Let’s break this down.

The Need for Fast Lookup

To efficiently retrieve values by key, we need a data structure that supports constant-time key access.

The natural choice for this is a Hash Map (or Dictionary) since it provides O(1) average-case lookup and update.

The Need for Fast Ordering

The second challenge is maintaining the recency order of cache entries so we can:

  • Move a recently accessed item to the front (marking it as Most Recently Used, or MRU)
  • Remove the least recently used item from the back when the cache exceeds its capacity
  • Insert new items to the front (they're considered most recently used)
  • Perform all of these operations in O(1) time

An array or a list won't work, because inserting or moving elements in the middle involves shifting, which is an O(N) operation.

Instead, we use a Doubly Linked List since it allows fast removal and insertion of nodes from any position.

Each node maintains references to both its prev and next nodes, allowing us to:

  • Remove a node from the list in O(1)
  • Move a node to the head in O(1)
  • Evict the least recently used node from the tail in O(1)
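
To see why removal is O(1), consider this small sketch (class and field names are illustrative, not the final design): detaching a node only requires rewiring its two neighbors, no matter how long the list is.

```java
// Illustrative sketch only: a bare-bones doubly linked list node and an O(1) removal.
class ListNode {
    int value;
    ListNode prev, next;

    ListNode(int value) {
        this.value = value;
    }
}

class RemovalDemo {
    // Detach 'node' by pointing its neighbors at each other; no traversal or shifting.
    // Assumes 'node' has both neighbors; the final design uses dummy head and tail
    // nodes so this always holds.
    static void remove(ListNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }
}
```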

Combining Both Structures

The magic of achieving O(1) for both get and put lies in combining a hash map and a doubly linked list:

  • Hash Map: Provides O(1) lookup. Instead of storing the value directly, it will store a pointer/reference to the node in our Doubly Linked List.
  • Doubly Linked List: Maintains the usage order. The head of the list will always be the Most Recently Used (MRU) item, and the tail will be the Least Recently Used (LRU) item.

This combination gives us the best of both worlds:

  • To find an item, we use the Hash Map to get a direct pointer to its node in O(1).
  • To reorder the item (make it the MRU), we use that pointer to move the node to the head of the Doubly Linked List in O(1).
  • To evict an item, we remove the node from the tail of the list in O(1).
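
Conceptually, the cache class therefore holds just two fields, as in the rough skeleton below (names are assumptions; the Node and DoublyLinkedList classes are defined in the sections that follow):

```java
import java.util.HashMap;
import java.util.Map;

// Rough skeleton only: the hash map stores references to list nodes, while the
// doubly linked list keeps those nodes ordered from MRU (head) to LRU (tail).
class LRUCacheSkeleton<K, V> {
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final DoublyLinkedList<K, V> list = new DoublyLinkedList<>();
}
```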

Additional Core Entities

Beyond the HashMap and Doubly Linked List, we need two more classes to encapsulate and organize our logic:

  • Node: A simple internal class that represents an individual entry in the cache and a node in the linked list.
  • LRUCache: The main class that exposes the public cache API and coordinates all operations.

These entities form the core abstractions of our LRU Cache. They enable the system to maintain a fixed-size cache, support constant-time access and updates, and evict the least recently used entry when necessary.

3. Designing Classes and Relationships

Now that we’ve identified the core entities required to build an LRU Cache, the next step is to translate those entities into a well-structured class design.

This includes defining each class's attributes and responsibilities, establishing clear relationships among them, and visualizing the overall architecture through a class diagram.

Our goal is to ensure that the system remains modular, efficient, and easy to reason about, all while satisfying the requirement of O(1) time complexity for both get and put operations.

3.1 Class Definitions

We’ll start with the simplest components and build up toward the core coordinating class.

Node<K, V>

Represents an individual entry in the cache, and also serves as a node in the doubly linked list.

Attributes:
  • key: K — the unique identifier
  • value: V — the value associated with the key
  • prev: Node<K, V> — reference to the previous node in the list
  • next: Node<K, V> — reference to the next node in the list
Responsibility:

Stores key-value pairs and enables fast movement and removal in the doubly linked list through its pointers.

DoublyLinkedList<K, V>

A utility class that manages the MRU (Most Recently Used) to LRU (Least Recently Used) ordering of cache entries.

Attributes:
  • head: Node<K, V> — dummy head node for easy insertion
  • tail: Node<K, V> — dummy tail node for easy deletion
Methods:
  • addToFront(Node<K, V> node) — adds a node right after the head
  • removeNode(Node<K, V> node) — detaches a node from its current position
  • moveToFront(Node<K, V> node) — removes and re-adds a node to the front
  • removeTail(): Node<K, V> — removes and returns the least recently used node (i.e., the node just before the tail)
Responsibility:

Maintains the usage order of cache entries and supports O(1) operations for insertion, movement, and removal.

LRUCache<K, V>

The main class that provides the public APIs (get and put) and manages the overall cache logic.

Attributes:
  • capacity: int — the maximum number of entries allowed in the cache
  • map: Map<K, Node<K, V>> — maps keys to their corresponding list nodes
  • list: DoublyLinkedList<K, V> — maintains the recency order of nodes
Methods:
  • get(K key): V
  • put(K key, V value): void
Responsibility:

Coordinates all cache operations, including lookup, insertion, eviction, and recency tracking.

3.2 Class Relationships

Composition ("has-a")

  • LRUCache has-a DoublyLinkedList
  • LRUCache has-a Map<K, Node<K, V>>
  • DoublyLinkedList has-a Node<K, V> for both head and tail

Association ("uses-a")

  • LRUCache uses Node to store cache entries and manipulate ordering
  • DoublyLinkedList uses Node to manage the list structure

3.3 Full Class Diagram

4. Implementation

4.1 Node Class

This is a generic doubly linked list node that stores:

  • A key-value pair (to help identify and remove nodes from the map),
  • Pointers to the previous and next nodes.

This enables constant-time addition and removal of nodes from anywhere in the list, a key requirement for efficient LRU eviction.
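
Here is a minimal sketch of such a Node class in Java, following the class design from Section 3.1 (the exact field and constructor layout is an assumption, not necessarily the article's exact code):

```java
// Generic doubly linked list node used as a cache entry.
class Node<K, V> {
    K key;            // kept so the entry can be removed from the map during eviction
    V value;
    Node<K, V> prev;  // previous node in the usage-order list
    Node<K, V> next;  // next node in the usage-order list

    Node(K key, V value) {
        this.key = key;
        this.value = value;
    }
}
```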

4.2 DoublyLinkedList Class

Implements a doubly linked list with dummy head and tail nodes to simplify edge-case handling.

  • addFirst(node): Adds a node right after the head (most recently used position).
  • remove(node): Detaches a node from its current position.
  • moveToFront(node): Moves an existing node to the front, marking it as recently used.
  • removeLast(): Removes and returns the least recently used node (just before the tail).

This list manages access order, with the head representing most recently used and the tail representing least recently used.
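
A possible implementation, reusing the Node sketch from Section 4.1 (method bodies are illustrative):

```java
// Doubly linked list with dummy head/tail nodes so every real node always has
// two neighbors, which keeps the pointer updates free of null checks.
class DoublyLinkedList<K, V> {
    private final Node<K, V> head = new Node<>(null, null); // dummy head (MRU side)
    private final Node<K, V> tail = new Node<>(null, null); // dummy tail (LRU side)

    DoublyLinkedList() {
        head.next = tail;
        tail.prev = head;
    }

    // Insert right after the dummy head: the most recently used position.
    void addFirst(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    // Detach a node from its current position in O(1).
    void remove(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // Mark a node as most recently used.
    void moveToFront(Node<K, V> node) {
        remove(node);
        addFirst(node);
    }

    // Remove and return the least recently used node (just before the dummy tail),
    // or null if the list is empty.
    Node<K, V> removeLast() {
        if (tail.prev == head) return null;
        Node<K, V> lru = tail.prev;
        remove(lru);
        return lru;
    }
}
```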

4.3 LRUCache Class

The LRUCache class uses two main data structures:

  • A hash map for O(1) key lookup.
  • A doubly linked list to maintain the order of use (most recent to least recent).

This ensures that both get() and put() operations run in O(1) time.

get() Method

If the key exists, the corresponding node is moved to the front of the list (most recently used), and its value is returned. If not found, returns null.

put() Method

  • If the key exists, update its value and move it to the front.
  • If the key is new and the cache is full, evict the least recently used item from both the list and the map.
  • Then insert the new node at the front and add it to the map.

This ensures LRU behavior while maintaining constant-time operations.

remove() Method

Allows explicit removal of an entry from the cache, updating both the list and the map.

Thread Safety: The get and put methods are marked as synchronized to ensure that the cache behaves correctly in a multi-threaded environment, preventing race conditions that could corrupt the state of the map and the list.
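
Putting these pieces together, here is one possible sketch of the LRUCache class, combining the Node and DoublyLinkedList sketches above (the method bodies and the HashMap-based field layout are illustrative, not necessarily the article's exact code):

```java
import java.util.HashMap;
import java.util.Map;

// LRU cache: HashMap for O(1) lookup, DoublyLinkedList for O(1) recency updates.
class LRUCache<K, V> {
    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final DoublyLinkedList<K, V> list = new DoublyLinkedList<>();

    LRUCache(int capacity) {
        this.capacity = capacity;
    }

    // Return the value if present and mark the entry as most recently used; null otherwise.
    public synchronized V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null;
        list.moveToFront(node);
        return node.value;
    }

    // Insert or update an entry; evict the least recently used entry if over capacity.
    public synchronized void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {
            node.value = value;
            list.moveToFront(node);
            return;
        }
        if (map.size() >= capacity) {
            Node<K, V> lru = list.removeLast();
            if (lru != null) map.remove(lru.key);
        }
        Node<K, V> fresh = new Node<>(key, value);
        list.addFirst(fresh);
        map.put(key, fresh);
    }

    // Explicitly remove an entry from both the list and the map.
    public synchronized void remove(K key) {
        Node<K, V> node = map.remove(key);
        if (node != null) list.remove(node);
    }
}
```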

4.4 Usage Example

The demo code shows the LRUCache in action and illustrates the eviction policy.
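
For example, a small demo (class name and key/value choices are assumptions) could replay the capacity-3 scenario from the beginning of the chapter:

```java
public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(3);

        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");

        System.out.println(cache.get(1)); // "A" -> key 1 becomes most recently used

        cache.put(4, "D");                // cache is full -> evicts key 2 (least recently used)

        System.out.println(cache.get(2)); // null -> key 2 was evicted
        System.out.println(cache.get(3)); // "C"
        System.out.println(cache.get(4)); // "D"
    }
}
```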
