Last Updated: January 15, 2026
In the cache-aside pattern, your application is the orchestrator. It checks the cache, queries the database on misses, populates the cache, and handles invalidation. Every service touching the data needs to implement this logic correctly. Miss one spot, and you have stale data.
Read-through and write-through patterns flip this model. The cache becomes the orchestrator. Your application talks only to the cache. The cache handles database interaction behind the scenes. This simplifies application code but requires a cache layer that understands how to reach your database.
These two patterns often work together: read-through handles fetching data, write-through handles persisting changes. Together, they create a unified data access layer where the cache and database appear as a single system.
In a read-through cache, the application only interacts with the cache. When data is not in the cache, the cache itself fetches it from the database.
Step-by-step (cache miss scenario):
cache.get("user:123")The application code is simple:
For read-through to work, the cache needs a way to fetch data. This is typically configured through a "loader" function:
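A minimal sketch of a loader-configured cache follows. The `ReadThroughCache` class and `load_user` function are illustrative stand-ins, not a real library's API:

```python
# A read-through cache configured with a loader callback.
class ReadThroughCache:
    def __init__(self, loader):
        self._store = {}
        self._loader = loader            # how the cache reaches the database

    def get(self, key):
        if key not in self._store:       # miss: the cache loads, not the app
            self._store[key] = self._loader(key)
        return self._store[key]

def load_user(key):
    # Stand-in for a real database query, e.g. SELECT ... WHERE id = ?
    return {"id": key, "name": "Ada"}

cache = ReadThroughCache(loader=load_user)
user = cache.get("user:123")   # miss: runs load_user, caches the result
user = cache.get("user:123")   # hit: served from memory, no database call
```

The loader is the one place where "how do we fetch this data" is defined; application code never sees it.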
| Advantage | Explanation |
|---|---|
| Simplified application code | No cache miss handling logic in every service |
| Consistent loading logic | One place defines how data is fetched |
| Reduced bugs | Cannot forget to populate cache after a miss |
| Easier testing | Application code has no database dependencies |

| Disadvantage | Explanation |
|---|---|
| Cache layer complexity | Cache must understand your data model |
| Tight coupling | Cache configuration tied to database schema |
| Cold start latency | First access to any key hits database |
| Limited flexibility | Harder to have key-specific caching logic |
Write-through caching ensures that every write goes to both the cache and the database synchronously. The write is only considered complete when both have been updated.
Step-by-step:
cache.set("user:123", userData)The key property: the cache and database are always in sync after a write completes.
Like read-through, write-through requires a "writer" function:
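A minimal write-through sketch, with illustrative names (writing to the database before updating the cache avoids caching a value that failed to persist):

```python
# Write-through: the cache owns persistence via a writer callback.
class WriteThroughCache:
    def __init__(self, writer):
        self._store = {}
        self._writer = writer                  # persists to the database

    def set(self, key, value):
        self._writer(key, value)               # 1) synchronous database write
        self._store[key] = value               # 2) then the cache update
        # only now is the write "complete": cache and database agree

    def get(self, key):
        return self._store.get(key)

db = {}                                        # stand-in for the database
cache = WriteThroughCache(writer=db.__setitem__)
cache.set("user:123", {"name": "Ada"})
```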
These patterns work well together, creating a unified data access layer:
Benefits of combining:

- Reads and writes go through a single interface; the application never talks to the database directly.
- Read-through keeps the cache populated on misses; write-through keeps it fresh on updates.
- Loading and persistence logic each live in one place instead of in every service.
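A minimal combined sketch, assuming illustrative loader and writer callbacks:

```python
# One object, one interface: read-through on get, write-through on set.
class ThroughCache:
    def __init__(self, loader, writer):
        self._store = {}
        self._loader = loader
        self._writer = writer

    def get(self, key):                        # read-through
        if key not in self._store:
            self._store[key] = self._loader(key)
        return self._store[key]

    def set(self, key, value):                 # write-through
        self._writer(key, value)               # database first
        self._store[key] = value               # then cache

db = {"user:123": {"name": "Ada"}}             # stand-in database
cache = ThroughCache(loader=db.get, writer=db.__setitem__)
cache.set("user:123", {"name": "Grace"})
```

From the application's point of view, `cache` is the only data store that exists.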
| Aspect | Cache-Aside | Read-Through | Write-Through |
|---|---|---|---|
| Who manages cache population? | Application | Cache | N/A (for reads) |
| Who manages database writes? | Application | N/A (for writes) | Cache |
| Application complexity | Higher | Lower | Lower |
| Cache layer complexity | Lower | Higher | Higher |
| Write latency | Database only | N/A | Database + cache |
| Consistency model | Eventual (with TTL) | Depends on write pattern | Strong (after write) |
| Cold cache performance | Application handles | Cache handles | N/A |
Use cache-aside when:

- You need key-specific or service-specific caching logic.
- Your cache (for example, plain Redis or Memcached) has no native loader or writer support.
- You want the cache layer itself to stay simple.

Use read-through when:

- You want miss-handling logic out of application code.
- Loading logic should be defined in exactly one place.
- Your cache layer (for example, Hazelcast or Apache Ignite) supports loaders natively.

Use write-through when:

- You need strong read-after-write consistency.
- Users must immediately see their own changes.
- The added synchronous write latency is acceptable.
Write-through adds latency to every write operation: the value must be written to the cache and to the database, one after the other, before the write is acknowledged. With cache-aside, a write is a database write plus a fast cache invalidation, so the total latency is similar, but write-through is strictly sequential. If your cache latency is low, this rarely matters. If your database is slow, every write feels it.
Interview Insight: When asked about write-through latency, acknowledge the sequential nature but note that for most applications the difference is negligible compared to network and database latency. The consistency guarantee is often worth the small overhead.
Each pattern handles failures differently:
If the database is unavailable during a read-through miss, the error propagates to the application. The cache cannot serve what it does not have.
Strategies:

- Serve stale data: keep expired entries around and return them when the loader fails.
- Fail fast: propagate the error and let the application degrade gracefully.
- Circuit breaker: stop hammering a struggling database and return errors (or stale data) immediately until it recovers.
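One such strategy, serving stale data when the database is unavailable, can be sketched as follows (a minimal illustration that keeps entries past their TTL instead of evicting them):

```python
import time

class StaleOnErrorCache:
    def __init__(self, loader, ttl=60):
        self._store = {}                 # key -> (value, expires_at)
        self._loader = loader
        self._ttl = ttl

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                       # fresh hit
        try:
            value = self._loader(key)             # expired or missing: reload
            self._store[key] = (value, time.time() + self._ttl)
            return value
        except Exception:
            if entry:                             # database down: serve stale
                return entry[0]
            raise                                 # nothing cached: propagate
```

Stale data is often preferable to an error page; the trade-off is that users may briefly see outdated values during an outage.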
Write-through failures are trickier because you have two systems to update. If the database write fails, the cache must not keep the new value (evict or roll back the entry); if the cache update fails after the database write succeeds, the cache is stale until it is invalidated or reloaded. Either way, the application needs a clear signal about which state the write actually reached.
Libraries like Caffeine (Java), Guava (Java), and python-cachetools support read-through:
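Those libraries wrap a loader function behind the cache's `get`. As a dependency-free analogue of the same shape, Python's standard-library `functools.lru_cache` turns a loader into a small in-process read-through cache (the `DB` dict here is a stand-in for a real database):

```python
from functools import lru_cache

DB = {"123": "Ada"}                    # stand-in for the database

@lru_cache(maxsize=1024)
def get_user_name(user_id):
    # The decorated body *is* the loader; lru_cache adds the cache in front.
    return DB[user_id]

get_user_name("123")   # miss: executes the body (the "database query")
get_user_name("123")   # hit: served from the cache, body not executed
```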
Redis itself does not support read-through natively, but you can build it:
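A sketch of such a wrapper follows. It assumes a redis-py-style client exposing `get` and `setex`; the `FakeRedis` stub stands in for a real connection so the example is self-contained:

```python
import json

class RedisReadThrough:
    """Read-through wrapper: hit Redis first, fall back to the loader."""
    def __init__(self, client, loader, ttl=300):
        self._client = client
        self._loader = loader
        self._ttl = ttl

    def get(self, key):
        raw = self._client.get(key)
        if raw is not None:
            return json.loads(raw)                     # cache hit
        value = self._loader(key)                      # miss: wrapper loads
        self._client.setex(key, self._ttl, json.dumps(value))
        return value

# In-memory stub standing in for a real Redis connection.
class FakeRedis:
    def __init__(self):
        self._d = {}
    def get(self, k):
        return self._d.get(k)
    def setex(self, k, ttl, v):
        self._d[k] = v

cache = RedisReadThrough(FakeRedis(), loader=lambda k: {"id": k})
user = cache.get("user:123")    # miss: loader runs, result cached in "Redis"
user = cache.get("user:123")    # hit: served from "Redis"
```

Note that this moves the read-through logic into a wrapper your services share; Redis itself still knows nothing about your database.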
Some distributed caches support read-through and write-through natively:
| System | Read-Through | Write-Through |
|---|---|---|
| Hazelcast | Yes (MapLoader) | Yes (MapStore) |
| Apache Ignite | Yes (CacheLoader) | Yes (CacheWriter) |
| Oracle Coherence | Yes | Yes |
| Redis | No (needs wrapper) | No (needs wrapper) |
| Memcached | No | No |
A challenge with read-through is cold cache performance. On startup or cache clear, every key results in a database query.
Passive warming: Accept slow initial performance. Each miss populates the cache. Performance improves over time.
Active warming: Preload frequently accessed keys before traffic arrives.
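A minimal active-warming sketch (function and parameter names are illustrative): iterate over known-hot keys and let the read-through cache populate itself, pausing between loads so the warming traffic does not overwhelm the database.

```python
import time

def warm_cache(cache, hot_keys, delay=0.0):
    for key in hot_keys:
        cache.get(key)        # a miss triggers the cache's loader
        time.sleep(delay)     # simple rate limit on warming queries
```

The hot-key list might come from access logs or a previous cache snapshot; the delay (or a proper rate limiter) is what protects the database during the warming window.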
Interview Insight: When discussing read-through, interviewers may ask about cold start scenarios. Explain warming strategies and how to protect the database during the warming period (rate limiting, gradual traffic increase).
A key difference in write handling:
| Aspect | Cache-Aside | Write-Through |
|---|---|---|
| After write completes | Cache is empty (invalidated) | Cache has new value |
| Next read | Cache miss, fetch from DB | Cache hit |
| Consistency | Eventual (next read gets fresh) | Immediate |
| Write complexity | Two operations (DB + invalidate) | One operation (to cache) |
Write-through eliminates the cache miss after a write: the just-written value is already in the cache, so the next read is a hit.
This is beneficial for read-after-write patterns where users immediately see their changes.
Write-through provides stronger consistency but is not perfect: two concurrent writers can interleave their cache and database updates, leaving the cache holding one writer's value and the database holding the other's.
Mitigation: Use optimistic locking or versioning:
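A versioning sketch with compare-and-set semantics (illustrative, not a specific library's API): a write only lands if the caller's expected version matches the current one.

```python
class VersionedCache:
    def __init__(self):
        self._store = {}                 # key -> (value, version)

    def get(self, key):
        return self._store.get(key)      # returns (value, version) or None

    def set_if_version(self, key, value, expected_version):
        current = self._store.get(key)
        current_version = current[1] if current else 0
        if current_version != expected_version:
            return False                 # a concurrent writer got there first
        self._store[key] = (value, current_version + 1)
        return True
```

A lost update is detected instead of silently overwritten; the losing writer re-reads and retries. The same check would guard the database side, for example a conditional `UPDATE ... WHERE version = ?`.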
If the database has triggers that modify data, write-through may cache pre-trigger values: the cache holds what the application wrote, while the database holds what the trigger produced.
Mitigation: Have the cache fetch the final state from the database after write, or avoid triggers that modify data.
Read-through and write-through patterns move cache management from the application to the cache layer:
- Read-through: the application calls `cache.get()`, and the cache handles database queries on a miss. This simplifies application code but requires configuring a loader function.
- Write-through: the application calls `cache.set()`, and the cache synchronously persists the value to the database, keeping cache and database in sync.

Both read-through and write-through are synchronous patterns. The application waits for all operations to complete.
But what if write latency is critical and you can tolerate some delay in persistence? This is where write-behind caching comes in, trading immediate consistency for better write performance, which we will cover next.