In today’s distributed systems, data is almost never stored in a single place. It’s replicated across multiple servers, often spread across data centers around the world to ensure high availability, fault tolerance, and performance.
This global distribution enables scalability, but it comes with a critical challenge:
How do we ensure every user and system component sees a consistent and accurate view of the data, especially when updates are happening all the time?
This is where consistency models come into play. They define the rules for how and when changes to data become visible across the system.
Two of the most widely discussed models are strong consistency and eventual consistency.
In this article, we'll break down these two models, explore their trade-offs, and discuss when to use each and how to choose the right one for your application's goals.
In a single-server system, consistency is straightforward: when you write data, any subsequent read returns the most recent value. There’s only one copy of the data.
But in distributed systems, where data is replicated across multiple nodes (for availability, fault tolerance, or low latency), things become more complex.
Imagine you write an update to Node A. That change takes time to propagate to Node B and Node C. During this replication lag, different nodes may temporarily hold inconsistent versions of the same data.
Consistency models define the rules and guarantees a system provides about the visibility and ordering of updates across replicas.
In essence, consistency models answer the question:
“If I write a piece of data, when and under what conditions will others (or even I) see that new value?”
Strong consistency guarantees that once a write is successfully completed, any read operation from any client or replica will reflect that write or a newer one.
This means that the system always returns the most up-to-date value, regardless of where the read is performed.
It behaves as if there is a single, globally synchronized copy of the data, and all operations occur in a clear and consistent global order.
To achieve strong consistency, the system performs coordinated communication between replicas before confirming a write, so that every replica agrees on the result before the client is acknowledged.
This behavior is made possible through consensus algorithms that ensure all replicas agree on the order of operations. Common protocols used include Paxos and Raft.
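The coordination idea can be sketched as a toy quorum protocol. This is not Paxos or Raft themselves (those also agree on the *order* of operations); it is a minimal in-memory illustration, with `Replica`, `quorum_write`, and `quorum_read` as hypothetical names, of why a majority write plus a majority read must always overlap on at least one up-to-date replica:

```python
import random

class Replica:
    """A toy replica holding a single versioned value."""
    def __init__(self):
        self.version = 0
        self.value = None

    def apply(self, version, value):
        # Keep the newer version; acknowledge the write.
        if version > self.version:
            self.version, self.value = version, value
        return True

def quorum_write(replicas, version, value):
    """Confirm the write only after a majority of replicas acknowledge it.
    (Real systems only need W acks where W + R > N.)"""
    acks = sum(1 for r in replicas if r.apply(version, value))
    majority = len(replicas) // 2 + 1
    return acks >= majority

def quorum_read(replicas):
    """Read from a majority and return the highest-versioned value:
    any read majority intersects any write majority."""
    majority = len(replicas) // 2 + 1
    sample = random.sample(replicas, majority)
    newest = max(sample, key=lambda r: r.version)
    return newest.value

replicas = [Replica() for _ in range(5)]
assert quorum_write(replicas, 1, "v1")
print(quorum_read(replicas))  # "v1", no matter which majority is sampled
```

The majority-intersection argument is the core reason quorum systems can serve strongly consistent reads without contacting every replica.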
Strong consistency is the right choice when your application needs immediate correctness and absolute accuracy.
It is especially important when inconsistencies could result in lost data, incorrect decisions, or broken trust, for example in financial transactions, inventory management, or booking systems.
Eventual consistency is a weaker consistency model that guarantees all replicas in a distributed system will eventually converge to the same value, provided no new updates are made.
In simpler terms:
If you stop writing to a piece of data, and wait long enough, everyone will eventually see the same (latest) version of that data.
There’s no guarantee about how soon this convergence happens. In the meantime, different replicas may serve different versions of the data, leading to temporary inconsistencies.
This model is often used in distributed systems where high availability and partition tolerance are prioritized over immediate consistency.
In this model, replication is typically asynchronous: a write is acknowledged as soon as one replica accepts it and is propagated to the others in the background. This allows the system to stay highly available and responsive, even in the presence of network partitions or server failures.
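Asynchronous replication and eventual convergence can be shown with a small sketch. The `EventualStore` class below is a hypothetical in-memory model, not a real database: writes are acknowledged immediately on one replica and queued for the others, and `sync()` stands in for the background replication that "eventually" runs:

```python
from collections import deque

class EventualStore:
    """Toy eventually consistent store: writes land on one replica and
    propagate to the others asynchronously via a queue."""
    def __init__(self, n):
        self.replicas = [{} for _ in range(n)]
        self.pending = deque()

    def write(self, replica_id, key, value):
        # Acknowledge immediately; replicate in the background.
        self.replicas[replica_id][key] = value
        for other in range(len(self.replicas)):
            if other != replica_id:
                self.pending.append((other, key, value))

    def read(self, replica_id, key):
        return self.replicas[replica_id].get(key)

    def sync(self):
        # Drain the replication queue ("eventually, everyone converges").
        while self.pending:
            rid, key, value = self.pending.popleft()
            self.replicas[rid][key] = value

store = EventualStore(3)
store.write(0, "avatar", "new.png")
print(store.read(1, "avatar"))   # None: replica 1 hasn't seen the write yet
store.sync()
print(store.read(1, "avatar"))   # "new.png": the replicas have converged
```

The window between `write` and `sync` is exactly the replication lag during which stale reads are possible.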
A temporary inconsistency is acceptable in many scenarios, especially when the data being read is not critical to business correctness and a slight delay in consistency does not harm the user experience.
Let’s say you're using a social media platform and you update your profile picture. You see the new picture right away, but a friend in another region may briefly see the old one until the update reaches their replica.
This temporary inconsistency is acceptable because the correctness of the system doesn't depend on everyone seeing the same thing at the exact same moment.
Eventual consistency is a good fit for applications that require high availability and can tolerate temporary inconsistencies. It is especially effective when systems are distributed globally or need to operate at massive scale.
Common use cases include social media feeds, product catalogs and shopping carts, DNS, and analytics or metrics pipelines.
It's important to note that "eventual consistency" is a broad term. There are stronger forms of eventual consistency that provide better guarantees:
Causal consistency: If operation A causally precedes operation B (e.g., B reads a value written by A), then all processes see A before B. Operations that are not causally related can be seen in different orders.
Example: In a comment thread, replies should appear after the comment they respond to, even if the system is eventually consistent overall.
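The comment-thread example can be sketched as a feed that buffers any reply until the comment it depends on has arrived. `CausalFeed` and its method names are illustrative; real systems typically track causality with vector clocks rather than explicit parent ids:

```python
class CausalFeed:
    """Delivers a reply only after the comment it depends on is visible,
    buffering out-of-order arrivals until then."""
    def __init__(self):
        self.delivered = set()   # ids of comments already shown
        self.visible = []        # texts, in causal order
        self.waiting = []        # (comment_id, depends_on, text)

    def receive(self, comment_id, depends_on, text):
        self.waiting.append((comment_id, depends_on, text))
        self._flush()

    def _flush(self):
        # Repeatedly deliver anything whose dependency is satisfied.
        progress = True
        while progress:
            progress = False
            for item in list(self.waiting):
                comment_id, depends_on, text = item
                if depends_on is None or depends_on in self.delivered:
                    self.visible.append(text)
                    self.delivered.add(comment_id)
                    self.waiting.remove(item)
                    progress = True

feed = CausalFeed()
feed.receive("r1", "c1", "Totally agree!")   # the reply arrives first...
feed.receive("c1", None, "Great article.")   # ...but is shown after its parent
print(feed.visible)  # ['Great article.', 'Totally agree!']
```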
Read-your-writes consistency: After a client performs a write, any subsequent reads by that same client will always reflect the write (or a newer version). Other clients may still see stale data.
Example: You update your profile bio. When you refresh the page, your new bio appears immediately, even if it takes a few seconds for others to see it.
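One common way to provide this guarantee is a client session that remembers the version of each key it wrote and falls back to the primary when a replica is behind. The sketch below is a minimal illustration under that assumption; `Session` and its stores (maps of key to `(version, value)`) are hypothetical:

```python
class Session:
    """Client session providing read-your-writes: if the replica lags
    behind this session's own write, read from the primary instead."""
    def __init__(self, primary, replica):
        self.primary = primary         # key -> (version, value), takes writes
        self.replica = replica         # key -> (version, value), may lag
        self.last_written = {}         # key -> version written in this session

    def write(self, key, value):
        version = self.primary.get(key, (0, None))[0] + 1
        self.primary[key] = (version, value)
        self.last_written[key] = version

    def read(self, key):
        version, value = self.replica.get(key, (0, None))
        if version < self.last_written.get(key, 0):
            # The replica hasn't caught up with our own write yet.
            version, value = self.primary[key]
        return value

primary, replica = {}, {}
session = Session(primary, replica)
session.write("bio", "Distributed systems fan")
# Replication to the replica hasn't happened, yet the session
# still sees its own write:
print(session.read("bio"))  # "Distributed systems fan"
```

Other clients reading only from the lagging replica would still see the old bio until replication completes, which is exactly what the model permits.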
Monotonic reads: If a client reads a value, any future reads by the same client will return the same or a newer value. The client will never see an older version of the data.
Example: You see your post has 10 likes. After refreshing, you might see 10 or 11 likes, but never fewer than 10.
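A simple way to enforce this on the client side is to track the highest version seen per key and skip any replica that would return something older. `MonotonicReader` is an illustrative sketch, assuming each replica exposes `key -> (version, value)`:

```python
class MonotonicReader:
    """Never observe a version older than one this client already saw."""
    def __init__(self):
        self.seen = {}  # key -> highest version observed so far

    def read(self, replicas, key):
        floor = self.seen.get(key, 0)
        # Use the first replica at least as fresh as our high-water mark.
        for r in replicas:
            version, value = r.get(key, (0, None))
            if version >= floor:
                self.seen[key] = version
                return value
        raise RuntimeError("no sufficiently fresh replica available")

fresh = {"likes": (11, 11)}   # replica that already has 11 likes
stale = {"likes": (10, 10)}   # replica still at 10 likes

client = MonotonicReader()
print(client.read([fresh, stale], "likes"))  # 11
print(client.read([stale, fresh], "likes"))  # 11: the stale replica is skipped
```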
Monotonic writes: Writes from the same client are applied in the order they were issued by that client.
Example: You post two comments: “Hello” followed by “World.” Other users will always see “Hello” before “World.”
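A replica can provide this by having the client tag each write with a sequence number and buffering any write that arrives ahead of its predecessors. `OrderedReplica` below is a minimal sketch of that idea, not a production protocol:

```python
class OrderedReplica:
    """Applies one client's writes strictly in issue order,
    buffering any write that arrives over the network early."""
    def __init__(self):
        self.log = []        # applied writes, in order
        self.next_seq = 1    # next sequence number we may apply
        self.buffer = {}     # seq -> value, for out-of-order arrivals

    def receive(self, seq, value):
        self.buffer[seq] = value
        # Apply every write whose predecessors have all been applied.
        while self.next_seq in self.buffer:
            self.log.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1

replica = OrderedReplica()
replica.receive(2, "World")   # arrives first over the network...
replica.receive(1, "Hello")   # ...but "Hello" is still applied first
print(replica.log)  # ['Hello', 'World']
```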
Even in an eventually consistent system, applying client-centric guarantees helps preserve a sense of order, responsiveness, and trust for individual users.
There's no "best" consistency model. The right choice depends heavily on your application's specific requirements:
How critical is it that all users see the most up-to-date, correct data at all times?
Will users notice or care about stale data? Can the UI manage it gracefully?
For systems where users expect immediate feedback, you can often use client-centric guarantees such as read-your-writes to mask replication lag.
Strong consistency reduces confusion, but eventual consistency can be acceptable with the right UX design.
How important is low latency for reads and writes?
Can the system tolerate downtime or errors during network partitions?
Does the system need to support massive scale across regions or data centers?
How much complexity are you willing to handle at the application layer?
What happens if two users write conflicting data at the same time?
In eventually consistent systems, concurrent writes to different replicas must be reconciled when replicas sync.
Common strategies include last-write-wins (resolving by timestamp), version vectors that detect conflicting updates, conflict-free replicated data types (CRDTs) that merge automatically, and application-specific merge logic.
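The simplest of these, last-write-wins, fits in a few lines. Assume each record carries a `(timestamp, node_id, value)` triple (names here are illustrative); the node id breaks timestamp ties deterministically, so every replica picks the same winner regardless of merge order:

```python
def lww_merge(a, b):
    """Last-write-wins: keep the record with the later timestamp.
    Tuple comparison falls back to node_id on a timestamp tie,
    so the outcome is deterministic on every replica."""
    # Each record is (timestamp, node_id, value).
    return max(a, b)

r1 = (1700000000.5, "node-a", "cart: [book]")
r2 = (1700000001.2, "node-b", "cart: [book, pen]")
print(lww_merge(r1, r2)[2])  # "cart: [book, pen]": the later write wins
```

Note the trade-off: LWW is simple and convergent, but the "losing" write is silently discarded, which is why carts and counters often use CRDTs or application-level merges instead.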