Last Updated: January 8, 2026
Traditional databases store the current state of data. When you update a customer's address, the old address is gone. The database tells you what is, but not how it got there.
Event sourcing takes a fundamentally different approach. Instead of storing current state, you store a sequence of events that describe everything that happened. The current state is derived by replaying these events. Nothing is ever deleted or updated in place. Every change is recorded as a new event.
In this chapter, you will learn:

- How event sourcing differs from state-based storage
- How events, streams, and replay work
- How snapshots keep rebuilds fast
- How concurrency control and schema evolution are handled
- How event sourcing pairs with CQRS
- The benefits, trade-offs, and when (not) to use it
Let us compare the two approaches with a concrete example: a shopping cart.
State-based: The database stores the current cart contents. When the user adds an item, you update the row. When they remove an item, you update the row again. History is lost.

Event-sourced: Every action is stored as an immutable event. Current state is derived by replaying all events.
| Aspect | State-Based | Event-Sourced |
|---|---|---|
| Storage | Current state only | Sequence of events |
| Updates | Overwrite previous state | Append new events |
| History | Lost unless explicitly logged | Preserved automatically |
| Audit | Requires separate audit table | Built-in audit trail |
| State | Stored directly | Derived from events |
| Schema changes | Migrate existing data | Events are immutable |
An event is an immutable record of something that happened.
Events are grouped into streams, typically one per aggregate or entity:
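As a sketch of what an event and its stream might look like, here is a hypothetical `ItemAddedToCart` event; the field and stream names are illustrative, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event shape; field names are illustrative, not a standard.
@dataclass(frozen=True)  # frozen=True makes each event record immutable
class ItemAddedToCart:
    cart_id: str
    product_id: str
    quantity: int
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

event = ItemAddedToCart(cart_id="42", product_id="sku-7", quantity=2)

# Streams are commonly named "<aggregate-type>-<id>": one stream per cart.
stream_id = f"cart-{event.cart_id}"
```

The `frozen=True` flag enforces the immutability rule at the language level: once an event is constructed, it cannot be changed.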
| Technology | Characteristics |
|---|---|
| EventStoreDB | Purpose-built for event sourcing, projections built-in |
| Apache Kafka | Distributed log, high throughput, retention policies |
| PostgreSQL | JSONB columns, familiar SQL, ACID transactions |
| DynamoDB | Managed, scalable, append-only patterns |
| Amazon Kinesis | Managed streaming, integrates with AWS |
In event sourcing, the database doesn’t store “the order.” It stores everything that happened to the order.
To get the current order, you derive it by replaying those events.
This process is usually called replaying, rehydrating, or folding the events.
Same idea: events go in, state comes out.
Think of an aggregate (like Order) as a pure function of its history:
`CurrentState = fold(apply, initialState, events)`

where `apply(state, event)` is your event handler.
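The fold above can be sketched in a few lines; the event shapes here are illustrative assumptions, not a prescribed format:

```python
from functools import reduce

# Minimal sketch of CurrentState = fold(apply, initialState, events).
def apply(state, event):
    etype, payload = event
    if etype == "ItemAdded":
        items = dict(state["items"])
        items[payload["sku"]] = items.get(payload["sku"], 0) + payload["qty"]
        return {**state, "items": items}
    if etype == "ItemRemoved":
        items = dict(state["items"])
        items.pop(payload["sku"], None)
        return {**state, "items": items}
    return state  # unknown event types are ignored

events = [
    ("ItemAdded", {"sku": "sku-7", "qty": 2}),
    ("ItemAdded", {"sku": "sku-9", "qty": 1}),
    ("ItemRemoved", {"sku": "sku-9"}),
]
current_state = reduce(apply, events, {"items": {}})
```

Note that `apply` never mutates its input; it returns a new state, which is what makes replay deterministic.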
When the system needs the current state of order-456, it does something like this:

1. Load all events for stream `order-456` from the event store
2. Apply each event, in order, to an empty initial state
3. Return the resulting state

This is why event ordering and versioning matter. You’re rebuilding reality by replaying history.
Replaying events is elegant, but it has a performance cliff.
If an aggregate has thousands of events, rebuilding it on every load becomes expensive.
An order with heavy editing history (item changes, address updates, partial refunds) might have 10,000 events.
Loading that order means:

1. Fetching all 10,000 events from the event store
2. Deserializing each one
3. Applying each one, in order, to rebuild the state
That’s too slow for interactive systems.
A snapshot is a saved checkpoint of the aggregate state at a particular event version.
To load current state:

1. Load the most recent snapshot
2. Load only the events recorded after the snapshot’s version
3. Apply those events on top of the snapshot state

With 10,000 events and a snapshot at version 9,900, you load one snapshot plus 100 events instead of replaying all 10,000.
| Strategy | When to Snapshot |
|---|---|
| Every N events | After every 100 events |
| Time-based | Every hour |
| On command | When explicitly requested |
| On read | If more than N events since last snapshot |
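Snapshot-aware loading can be sketched as follows; the store layout (dicts keyed by version) is an assumption for illustration:

```python
# Sketch: load the latest snapshot, then replay only the events after it.
def load_aggregate(snapshots, events_by_version, apply_event, initial):
    """snapshots: {version: state}; events_by_version: {version: event}."""
    if snapshots:
        version = max(snapshots)
        state = snapshots[version]
    else:
        version, state = 0, initial
    # Replay only the events recorded after the snapshot version.
    for v in sorted(v for v in events_by_version if v > version):
        state = apply_event(state, events_by_version[v])
        version = v
    return state, version

# Toy usage: the "state" is a running total, events are amounts.
apply_event = lambda total, amount: total + amount
snapshots = {2: 3}            # snapshot taken at version 2, state was 3
events = {1: 1, 2: 2, 3: 10}  # version -> event
state, version = load_aggregate(snapshots, events, apply_event, 0)
```

Because the snapshot at version 2 already accounts for events 1 and 2, only event 3 is replayed.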
In event sourcing, commands do not directly modify state. They produce events that are then stored.
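A command handler can be sketched as a pure function from current state and command to a list of events; the names here are illustrative:

```python
# Sketch: (state, command) -> events. The handler validates against the
# current state and records what happened; it never mutates state directly.
def handle_add_item(state, command):
    if command["qty"] <= 0:
        raise ValueError("quantity must be positive")
    return [("ItemAdded", {"sku": command["sku"], "qty": command["qty"]})]

new_events = handle_add_item({"items": {}}, {"sku": "sku-7", "qty": 2})
# new_events would then be appended to the aggregate's stream.
```

Keeping handlers pure makes them trivially testable: given a state and a command, you assert on the emitted events.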
Because the event store is append-only, concurrency control is usually done with versions.
Each aggregate stream has a version:
Suppose the stream is at version 5, and users A and B both try to append:

- User A appends with `expectedVersion=5` → succeeds → stream is now version 6
- User B appends with `expectedVersion=5` → fails (conflict)

Then user B:

- Reloads the stream (now at version 6), re-runs the command against the fresh state, and retries with `expectedVersion=6`

Unlike traditional databases where you migrate data, events are immutable. How do you handle schema changes?
You keep old events as-is, but when reading you transform them into the latest shape.
Include version in event type.
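An upcaster can be sketched like this; the v1-to-v2 address split is a hypothetical example, not a real event schema:

```python
# Upcasting sketch: stored events stay as-is on disk; the reader transforms
# old shapes into the latest one before handing them to the aggregate.
def upcast(event):
    if event["type"] == "AddressChanged" and event.get("schema", 1) == 1:
        # v1 stored a single "address" string; v2 splits street and city.
        street, _, city = event["data"]["address"].partition(", ")
        return {
            "type": "AddressChanged",
            "schema": 2,
            "data": {"street": street, "city": city},
        }
    return event  # already the latest shape

old = {"type": "AddressChanged", "data": {"address": "1 Main St, Springfield"}}
new = upcast(old)
```

Upcasters should be idempotent, so an already-current event passes through unchanged.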
For complex migrations, you can copy events into a new stream, transforming each event as it is written, and then switch readers over to the new stream.
Event sourcing pairs naturally with CQRS:
Write Side: Commands processed by aggregates, events stored in event store.
Read Side: Events projected into multiple read models optimized for queries.
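A projection can be sketched as another fold over the event stream, producing a read model shaped for one specific query; the event names below are illustrative:

```python
# Projection sketch: fold events into a read model optimized for a query
# (here, "how many orders are in each status?").
def project_status_counts(events):
    status = {}  # order_id -> current status
    for etype, order_id in events:
        if etype == "OrderCreated":
            status[order_id] = "created"
        elif etype == "OrderShipped":
            status[order_id] = "shipped"
    counts = {}
    for s in status.values():
        counts[s] = counts.get(s, 0) + 1
    return counts

events = [
    ("OrderCreated", "order-1"),
    ("OrderCreated", "order-2"),
    ("OrderShipped", "order-1"),
]
read_model = project_status_counts(events)
```

In production the projection would consume events incrementally and persist its state, but the shape of the computation is the same fold.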
| Capability | How It Works |
|---|---|
| Replay | Rebuild read models from scratch by replaying events |
| New projections | Add new read models by projecting historical events |
| Temporal queries | "What was the state at time T?" - replay to that point |
| Debugging | See exactly what happened and when |
Event sourcing flips the storage model:
Instead of persisting `order.status = "shipped"`, you persist: `OrderCreated` → `ItemAdded` → `PaymentReceived` → `OrderShipped`
That shift buys you several powerful capabilities.
Every change is recorded. You can see exactly what happened, when, and often why.
Example: Order 456’s history is the stream `OrderCreated` → `ItemAdded` → `PaymentReceived` → `OrderShipped`, each event carrying a timestamp and the actor who caused it.
Because you have the full event stream, you can answer questions about past states.
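A temporal query can be sketched as a replay that stops at the point in time you care about; the timestamped balance events here are a toy example:

```python
# Temporal-query sketch: replay only the events recorded at or before T.
# Events are (timestamp, amount) pairs; the "state" is a running balance.
def state_at(events, t):
    balance = 0
    for ts, amount in sorted(events):
        if ts > t:
            break  # ignore everything that happened after time T
        balance += amount
    return balance

history = [(1, 100), (3, 50), (2, -30)]
balance_then = state_at(history, 2)  # the balance as of time 2
```

The same stream answers "what is the state now?" and "what was the state then?" with no extra bookkeeping.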
When something goes wrong, event sourcing gives you the exact sequence of steps that produced the outcome.
In event sourcing, read models are typically projections derived from events. The huge advantage is:
If you need a new read model, you can build it by replaying history.
Scenario: You want a new dashboard showing order analytics.
Solution:

1. Write a projection that builds an `order_analytics` read model
2. Replay all historical events through it to backfill the analytics

This makes new features safer: you don’t mutate your “source of truth,” you derive from it.
Event sourcing is powerful, but it’s not “free.”
You’re no longer saving objects. You’re appending facts and reconstructing state.
Traditional: call something like `repository.save(order)` and the new state overwrites the old.

Event sourced: decide which events the command produces, append them to the stream, and rebuild state from the stream (or a snapshot) on every load.
More concepts to understand: events, aggregates, projections, snapshots.
In most architectures, you write events to the event store and then update read models asynchronously.
So the timeline looks like:

1. A command is handled and its events are appended to the event store
2. The write is acknowledged to the caller
3. A projection consumes those events and updates the read model
4. Queries against the read model finally reflect the change
That gap might be milliseconds—or seconds—or longer during incidents.
You must design for “write succeeded, read still stale.”
Events are meant to be immutable: once recorded, they represent history.
If you don’t plan for this early, schema evolution becomes painful later.
In a state-based system, you store the latest record. In an event-sourced system, you store the full history.
This is not necessarily a dealbreaker, but you still need:

- A storage plan for ever-growing streams
- Snapshots to keep load times bounded
- Retention or archival policies where regulation allows them
Events are optimized for appending and replay, not ad-hoc querying.
State-based: ad-hoc questions are one query away, e.g. `SELECT * FROM orders WHERE status = 'shipped'`.

Event-sourced: answering the same question means either replaying every stream or maintaining a projection that tracks order status.
Every query pattern needs a projection.
| Scenario | Why Event Sourcing Helps |
|---|---|
| Audit requirements | Complete history built-in |
| Complex domain | Aggregates, business rules, events map to DDD |
| Temporal queries | "What was state at time X?" |
| Event-driven architecture | Events are the native currency |
| Debugging/analytics | Full history for investigation |
| Regulatory compliance | Immutable audit trail |
| Scenario | Why Event Sourcing Hurts |
|---|---|
| Simple CRUD | Massive overkill |
| Ad-hoc queries | Need projections for every query |
| Strong consistency | Read models are eventually consistent |
| Simple domain | Complexity not justified |
| Small team | Learning curve and maintenance overhead |
Banks have used this pattern (the ledger) for centuries.
Every order state change is an event: created, paid, shipped, delivered, returned. Enables: order tracking, analytics, customer service history.
Patient records as events: visits, diagnoses, prescriptions, tests. Enables: complete medical history, temporal queries, compliance.
Event sourcing stores all changes as immutable events rather than overwriting current state:

- Current state is derived by replaying (folding) events
- Snapshots keep replay fast for long streams
- Optimistic version checks handle concurrent writers
- Upcasting and versioned event types handle schema evolution
- CQRS projections turn the event log into queryable read models
Event sourcing represents a fundamentally different approach to data persistence. It trades some simplicity for powerful capabilities around history, auditing, and temporal analysis.