Why Event Sourcing
The problem with current state
Most applications store the result of what happened, not what happened. A customer's address is "123 Oak Street", but was it always? Did they move from "456 Elm Avenue" last Tuesday, or was it corrected from a typo? The database doesn't know. That information was overwritten the moment someone clicked Save.
Every UPDATE is an act of forgetting.
For simple systems, this is fine. For anything with business consequences (orders, payments, compliance, disputes) it's a ticking time bomb.
What breaks first
A customer calls about order #4821. The order shows Status: Cancelled. But who cancelled it? When? Was it cancelled before or after the payment was captured? Your database says "cancelled." That's all it knows.
So you add a LastModifiedBy column. Then a ModifiedAt column. Then an AuditLog table with before/after snapshots. Now you're maintaining two parallel systems: the "real" data and a shadow copy that tries to remember what the real data forgot.
Then Finance says yesterday's revenue was €42,000. The dashboard says €38,500. Both queried the same tables, but someone modified an order between the two queries. The old state is gone. There's no way to reconstruct what the database looked like at 5pm yesterday.
Then you need to notify the warehouse when something changes. So you add triggers, or a CDC pipeline, or a polling job that compares timestamps. Each solution is brittle because you're trying to derive changes from a system that only stores results.
Events as the source of truth
Event sourcing inverts the model. Instead of storing the current state, you store the sequence of events that produced it:
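Take the address question from the opening. Instead of one overwritten column, the stream keeps every change. A sketch (the event names and shapes here are made up for illustration, not a fixed Nagare schema):

```typescript
// One overwritten address column becomes a history of address events.
// (Event names and fields are illustrative only.)
const addressHistory = [
  { type: "AddressSet",       street: "456 Elm Avenue", at: "2023-03-02" },
  { type: "AddressCorrected", street: "456 Elm Ave",    at: "2023-03-02" },
  { type: "AddressChanged",   street: "123 Oak Street", at: "2024-01-16" },
];

// The current address is simply the last address event...
const current = addressHistory[addressHistory.length - 1].street;
// ...but the move and the typo correction are still on record.
```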
Events are immutable. Once written, they never change. The current state comes from replaying them in order:
State is a left fold over events.
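In code, that fold is an ordinary `reduce`. The event and state shapes below are this page's running order example, not a Nagare API:

```typescript
// Illustrative event and state shapes (not a Nagare API).
type OrderEvent =
  | { type: "OrderPlaced"; price: number; quantity: number }
  | { type: "PaymentCaptured"; amount: number }
  | { type: "OrderCancelled" };

interface OrderState { status: string; paid: number }

const initial: OrderState = { status: "none", paid: 0 };

// One step of the fold: current state + next event => next state.
function apply(state: OrderState, event: OrderEvent): OrderState {
  switch (event.type) {
    case "OrderPlaced":     return { ...state, status: "placed" };
    case "PaymentCaptured": return { ...state, paid: state.paid + event.amount };
    case "OrderCancelled":  return { ...state, status: "cancelled" };
  }
}

// Current state = left fold of apply over the stream, oldest event first.
function replay(events: OrderEvent[]): OrderState {
  return events.reduce(apply, initial);
}
```

Because `apply` is a pure function and the events never change, you can throw the derived state away at any time and reproduce it exactly by replaying.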
This isn't a new idea. Your bank account works this way: a ledger of transactions, not a single balance. Git works this way: a chain of commits, not a snapshot of files.
What you gain
Complete audit trail
You don't add auditing as an afterthought. The event stream is the audit trail. Every change is recorded with what happened, when, and (via metadata) who caused it.
// The event stream tells the full story
OrderPlaced { ProductId: "SKU-42", Quantity: 2, Price: 29.99 } // 2024-01-15 10:23
PaymentCaptured { Amount: 59.98, Method: "credit-card" } // 2024-01-15 10:24
AddressChanged { Street: "123 Oak St", Reason: "customer-request" } // 2024-01-16 09:11
ItemShipped { TrackingNumber: "1Z999AA10123456784" } // 2024-01-17 14:30

No shadow tables. No audit log that can drift out of sync. The events are the truth.
Temporal queries
What was the state of an order at 3pm yesterday? Replay events up to that timestamp. Every address a customer has ever had? Read the event stream. Compare today's inventory with last week's? Replay both.
In a CRUD system, this requires point-in-time backups or a temporal database extension. With event sourcing, it's a fold with a filter.
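That "fold with a filter" can be sketched generically. The helper name `stateAt` is hypothetical; it only assumes each stored event carries a timestamp:

```typescript
// An event with a timestamp. ISO 8601 strings in the same zone compare
// correctly as plain strings, so no date parsing is needed here.
interface Stamped { at: string }

// State "as of" a point in time: keep events up to the cutoff, then fold.
// (Illustrative helper, not a Nagare API.)
function stateAt<E extends Stamped, S>(
  events: E[],
  cutoff: string,
  apply: (state: S, event: E) => S,
  initial: S,
): S {
  return events.filter(e => e.at <= cutoff).reduce(apply, initial);
}
```

The same helper answers "every address a customer has ever had" with a different `apply` that accumulates a list instead of overwriting a field.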
Multiple views from one stream
A single event stream can feed many read models. A customer dashboard shows order status. A warehouse view shows items to pick. A finance report aggregates revenue by period. A search index feeds full-text search. Each reads from the same events, shaped for its own use case.
Each view is a projection: a function that subscribes to events and builds a read model. You can delete one and rebuild it from scratch. Add a new one years later and it processes the entire history.
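A minimal projection sketch for the warehouse view, written as a plain in-process handler rather than any specific Nagare interface (class and event names are illustrative):

```typescript
// Events this projection cares about (illustrative shapes).
type WarehouseEvent =
  | { type: "OrderPlaced"; orderId: string; productId: string }
  | { type: "ItemShipped"; orderId: string };

// A projection: consume events in order, maintain a read model.
// Here the read model is "items the warehouse still needs to pick".
class PickListProjection {
  readonly toPick = new Map<string, string>(); // orderId -> productId

  when(event: WarehouseEvent): void {
    switch (event.type) {
      case "OrderPlaced": this.toPick.set(event.orderId, event.productId); break;
      case "ItemShipped": this.toPick.delete(event.orderId); break;
    }
  }

  // Rebuilding from scratch is just replaying the full history.
  static rebuild(history: WarehouseEvent[]): PickListProjection {
    const projection = new PickListProjection();
    for (const event of history) projection.when(event);
    return projection;
  }
}
```

Note that the projection ignores events it doesn't care about; the finance report and the search index would each keep their own `when` over the same stream.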
Integration without coupling
When another service needs to react to changes in your domain, it subscribes to your events. No polling, no webhooks, no shared database.
This is how the modular monolith works: bounded contexts communicate through events. Today they run in the same process. Tomorrow they can be separate services. The event contract stays the same.
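A toy in-process sketch of that subscription contract (this is not Nagare's actual API; it only shows the shape of publish/subscribe between bounded contexts):

```typescript
// Minimal in-process pub/sub between bounded contexts (illustrative only).
type Handler = (event: unknown) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  // A context registers interest in an event type by name.
  subscribe(type: string, handler: Handler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  // The publishing context never knows who is listening.
  publish(type: string, event: unknown): void {
    for (const handler of this.handlers.get(type) ?? []) handler(event);
  }
}
```

Swapping this in-memory bus for Kafka changes the transport, not the contract: the warehouse still subscribes to `ItemShipped` and never touches the order tables.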
What you give up
Eventual consistency. Read models lag behind the event stream by milliseconds to seconds. If you need immediate read-after-write consistency, read from the aggregate directly instead of a projection.
Schema evolution. Events are immutable, but schemas change. You need a strategy for old event formats. Nagare provides upcasters for this.
Storage growth. Event streams grow forever. For high-volume aggregates, use archiving and snapshots.
Learning curve. Thinking in events rather than state takes time to internalize.
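Of these trade-offs, schema evolution is mechanical enough to sketch. An upcaster translates an old event shape into the current one at read time, so stored events stay immutable while code only handles the latest shape. The interface below is a generic illustration, not Nagare's actual upcaster API:

```typescript
// Two versions of the same event (illustrative shapes).
interface AddressChangedV1 { street: string }                 // old: one field
interface AddressChangedV2 { street: string; reason: string } // current

// Upcaster: old shape in, current shape out, applied when loading events.
// The stored V1 event is never rewritten.
function upcastAddressChanged(old: AddressChangedV1): AddressChangedV2 {
  return { ...old, reason: "unknown" }; // default for data the old schema lacked
}
```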
When to use it
Use event sourcing when business decisions matter: orders, payments, shipments, claims, compliance. When audit trails are required. When you need multiple views of the same data. When integration with other systems is a real concern. When you need to answer "what was true at time T?"
Skip it when the data is simple CRUD with no business rules (user preferences, settings), when there's no value in the history, or when the team doesn't have time to learn the model properly.
The Nagare approach
Nagare (流れ, flow) is built so that event sourcing is approachable without being simplistic. You shouldn't need distributed systems theory to persist your first event. But when you're ready to build projections, integrate with Kafka, or split your monolith into services, the tools are already there.
The framework handles storage, loading, subscriptions, and concurrency. You write the domain: what commands come in, what events come out, and what those events mean.
Next: Core Concepts