The CRDT Dictionary: A Field Guide to Conflict-Free Replicated Data Types

Back around 2014, I kept hearing about Riak, a distributed database that could survive network partitions and keep accepting writes. Some really interesting companies were using it at massive scale. One of the big selling points was that it could handle concurrent writes without any coordination or consensus. I was intrigued, and I started reading. Underlying all of this was the concept of CRDTs, Conflict-free Replicated Data Types.1

At the time, I was working on a beer startup called Brewtown with a friend: a beer review social site and delivery subscription service. It failed for other reasons, but I was a little too enamored with shiny tech back then, and CRDTs and Riak fit the bill. I kept trying to find excuses to shoehorn CRDT stuff into our codebase when, honestly, we didn’t need any of it. Postgres would’ve been fine. Live and learn.

Anyways, the idea sounded like pure sorcery: data structures that replicate across nodes and merge deterministically, without coordination, without losing information. I got excited, read a few papers, played with some toy implementations… and then we gave up on the beer startup. I didn’t really have a reason to mess with CRDTs for a while.

Fast forward to 2025, I’ve just had Thanksgiving dinner, and I’m curious again. What’s the state of the art? What have I forgotten? Which CRDT should I reach for when? So I’m writing this, both as a refresher for myself and a reference for the next time I need to remember why OR-Sets exist or what WOOT stands for. (“WithOut Operational Transformation.” Yes, really.2)

So, grab a coffee.

Commutative. Replicated. Data Types.

In isolation, all of the words make sense. But the literature quickly gets overwhelming:

Suddenly you’ve moved beyond the simple terms and you’re seeing G-Counters, PN-Counters, LWW-Sets, OR-Sets, 2P-Sets, RGAs, WOOTs, Logoots (wtf?)… each with subtle tradeoffs, each paper assuming you’ve read the previous five.

This guide will hopefully cut through that. We’ll build intuition through interactive demos and concrete examples. You’ll see how merges actually work, watch conflicts resolve (or not resolve), and develop a feel for which CRDT fits which problem.

What You Need to Know

You don’t need a PhD in distributed systems. If you understand:

  • Why network failures happen
  • What “eventual consistency” means
  • Basic set operations (union, intersection)

…you’re good.

The Problem

Picture this: Alice and Bob are both editing a shared counter. Alice increments it. Bob increments it. The network is flaky, so neither sees the other’s change immediately. Later, they reconnect. What should the counter show?

Option 1: Consensus - Use Paxos/Raft to agree on who went first. Works great! Until the network partitions and half your users can’t write because they can’t reach a quorum. Not ideal for offline-first apps.

Option 2: Last-Write-Wins - Use timestamps. Whoever wrote last “wins.” Easy to implement! Except Bob’s increment gets completely erased if Alice’s timestamp was later. Data loss.

Option 3: CRDTs - Design the data structure so that merging is deterministic. Both increments survive. No coordination needed. No data loss. However, you have to be okay with some level of eventual consistency.

What’s the trick? How do CRDTs achieve this?

Roughly speaking, you are working with a CRDT if your merge operation is:

  • commutative (order doesn’t matter)
  • associative (grouping doesn’t matter)
  • idempotent (duplicates don’t matter)

Once your merge operation has these properties, you can use it to guarantee that replicas automatically converge to the same state.

A Quick Detour: Lattices and Why They Matter

Before we dive into specific CRDTs, let’s build some intuition about what makes merging work. In CRDT literature, this is often referred to as a “lattice”.

Think about natural numbers with max as the merge operation. If you have 3 and 5, taking max(3, 5) = 5 makes sense. It doesn’t matter if you compute max(3, max(5, 7)) or max(max(3, 5), 7) - you get 7 either way. And max(5, 5) = 5, so duplicates are harmless.

This forms a partial order: some values are “greater than” others (5 > 3), and there’s a join operation (max) that gives you the least upper bound. The fancy math term is “join-semilattice,” but think of it as: a way to consistently pick “more recent” or “more complete” information.

Here’s the key insight: if your data structure’s states form a lattice, and updates only move “upward” in the ordering, then:

  • You can apply updates in any order
  • You can apply the same update twice
  • Eventually, everyone agrees on the maximum state

Consider a counter where each replica tracks its own count: {A: 3, B: 5}. The partial order is pointwise: {A: 3, B: 5} ≥ {A: 2, B: 5} because each component is greater-or-equal. To join, take the max of each component. This is exactly how the G-Counter CRDT works!

Why does this matter? Because if you can design your data structure so that:

  1. States form a lattice (there’s always a sensible “join”)
  2. Operations only move upward (you can’t un-increment a counter)

Then merging becomes trivial: just take the join. No coordination needed. No conflicts possible. The math guarantees convergence.

Not all CRDTs fit this clean model (some need timestamps or version vectors to determine what’s “greater”), but the lattice intuition often guides the design. When you see merge = unionWith max or merge = union, you’re seeing some pure, beautiful math-brained lattice thinking.
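To make the lattice idea concrete, here’s a minimal Haskell sketch (the class and instance names are mine, not from any standard library): a join-semilattice is just a type with an associative, commutative, idempotent join, and maps of semilattice values join pointwise.

import qualified Data.Map as Map

-- A type with an associative, commutative, idempotent join.
class JoinSemilattice a where
  join :: a -> a -> a

-- Natural numbers under max.
newtype MaxNat = MaxNat Int deriving (Show, Eq)

instance JoinSemilattice MaxNat where
  join (MaxNat x) (MaxNat y) = MaxNat (max x y)

-- Maps join pointwise: exactly the G-Counter merge we'll see shortly.
instance (Ord k, JoinSemilattice v) => JoinSemilattice (Map.Map k v) where
  join = Map.unionWith join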

State-Based vs Operation-Based

Moving on…

There are two fundamental approaches to CRDTs:

State-based CRDTs (CvRDTs) send the entire state to other replicas, which merge it with their local state using a join operation. The state must form a join-semilattice.3

Operation-based CRDTs (CmRDTs) send operations to other replicas, which apply them to their local state. Operations must be commutative when applied concurrently.4

In this guide, we’ll primarily discuss state-based CRDTs, as they’re conceptually simpler and the ideas translate naturally to the operation-based variants.

The Core Laws

For a data structure to be a state-based CRDT, its merge operation must satisfy:

Associativity: (a ⊔ b) ⊔ c = a ⊔ (b ⊔ c), where ⊔ denotes the merge/join operation

Commutativity: a ⊔ b = b ⊔ a

Idempotence: a ⊔ a = a

These properties ensure that:

  • Merging in any order produces the same result
  • Re-receiving the same state is harmless
  • Partial merges can be composed

Additionally, the state must form a monotonic semilattice: updates only move “upward” in the partial order, never downward. This ensures convergence: once all updates have been delivered, all replicas reach the same state.
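These laws are exactly what property-based testing is good at checking. Here’s a hedged sketch with QuickCheck (the Arbitrary instance for Map comes from the quickcheck-instances package), using the max-based map merge we’ll meet again in the G-Counter:

import Test.QuickCheck
import Test.QuickCheck.Instances ()  -- Arbitrary for Map
import qualified Data.Map as Map

type State = Map.Map String Int

merge :: State -> State -> State
merge = Map.unionWith max

prop_assoc :: State -> State -> State -> Bool
prop_assoc a b c = merge (merge a b) c == merge a (merge b c)

prop_comm :: State -> State -> Bool
prop_comm a b = merge a b == merge b a

prop_idem :: State -> Bool
prop_idem a = merge a a == a

main :: IO ()
main = do
  quickCheck prop_assoc
  quickCheck prop_comm
  quickCheck prop_idem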

For the curious, the symbol ⊔ is called “square cup” or square union. I have no idea why the regular union symbol isn’t used. Pointy-headed researchers, I guess.

Anyways, it’s commonly used to denote:

  • Disjoint union - union of sets treated as disjoint
  • Join operation in lattice theory - the least upper bound (supremum) of two elements
  • Merge operation in CRDTs - combining two states by taking their least upper bound

With these foundations in place, let’s explore the CRDT zoo.

G-Counter: Grow-Only Counter

Let’s start with the simplest CRDT: a counter that only goes up.5

The Idea

Instead of storing one global count, each replica tracks its own count. The total is the sum of all replica counts. When replicas merge, they take the max of each replica’s count.

Why max? Because counts only increase. If replica A shows that replica B has counted to 5, and replica B shows it’s counted to 3, we know A has seen newer information. Taking the max ensures we never lose increments.6

Implementation

import qualified Data.Map as Map
import Data.Map (Map)

type ReplicaId = String
type Nat = Int  -- counts stay non-negative by construction

type GCounter = Map ReplicaId Nat

-- The visible value is the sum of every replica's own count.
value :: GCounter -> Nat
value counter = sum (Map.elems counter)

-- Each replica only ever bumps its own entry.
increment :: ReplicaId -> GCounter -> GCounter
increment r counter = Map.insertWith (+) r 1 counter

-- Pointwise max: counts only grow, so max never loses an increment.
merge :: GCounter -> GCounter -> GCounter
merge = Map.unionWith max
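A quick sanity check using the definitions above: two replicas count independently, and merging in either order gives the same total.

demo :: Bool
demo =
  let a = increment "A" (increment "A" Map.empty)  -- replica A counted twice
      b = increment "B" Map.empty                  -- replica B counted once
  in value (merge a b) == 3 && merge a b == merge b a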

Laws and Invariants

The merge operation forms a join-semilattice where the partial order is defined pointwise: c1 ≤ c2 if for all replicas r, c1[r] ≤ c2[r].

  • Associative: max is associative
  • Commutative: max is commutative
  • Idempotent: max(x, x) = x
  • Monotonic: Each replica’s count only increases

Intuition

Think of each replica as having its own tally marks. When replicas sync, they each adopt the maximum tally for each replica they’ve seen. Since tallies only grow, taking the maximum ensures we never lose increments.

Tradeoffs

Advantages:

  • Simple and efficient
  • No metadata overhead beyond replica counts
  • Perfect for increment-only scenarios (page views, likes, etc.)

Disadvantages:

  • Cannot decrement
  • Size grows with number of replicas (though typically small)
  • No garbage collection (all replica counts retained forever)

When to Use

Use G-Counter when you need to count upward-only events in a distributed system: analytics counters, like counts, view counts, or any monotonically increasing metric. (If you need to decrement, well… keep reading.)

Interactive Demo

Try it yourself! Increment counters on different replicas and see how the merge operation works:

[Interactive demo: three G-Counter replicas (A, B, C), each with per-replica counts, an increment button, and merge buttons]

Try it: Click "Increment" on different replicas multiple times without merging. Each replica tracks its own count independently. Then click the merge buttons (← A, ← B, ← C) to sync state. Watch how the merge uses max - you never lose increments, and all replicas eventually reach the same total.

PN-Counter: Positive-Negative Counter

What if we need to decrement? Enter the PN-Counter. The trick is beautifully simple.

Definition

A PN-Counter contains two G-Counters: one for increments, one for decrements:

data PNCounter = PNCounter
  { increments :: GCounter
  , decrements :: GCounter
  }

The value is the difference:

value :: PNCounter -> Int
value (PNCounter inc dec) = value inc - value dec

What I love about PN-Counters is the broader insight they offer: you can often build more complex CRDTs by combining simpler ones.

Operations

Increment (on replica r):

increment r (PNCounter inc dec) = PNCounter (increment r inc) dec

Decrement (on replica r):

decrement r (PNCounter inc dec) = PNCounter inc (increment r dec)

Merge:

merge (PNCounter i1 d1) (PNCounter i2 d2) =
  PNCounter (merge i1 i2) (merge d1 d2)
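A small sketch using the definitions above: a concurrent increment on one replica and a decrement on another both survive the merge.

emptyPN :: PNCounter
emptyPN = PNCounter Map.empty Map.empty

demoPN :: Bool
demoPN =
  let a = increment "A" emptyPN  -- replica A: +1
      b = decrement "B" emptyPN  -- replica B: -1
  in value (merge a b) == 0      -- both operations survive; they cancel out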

Laws and Invariants

Since both components are G-Counters with valid merge operations, the PN-Counter’s merge inherits their properties and forms a semilattice.

Intuition

A PN-Counter is like having two separate tally sheets: one for additions, one for subtractions. The current value is the difference between them. When replicas sync, they merge both sheets independently.

Tradeoffs

Advantages:

  • Supports both increment and decrement
  • Deterministic convergence
  • Simple to understand and implement

Disadvantages:

  • Double the space of a G-Counter
  • Can never truly garbage collect old replica entries
  • No bound on the value range (can overflow)
  • Cannot reset the counter atomically

When to Use

Use PN-Counter for any metric that can increase or decrease over time: inventory counts, resource pools, etc.

Variants

Some implementations use a single map with integer values instead of two separate maps, but the principle is the same.

G-Set: Grow-Only Set

Moving from numbers to collections, we consider the simplest CRDT set.

Definition

A G-Set is simply a set that supports addition but not removal:

import qualified Data.Set as Set
import Data.Set (Set)

type GSet a = Set a

Operations

Add:

add :: Ord a => a -> GSet a -> GSet a
add = Set.insert

Merge:

merge :: Ord a => GSet a -> GSet a -> GSet a
merge = Set.union

Laws and Invariants

Sets with union form a semilattice under the subset relation.

  • Associative: Set union is associative
  • Commutative: Set union is commutative
  • Idempotent: A ∪ A = A
  • Monotonic: Sets only grow

Intuition

Once an element is added to any replica, it will eventually appear in all replicas. There’s no way to remove it.

Tradeoffs

Advantages:

  • Minimal overhead (just the set elements)
  • Simple and efficient
  • Familiar set semantics

Disadvantages:

  • Cannot remove elements
  • Grows unbounded
  • No garbage collection

When to Use

Use G-Set for append-only collections where removal is never needed: event logs, collected tags, or immutable registries.

2P-Set: Two-Phase Set

The natural extension of G-Set to support removal.

Definition

A 2P-Set (Two-Phase Set) contains two G-Sets: one for added elements, one for removed elements:

data TwoPhaseSet a = TwoPhaseSet
  { added :: GSet a
  , removed :: GSet a
  }

An element is in the set if it’s been added but not removed:

member :: Ord a => a -> TwoPhaseSet a -> Bool
member x (TwoPhaseSet a r) = x `Set.member` a && x `Set.notMember` r

Operations

Add:

add x (TwoPhaseSet a r) = TwoPhaseSet (Set.insert x a) r

Remove:

remove x (TwoPhaseSet a r) = TwoPhaseSet a (Set.insert x r)

Merge:

merge (TwoPhaseSet a1 r1) (TwoPhaseSet a2 r2) =
  TwoPhaseSet (Set.union a1 a2) (Set.union r1 r2)
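A sketch of the “once removed, forever removed” behavior, using the definitions above:

demo2P :: Bool
demo2P =
  let base = add "x" (TwoPhaseSet Set.empty Set.empty)  -- "x" added
      b    = remove "x" base                            -- replica B removes it
      a    = add "x" base                               -- replica A re-adds it
  in not (member "x" (merge a b))                       -- merged: still gone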

Laws and Invariants

Bias toward removal: If an element appears in the removed set, it’s not in the 2P-Set, even if it’s also in the added set.

Once removed, forever removed: Once an element is removed at any replica, it will eventually be removed from all replicas and cannot be re-added.

Intuition

The 2P-Set is like marking items in a ledger: you can add entries and you can cross them out, but you can’t un-cross-out an entry. Once something is crossed out (removed), that decision is permanent.

Tradeoffs

Advantages:

  • Supports both add and remove
  • Simple to understand
  • Deterministic convergence

Disadvantages:

  • Cannot re-add removed elements (the “2P” means two-phase: add, then remove, no going back)
  • Both sets grow monotonically (removed items never truly disappear)
  • No garbage collection
  • Not suitable for scenarios where elements might be removed and re-added

When to Use

Use 2P-Set when elements have a lifecycle of “not present → added → removed” and never need to be re-added: task completion tracking, tombstones, or revoked permissions.

LWW-Element-Set: Last-Write-Wins Element Set

What if we want to re-add elements? We need timestamps.

Definition

An LWW-Element-Set associates each element with a timestamp for additions and removals:

type Timestamp = Int  -- wall-clock or logical time; Int keeps the sketch simple

data LWWSet a = LWWSet
  { addTimes :: Map a Timestamp
  , removeTimes :: Map a Timestamp
  }

An element is in the set if its most recent operation was an add:

member :: Ord a => a -> LWWSet a -> Bool
member x (LWWSet adds removes) =
  case (Map.lookup x adds, Map.lookup x removes) of
    (Just t1, Just t2) -> t1 > t2  -- note: a tie goes to the remove here
    (Just _, Nothing) -> True
    _ -> False

Operations

Add (with timestamp t):

add x t (LWWSet adds removes) = LWWSet (Map.insert x t adds) removes

Remove (with timestamp t):

remove x t (LWWSet adds removes) = LWWSet adds (Map.insert x t removes)

Merge:

merge (LWWSet a1 r1) (LWWSet a2 r2) =
  LWWSet (Map.unionWith max a1 a2) (Map.unionWith max r1 r2)
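A sketch of re-adding, using the definitions above: a later add timestamp beats an older remove.

demoReAdd :: Bool
demoReAdd =
  let s0 = add "x" 1 (LWWSet Map.empty Map.empty)
      s1 = remove "x" 2 s0  -- removed at time 2
      s2 = add "x" 3 s1     -- re-added at time 3
  in not (member "x" s1) && member "x" s2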

Laws and Invariants

The merge operation is well-defined because max over timestamps forms a semilattice.

Timestamp monotonicity: Each replica must generate increasing timestamps (typically using wall clocks plus replica IDs as tiebreakers).

Bias: We must decide what happens when add and remove timestamps are equal. Common choices: bias toward add, or bias toward remove.

Intuition

Each element has a timestamp for when it was last added and when it was last removed. The most recent operation wins. When merging, we take the latest add timestamp and latest remove timestamp we’ve seen.

Tradeoffs

Advantages:

  • Supports add, remove, and re-add
  • Can garbage collect old timestamps (carefully)
  • Natural semantics for many applications

Disadvantages:

  • Requires synchronized clocks (or logical clocks with careful replica ID handling)
  • Concurrent add/remove on the same element may surprise users (one operation is discarded)
  • Loses information: if two users concurrently add the same element, only one timestamp survives
  • The “last write wins” semantics mean data loss is possible

When to Use

Use LWW-Element-Set when you need a set with add/remove/re-add capability and can tolerate last-write-wins semantics: user preferences, feature flags, or cached collections where perfect consistency isn’t critical.

Clock Considerations

The biggest pitfall of LWW-Element-Set is clock skew. If replica A’s clock is ahead of replica B’s, then A’s operations will always “win” over B’s, even if B’s operations happened later in real time. Solutions include:

  • Use hybrid logical clocks (HLC) instead of wall clocks
  • Use replica IDs as tiebreakers (e.g., timestamps are (wall_time, replica_id) pairs; see the sketch below)
  • Accept the inconsistency as a tradeoff
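For the tiebreaker option, a minimal sketch (the type and field names are mine): deriving Ord on a record compares fields in order, so equal wall-clock times fall through to the replica ID, making ties deterministic.

data TS = TS
  { wallTime :: Int      -- e.g., milliseconds since epoch
  , tsReplica :: String  -- deterministic tiebreaker
  } deriving (Eq, Ord, Show)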

OR-Set: Observed-Remove Set

The most sophisticated set CRDT, solving the re-add problem without LWW semantics.7

Definition

An OR-Set (Observed-Remove Set) associates each element with a set of unique tags:

type Tag = (ReplicaId, Int)  -- e.g., (replica id, sequence number)

type ORSet a = Map a (Set Tag)

Tags are unique identifiers generated when adding an element (e.g., (replica_id, sequence_number) pairs).

An element is in the set if it has any tags:

member :: Ord a => a -> ORSet a -> Bool
member x set = case Map.lookup x set of
  Just tags -> not (Set.null tags)
  Nothing -> False

Operations

Add (with fresh tag t):

add x t set = Map.insertWith Set.union x (Set.singleton t) set

Remove (with observed tags ts):

remove x ts set = Map.update (\tags ->
  let remaining = tags `Set.difference` ts
  in if Set.null remaining then Nothing else Just remaining) x set

The critical insight: removal removes only the tags that were observed. If concurrent adds create new tags, those survive.

Merge:

merge = Map.unionWith Set.union
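A sketch of add-wins in action, using the definitions above: replica A removes only the tag it has observed, so replica B’s concurrent tag survives the merge.

demoAddWins :: Bool
demoAddWins =
  let a  = add "apple" ("A", 1) Map.empty             -- A adds apple
      b  = add "apple" ("B", 1) Map.empty             -- B adds it concurrently
      a' = remove "apple" (Set.singleton ("A", 1)) a  -- A removes what it saw
  in member "apple" (merge a' b)                      -- True: B's tag survives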

Laws and Invariants

The merge operation forms a semilattice where s1 ≤ s2 if for all elements x, s1[x] ⊆ s2[x].

Add wins: If an add and remove happen concurrently (the add’s tag wasn’t observed by the remove), the add wins.

Causal consistency: You can only remove tags you’ve observed (seen in a prior state).

Intuition

Think of each addition as dropping a unique token into a bucket for that element. Removal takes specific tokens out of the bucket. If someone concurrently added a new token you haven’t seen, your removal doesn’t affect it. An element is present if its bucket has any tokens.

This gives us add-wins semantics: concurrent add and remove means the element stays in the set (because the remove didn’t observe the add’s tag).

Tradeoffs

Advantages:

  • Supports add, remove, and re-add with intuitive semantics
  • No timestamp requirements
  • Add-wins semantics are often more desirable than LWW
  • Properly handles concurrent operations

Disadvantages:

  • Larger space overhead (tags per element)
  • More complex implementation
  • Need garbage collection strategy for tags
  • Remove operations need to carry the observed tags (larger messages)

When to Use

Use OR-Set when you need a set with full add/remove/re-add support and can’t tolerate LWW’s data loss: collaborative editing, shopping carts, or any scenario where concurrent adds should be preserved.

Garbage Collection

Old tags can accumulate. Strategies include:

  • Tombstones: Keep removed tags for a grace period before discarding
  • Version vectors: Use causal history to determine which tags are safe to remove
  • Bounded tags: Limit the number of tags per element, using LWW within that bound

Interactive Demo

Experience the add-wins semantics of OR-Set:

[Interactive demo: two OR-Set replicas (A, B) with add/remove controls and merge buttons]

Try it: Add the same element (e.g., "apple") on both replicas without merging first. Each creates a unique tag. Then remove it from one replica and immediately merge. Notice the element stays because the remove only deleted the tags it could see - the other replica's tag survives. This is add-wins semantics.

LWW-Register: Last-Write-Wins Register

Registers store single values. The simplest register CRDT uses last-write-wins.

Definition

An LWW-Register pairs a value with a timestamp:

data LWWRegister a = LWWRegister
  { value :: a
  , timestamp :: Timestamp
  }

Operations

Write (with timestamp t):

write x t _ = LWWRegister x t

Merge:

merge r1@(LWWRegister v1 t1) r2@(LWWRegister v2 t2)
  | t1 > t2 = r1
  | t1 < t2 = r2
  | otherwise = r1  -- tiebreaker (could use replica ID)

Laws and Invariants

The merge operation is a semilattice with partial order defined by timestamps.

One value wins: When concurrent writes occur, only one survives (the one with the higher timestamp).

Tradeoffs

Advantages:

  • Simple and efficient
  • Small size (just value + timestamp)
  • Easy to understand

Disadvantages:

  • Loses concurrent updates
  • Requires clock synchronization
  • No way to detect or recover lost updates

When to Use

Use LWW-Register for single-value cells where you can tolerate lost updates: user profile fields, configuration settings, or cached computed values.

Interactive Demo

See data loss in action with last-write-wins semantics:

[Interactive demo: two LWW-Register replicas (A, B) with a shared global clock, write controls, and merge buttons]

⚠️ Data Loss Alert: Write different values on replica A and B without merging. Each gets its own timestamp. Then merge A into B (or vice versa). Watch one value completely disappear! The higher timestamp wins. This is why LWW is dangerous for important data - concurrent writes = lost data.

MV-Register: Multi-Value Register

What if we want to preserve concurrent writes instead of discarding them?

Definition

An MV-Register stores a set of value-timestamp pairs:

type MVRegister a = Set (a, Timestamp)

When reading, you get back all concurrently written values (values with incomparable timestamps).

Operations

Write (with timestamp t):

write x t reg = Set.singleton (x, t)

Merge:

merge reg1 reg2 =
  let combined = Set.toList (Set.union reg1 reg2)
      maxTime  = maximum (map snd combined)
      newest   = filter (\(_, t) -> t == maxTime) combined
  in Set.fromList newest

More sophisticated: keep values with causally concurrent timestamps, not just the maximum.

Laws and Invariants

The merge preserves all values that might be “current” from different replicas’ perspectives.

Concurrent values preserved: If two writes happened concurrently, both values appear until a subsequent write supersedes them.

Tradeoffs

Advantages:

  • No data loss on concurrent updates
  • Application can detect and resolve conflicts
  • More information available for conflict resolution

Disadvantages:

  • Returns sets of values, not single values
  • Application must handle conflict resolution
  • More complex semantics
  • Slightly larger space overhead

When to Use

Use MV-Register when concurrent updates must be detected and resolved by application logic: collaborative text fields, conflict-aware configuration, or any scenario where losing an update is unacceptable.

Conflict Resolution

When reading an MV-Register returns multiple values, the application must resolve the conflict. Strategies include:

  • Present all values to the user (collaborative editing)
  • Apply a deterministic merge function (e.g., union of tags)
  • Use application-specific semantics (e.g., prefer non-empty values)

OR-Map: Observed-Remove Map

Maps are common. How do we make them CRDTs?

Definition

An OR-Map is a map where each key is associated with an OR-Set of tagged values:

type ORMap k v = Map k (ORSet (v, Tag))

Alternatively, implement as a composition of OR-Set (for keys) with per-key CRDTs (for values).

Operations

Put (with fresh tag t):

put k v t map = Map.insertWith union k (singleton (v, t)) map

Remove key:

removeKey k map = Map.delete k map

Remove specific value:

removeValue k v tags map = -- similar to OR-Set remove

Merge:

merge = Map.unionWith orSetMerge  -- orSetMerge: the OR-Set merge from the previous section

Tradeoffs

Advantages:

  • Full map operations with CRDT semantics
  • Can nest other CRDTs as values
  • Compositional

Disadvantages:

  • Complex metadata management
  • Garbage collection challenges
  • Larger overhead

When to Use

Use OR-Map when you need a distributed key-value store with CRDT guarantees: collaborative JSON documents, distributed configuration, or nested data structures.

RGA: Replicated Growable Array

Sequences are hard. How do you handle insertions in the middle when replicas disagree on positions?8

Definition

RGA (Replicated Growable Array) assigns each element a unique ID and stores the sequence as a tree structure based on insertion order and causality.9

data RGA a = RGA
  { elements :: Map UID (a, UID)  -- element ID -> (value, parent ID)
  , root :: UID
  }

Each element knows its “parent” (the element after which it was inserted).

Operations

Insert (after element with ID p, with fresh ID uid):

insert p x uid rga = -- complex tree manipulation

Delete (element with ID uid):

delete uid rga = -- mark as tombstone, don't actually remove

Merge: Merge trees by reconciling insertion orders.

Laws and Invariants

The challenge is that positional indices change as elements are inserted/removed. RGA solves this by using immutable IDs and causal relationships.

Causal order preserved: If element A was inserted before element B on the same replica, that relationship is preserved globally.

Intuition

Instead of “insert at position 5,” you say “insert after element X.” Since X has a unique ID, this instruction is unambiguous even when other replicas are concurrently inserting elsewhere.

Tradeoffs

Advantages:

  • Supports arbitrary insertions and deletions
  • Eventual consistency for sequences
  • Handles concurrent edits intuitively

Disadvantages:

  • Complex implementation
  • Large overhead (IDs, tombstones)
  • No compaction without coordination
  • Performance degrades with many deletes (tombstones accumulate)

When to Use

Use RGA for collaborative text editing or any replicated sequence where insertions at arbitrary positions must be supported: shared lists, collaborative documents, or distributed queues.

Alternatives

Other sequence CRDTs include:

  • WOOT (Without Operational Transformation): similar idea, different structure
  • Logoot: uses position identifiers between elements
  • LSEQ: adaptive allocation of position identifiers
  • YATA: optimizations for text editing workloads10

Each has different tradeoffs in space overhead, time complexity, and behavior under specific edit patterns.

Causal CRDTs: Adding Causality

Advanced CRDTs incorporate causal tracking using version vectors or similar mechanisms. This enables more sophisticated semantics.

Version Vectors

A version vector tracks the logical clock for each replica:11

type VersionVector = Map ReplicaId Nat

Operations include the version vector, allowing replicas to determine causality: whether one operation happened-before another, or whether they were concurrent.
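A sketch of that comparison, using the VersionVector type above (missing entries count as zero):

happenedBefore :: VersionVector -> VersionVector -> Bool
happenedBefore v1 v2 =
  v1 /= v2 &&
  all (\(r, n) -> n <= Map.findWithDefault 0 r v2) (Map.toList v1)

-- Neither dominates the other: the operations were concurrent.
concurrent :: VersionVector -> VersionVector -> Bool
concurrent v1 v2 =
  not (happenedBefore v1 v2) && not (happenedBefore v2 v1) && v1 /= v2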

Causal Register

Pairs an MV-Register with version vectors:

data CausalRegister a = CausalRegister
  { values :: Map VersionVector a
  }

Only keeps values with concurrent version vectors, discarding those that are causally dominated.

Advantages of Causality

  • More precise conflict detection (concurrent vs. causally ordered)
  • Better garbage collection (can discard superseded operations)
  • Foundation for stronger consistency guarantees

Disadvantages of Causality

  • Larger metadata (version vectors grow with number of replicas)
  • More complex logic
  • Still doesn’t eliminate conflicts, just detects them more precisely

Interactive Demo

Explore how version vectors track causality:

[Interactive demo: three replicas (A, B, C) showing vector clocks, with “Perform Operation” and “Receive from” controls]

Try it: Click "Perform Operation" on replica A, then on replica B (without clicking "Receive from"). These operations don't know about each other - they're concurrent! Now click two operations in the history to compare their vector clocks. Concurrent operations show as yellow warnings - they need special conflict resolution.

Delta CRDTs: Efficient State Transmission

State-based CRDTs have a problem: sending the entire state on every sync is wasteful. Delta CRDTs solve this.12

The Problem

Consider a G-Counter with 1000 replicas. If replica A increments its count, must it send all 1000 entries to replica B? That’s inefficient: only one entry changed!

The Solution

Instead of sending full state, send only the delta: the part of the state that changed since the last sync.

type Delta a = a  -- same type as state, but represents only changes

merge :: CRDT a => a -> Delta a -> a

For G-Counter, a delta might be just {A: 1} instead of the full map.

Definition

A Delta CRDT extends a state-based CRDT with delta operations:

data DeltaCRDT a = DeltaCRDT
  { state :: a
  , lastSent :: Map ReplicaId a  -- track what we've sent to each replica
  }

delta :: ReplicaId -> DeltaCRDT a -> Delta a
delta replica crdt = state crdt `since` (lastSent crdt Map.! replica)
  -- `since` is pseudocode: the part of the state the peer hasn't seen yet

Laws and Invariants

Delta CRDTs must satisfy the same semilattice properties as regular state-based CRDTs, plus:

Delta-state equivalence: Merging deltas incrementally must be equivalent to merging full states.

Delta composition: Deltas can be composed: delta1 ⊔ delta2 is itself a valid delta.

Tradeoffs

Advantages:

  • Dramatically reduced bandwidth (send only changes)
  • Same convergence guarantees as state-based CRDTs
  • Can batch multiple deltas together
  • Easier to implement than operation-based CRDTs

Disadvantages:

  • Must track what has been sent to each replica
  • Slightly more complex than pure state-based
  • Still need full state for new replicas joining

When to Use

Use Delta CRDTs when network bandwidth is a concern or state size is large. Most production CRDT systems use delta-state internally (Riak, Automerge). If you’re implementing your own CRDT system from scratch, start with deltas. Your future self will thank you.

Example: Delta G-Counter

increment :: ReplicaId -> GCounter -> (GCounter, Delta GCounter)
increment r counter =
  let newCounter = Map.insertWith (+) r 1 counter
      delta = Map.singleton r (newCounter Map.! r)  -- only this replica's entry
  in (newCounter, delta)

The delta is just the single updated entry, not the entire counter. Note that the delta carries the replica’s full new count rather than a bare “+1”: deltas are applied with the same idempotent max-join as full states, so a delta that merely said “add one” would be lost or double-counted depending on delivery.
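A quick equivalence sketch: folding the deltas into an empty remote replica reproduces the full local state.

demoDelta :: Bool
demoDelta =
  let (s1, d1) = increment "A" Map.empty
      (s2, d2) = increment "A" s1
      remote   = foldl (Map.unionWith max) Map.empty [d1, d2]
  in remote == s2  -- incremental deltas == full-state merge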

WOOT: Without Operational Transformation

WOOT is a sequence CRDT that predates RGA, with different design choices.2

Definition

WOOT represents a sequence as a set of character objects with unique IDs, where each character stores references to its previous and next characters:

data WChar a = WChar
  { charId :: UID
  , value :: a
  , prevId :: UID
  , nextId :: UID
  , isVisible :: Bool
  }

type WOOT a = Set (WChar a)

Key Insight

Instead of storing a linear sequence, WOOT stores constraints: “this character comes after X and before Y.” When multiple characters claim to be between X and Y, a deterministic ordering (based on UID) resolves the conflict.

Operations

Insert (after character with ID prev, before character with ID next):

insert :: a -> UID -> UID -> UID -> WOOT a -> WOOT a
insert val uid prev next woot =
  Set.insert (WChar uid val prev next True) woot

Delete (character with ID uid):

delete :: UID -> WOOT a -> WOOT a
delete uid woot = -- mark character as invisible, don't remove

Linearization

To read the sequence, perform a topological sort respecting the prev/next constraints, filtering out invisible characters.

Tradeoffs

Advantages:

  • Strong eventual consistency
  • No need for causal delivery (constraints handle ordering)
  • Intuitive model (characters reference neighbors)

Disadvantages:

  • Tombstones accumulate (deleted characters remain)
  • Linearization has O(n²) worst case
  • More complex than RGA
  • UIDs must be globally unique and ordered

Comparison with RGA

  • RGA: Uses a tree structure, parent-child relationships
  • WOOT: Uses bidirectional constraints, more flexible but slower linearization

When to Use

WOOT is primarily of historical interest. Modern implementations prefer RGA or YATA for better performance. But it’s a neat design, and the name alone makes it worth knowing about.

Logoot: Scalable Position Identifiers

Logoot takes a different approach to sequences: instead of linking elements, assign each element a position in a dense order.13

Definition

Each element has a position identifier that is a sequence of (digit, replicaId) pairs:

type Position = [(Int, ReplicaId)]

data LogootElement a = LogootElement
  { position :: Position
  , value :: a
  , isDeleted :: Bool
  }

type Logoot a = Set (LogootElement a)

Positions are ordered lexicographically.

Key Insight

Positions form a dense order: between any two positions, you can always allocate a new position. To insert between positions p1 and p2, generate a new position p such that p1 < p < p2.

Operations

Insert (between positions before and after):

insert :: a -> Position -> Position -> ReplicaId -> Logoot a -> Logoot a
insert val before after replicaId logoot =
  let newPos = allocatePosition before after replicaId
      element = LogootElement newPos val False
  in Set.insert element logoot

Position Allocation:

allocatePosition :: Position -> Position -> ReplicaId -> Position
allocatePosition before after replicaId =
  -- Find a position between before and after
  -- Use replicaId as tiebreaker for deterministic ordering

Delete:

delete :: Position -> Logoot a -> Logoot a
delete pos logoot = -- mark element at pos as deleted

Laws and Invariants

Deterministic ordering: Elements are always ordered by their positions.

Unique positions: Each insert generates a unique position (using replica ID in the position).

Tradeoffs

Advantages:

  • No need to reference other elements by ID
  • Simpler merge than WOOT
  • Positions are self-describing (no need to look up IDs)
  • Can insert without knowing the full document structure

Disadvantages:

  • Position identifiers grow over time (especially with many edits)
  • Still accumulates tombstones
  • Position allocation algorithm is complex
  • Pathological cases where positions become very long

LSEQ: Adaptive Positions

LSEQ improves on Logoot by using an adaptive allocation strategy. Instead of always allocating positions the same way, LSEQ alternates between strategies to keep positions shorter on average.14

When to Use

Use Logoot/LSEQ when you need a sequence CRDT and want simpler semantics than RGA/WOOT. The tradeoff is position identifier growth.

Tree CRDTs: Hierarchical Data

Extending CRDTs to trees is challenging because parent-child relationships must be maintained consistently.

The Problem

Trees have structural constraints:

  • Each node has exactly one parent (except root)
  • No cycles allowed
  • Moving a node changes parent-child relationships

How do we handle concurrent operations like:

  • Two replicas move the same node to different parents?
  • One replica moves node A under node B while another moves B under A?

Approaches

OR-Tree: Combine OR-Set with parent pointers, using conflict resolution strategies when multiple parents are observed.

CRDT-Tree: Use causal ordering to determine which move operations take precedence.

Log-based Trees: Store operations in a replicated log and rebuild tree structure on read.

OR-Tree Definition

type ORTree a = Map NodeId (ORSet ParentId, a)

Each node stores an OR-Set of potential parents. Conflict resolution:

  • Last-write-wins: Use timestamps to pick winning parent
  • First-wins: The first parent observed wins
  • Merge: Allow nodes to have multiple parents temporarily, application resolves

Tradeoffs

Advantages:

  • Can represent hierarchical data distributedly
  • Handles concurrent structural changes

Disadvantages:

  • Complex conflict resolution strategies
  • Must prevent cycles (may require rejecting some operations)
  • Moving subtrees is complicated
  • High metadata overhead

When to Use

Use Tree CRDTs for file systems, organizational charts, or document outlines where the hierarchy must be replicated. Be prepared for complexity in handling concurrent structural changes.

Alternatives

For many use cases, an OR-Map with explicit parent fields is simpler than a full Tree CRDT, even if it doesn’t enforce tree constraints at the CRDT level.

Observed-Remove Shopping Cart

A practical example combining multiple CRDT concepts.

The Domain

An e-commerce shopping cart must support:

  • Add product to cart
  • Remove product from cart
  • Change quantity
  • Work offline and sync later

Naive Approach: LWW Map

type CartLWW = Map ProductId (Int, Timestamp)

Problems:

  • Concurrent additions of the same product (one wins)
  • Remove on one device, add on another (one wins, data loss)

Better: OR-Set + PN-Counter

type ShoppingCart = Map ProductId PNCounter

The design:

  • Use OR-Set semantics for which products are in cart
  • Use PN-Counter for quantities
  • Add-wins semantics for products (if concurrently added and removed, item stays)
  • Quantities merge correctly (concurrent +1 and +2 becomes +3)

Operations

Add product with quantity:

addToCart :: ProductId -> Int -> ReplicaId -> ShoppingCart -> ShoppingCart
addToCart pid qty replica cart =
  let counter = Map.findWithDefault emptyCounter pid cart
      incremented = incrementN replica qty counter  -- incrementN: increment qty times
  in Map.insert pid incremented cart

Remove product:

removeFromCart :: ProductId -> ShoppingCart -> ShoppingCart
removeFromCart pid cart = Map.delete pid cart
  -- simplified; a full OR-Map remove would delete only the observed tags

Change quantity:

changeQuantity :: ProductId -> Int -> ReplicaId -> ShoppingCart -> ShoppingCart
changeQuantity pid delta replica cart =
  let counter = Map.findWithDefault emptyCounter pid cart
      updated = if delta > 0
                then incrementN replica delta counter
                else decrementN replica (-delta) counter
  in Map.insert pid updated cart
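The cart’s merge was left implicit above; here’s a hedged sketch, where mergePN stands for the PN-Counter merge defined earlier:

mergeCarts :: ShoppingCart -> ShoppingCart -> ShoppingCart
mergeCarts = Map.unionWith mergePN  -- per-product quantities merge via PN-Counter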

Tradeoffs

Advantages:

  • Handles all operations correctly
  • No data loss on concurrent modifications
  • Intuitive semantics for users

Disadvantages:

  • PN-Counters can go negative (need validation)
  • Must track all replicas (for PN-Counter)
  • Slightly more overhead than simple LWW

This example shows how combining basic CRDTs creates sophisticated application-level data structures.

Practical Considerations

Choosing a CRDT

The choice of CRDT depends on your requirements:

Do you need only additions? Use G-Counter or G-Set.

Do you need removals but not re-additions? Use 2P-Set.

Can you tolerate last-write-wins? Use LWW-Element-Set or LWW-Register.

Do you need to preserve concurrent operations? Use OR-Set or MV-Register.

Do you have sequences? Use RGA or similar sequence CRDT.

Do you need nested structures? Use OR-Map with nested CRDTs.

Garbage Collection

Garbage collection is one of the most challenging practical problems with CRDTs. The fundamental tension: CRDTs achieve convergence by monotonically accumulating information, but production systems can’t grow unbounded forever.

The Problem in Detail

Consider an OR-Set used for a collaborative todo list. Each time someone adds a task and removes it, we accumulate:

  • A unique tag for the addition (never removed)
  • A tombstone tracking the removal (never removed)

After 10,000 tasks have been created and completed, our “empty” todo list still contains 10,000 tags worth of metadata. In a G-Counter tracking page views, we keep a separate count for every replica that has ever incremented the counter—even if that replica hasn’t been online in years. For sequence CRDTs like RGA or WOOT, every deleted character becomes a tombstone that must be retained indefinitely. A 1000-character document that’s been heavily edited might internally contain 50,000 tombstones.

The core issue: CRDTs converge by retaining enough information to handle any possible merge. If replica A discards metadata about some operation, and replica B (which has been offline for weeks) later tries to merge its state—which still references that metadata—the merge may produce incorrect results.

Why Can’t We Just Delete Old Data?

Let’s make this concrete with an OR-Set example:

-- Replica A's state
orset_a = {"todo-1": {tag_1, tag_2}}

-- Replica B's state (has been offline)
orset_b = {"todo-1": {tag_1, tag_2, tag_3}}

-- Replica A removes todo-1, observing tags {tag_1, tag_2}
-- Now A's state is:
orset_a = {}

-- If A garbage collects and forgets about tags {tag_1, tag_2},
-- then later merges with B:
merge(orset_a, orset_b) = {"todo-1": {tag_3}}

-- The element reappears! (Zombie resurrection)

The element we removed comes back because we lost the causal information about which tags we had observed and removed. This is the fundamental safety problem with CRDT garbage collection.

Strategies and Tradeoffs

Time-Based Expiry

The simplest approach: discard metadata older than some threshold (e.g., 30 days). This works well when you can guarantee all replicas sync within that window.

gcTombstones :: Timestamp -> ORSet a -> ORSet a
gcTombstones cutoff set =
  -- Remove tags older than cutoff (tagTime: assumed accessor for a
  -- tag's creation time)
  Map.mapMaybe (\tags ->
    let recent = Set.filter (\t -> tagTime t > cutoff) tags
    in if Set.null recent then Nothing else Just recent) set

Advantages:

  • Simple to implement
  • No coordination required
  • Works well for frequently-syncing systems

Disadvantages:

  • Unsafe if replicas can be offline longer than the grace period
  • Must choose grace period conservatively (wasted space)
  • Zombie resurrection if threshold is too aggressive

When to use: Mobile apps where you can bound offline time (e.g., “you must sync at least once per week”).

Coordinated Garbage Collection

Use distributed consensus to agree on what’s safe to discard. Once all replicas acknowledge they’ve received a particular update, the corresponding metadata can be safely removed.

data GCState = GCState
  { pendingGC :: Set Tag  -- Tags eligible for GC
  , replicaAcks :: Map ReplicaId (Set Tag)  -- What each replica has seen
  }

-- When all replicas have acked a tag, it's safe to remove
safeToDiscard :: GCState -> Set Tag
safeToDiscard (GCState pending acks) =
  -- Tags that all known replicas have acknowledged
  Set.filter (\tag -> all (Set.member tag) (Map.elems acks)) pending

Advantages:

  • Completely safe (no zombie resurrections)
  • Can garbage collect aggressively once consensus is reached
  • Works with arbitrary offline periods

Disadvantages:

  • Requires coordination (defeats CRDT’s main selling point!)
  • Slow convergence if some replicas are rarely online
  • Must track all replicas (what about replicas that never come back?)

When to use: When you have a bounded, known set of replicas and can tolerate periodic coordination rounds.

Version Vectors for Causal Tracking

Use version vectors to track causal history. Metadata can be discarded once it’s been causally superseded at all replicas.

data CausalORSet a = CausalORSet
  { elements :: Map a (Set (Tag, VersionVector))
  , replicaVersions :: Map ReplicaId VersionVector  -- Last known VV per replica
  }

-- A tag can be GC'd if its version vector is dominated by all known replicas
canDiscardTag :: (Tag, VersionVector) -> Map ReplicaId VersionVector -> Bool
canDiscardTag (_, tagVV) replicaVVs =
  all (\replicaVV -> tagVV `happenedBefore` replicaVV) (Map.elems replicaVVs)

This is more sophisticated: we track causality explicitly and can safely discard tags that are in the causal past of all known replicas.

Advantages:

  • More precise than time-based expiry
  • No coordination needed for the happy path
  • Safe as long as causal tracking is correct

Disadvantages:

  • Version vectors add significant overhead (O(replicas) per operation)
  • Still requires tracking all replicas
  • Complex to implement correctly
  • What about new replicas that join later?

When to use: Systems already using version vectors for causal consistency (Riak, Cassandra-style systems).

Bounded Structures with Fallback

Limit metadata size and use LWW semantics when bounds are exceeded. For example, keep at most 1000 tags per element in an OR-Set. If we exceed that, discard the oldest tags and accept potential anomalies.

import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

addWithBound :: Ord a => a -> Tag -> Int -> ORSet a -> ORSet a
addWithBound x tag maxTags set =
  let currentTags = Map.findWithDefault Set.empty x set
      newTags = Set.insert tag currentTags
      -- keep the newest maxTags tags (sort descending by timestamp;
      -- tagTimestamp is an assumed accessor)
      boundedTags = if Set.size newTags > maxTags
                    then Set.fromList $ take maxTags $
                         sortBy (comparing (Down . tagTimestamp)) (Set.toList newTags)
                    else newTags
  in Map.insert x boundedTags set

Advantages:

  • Bounded space overhead (guaranteed)
  • No coordination needed
  • Graceful degradation (becomes LWW-ish when bounded)

Disadvantages:

  • Correctness sacrificed for space
  • May lose concurrent operations
  • Choosing the bound is difficult (too small = frequent anomalies, too large = still wasteful)

When to use: When you must have bounded space (embedded systems, strict SLAs) and can tolerate occasional anomalies.

Checkpoint and Rebase

Periodically create a “checkpoint” snapshot and discard history before that point. New replicas joining after the checkpoint start from the snapshot.

data CheckpointedCRDT a = CheckpointedCRDT
  { baselineState :: a  -- Snapshot at checkpoint
  , checkpointTime :: Timestamp
  , deltaSince :: [Delta a]  -- Operations since checkpoint
  }

-- Create a new checkpoint, discarding old deltas
-- (the current time is passed in by the caller)
checkpoint :: Timestamp -> CheckpointedCRDT a -> CheckpointedCRDT a
checkpoint now crdt = CheckpointedCRDT
  { baselineState = foldl merge (baselineState crdt) (deltaSince crdt)
  , checkpointTime = now
  , deltaSince = []
  }

Replicas that haven’t synced since before the checkpoint must do a full state sync rather than incremental merge.

Advantages:

  • Can aggressively prune old history
  • Conceptually clean (like Git’s shallow clones)
  • Works well with mostly-online systems

Disadvantages:

  • Replicas offline during checkpoint period lose incremental sync
  • Need to track which replicas are pre-checkpoint
  • Full state sync is expensive

When to use: Collaborative editing systems where most users are online most of the time (Google Docs, Figma).

Practical Recommendations

For most applications, a hybrid approach works best:

  1. Use time-based expiry with a conservative grace period (e.g., 90 days)
  2. Track the oldest unsynced replica timestamp
  3. Only discard metadata older than min(now - grace_period, oldest_unsynced - safety_margin)
  4. Provide manual “compact” operations for administrators
  5. Use bounded structures for untrusted/public replicas

Without some form of garbage collection, CRDT state grows unbounded and will eventually exhaust memory or storage. The question isn’t whether to implement GC, but which tradeoffs you’re willing to accept.

And, realistically speaking, you’re unlikely to implement a system that only uses CRDTs and no other data storage. You’ll almost certainly have some sort of traditional database to store your data, which you can probably use to periodically coordinate garbage collection.

A note on Causal Consistency

CRDTs themselves don’t enforce causal delivery. You need a causal broadcast protocol to ensure operations are delivered respecting happens-before relationships. Without causal delivery, some CRDTs (especially operation-based ones) may behave incorrectly.

Performance

Different CRDTs have different performance characteristics. Consider your read/write ratio, expected contention, and replica count when choosing:

CRDT Type       | Space    | Add/Insert  | Remove/Delete | Merge    | Read/Query  | Notes
G-Counter       | O(r)     | O(1)        | N/A           | O(r)     | O(r)        | Space: one counter per replica
PN-Counter      | O(r)     | O(1)        | O(1)          | O(r)     | O(r)        | Double the space of G-Counter
G-Set           | O(e)     | O(1)        | N/A           | O(e)     | O(1)        | Standard set operations
2P-Set          | O(e)     | O(1)        | O(1)          | O(e)     | O(1)        | Both added and removed sets grow
LWW-Element-Set | O(e)     | O(1)        | O(1)          | O(e)     | O(1)        | Can GC old timestamps carefully
OR-Set          | O(e × t) | O(1)        | O(t)          | O(e × t) | O(1)        | Tags accumulate, needs GC
LWW-Register    | O(1)     | O(1)        | N/A           | O(1)     | O(1)        | Minimal overhead
MV-Register     | O(c)     | O(1)        | N/A           | O(c)     | O(c)        | Returns set of concurrent values
OR-Map          | O(k × t) | O(1)        | O(t)          | O(k × t) | O(1)        | Per-key OR-Set overhead
RGA             | O(n + d) | O(log n)    | O(log n)      | O(n + d) | O(n)        | Tombstones accumulate
WOOT            | O(n + d) | O(n²) worst | O(log n)      | O(n + d) | O(n²) worst | Linearization is expensive
Logoot/LSEQ     | O(n × p) | O(log n)    | O(log n)      | O(n)     | O(n log n)  | Position identifiers grow

Legend:

  • r = number of replicas
  • e = number of elements in set
  • t = average tags per element (OR-Set)
  • k = number of keys in map
  • n = number of visible elements in sequence
  • d = number of deleted elements (tombstones)
  • c = number of concurrent writes
  • p = average position identifier length

Key Observations:

  • Counter CRDTs scale with replica count, not operation count. A billion increments still cost O(replicas) space.
  • Set CRDTs generally have constant-time operations, but OR-Set’s space grows with tags unless garbage collected.
  • Sequence CRDTs suffer from tombstone accumulation. RGA is typically faster than WOOT in practice despite similar asymptotic complexity.
  • Position-based sequences (Logoot/LSEQ) trade time complexity for avoiding explicit parent pointers, but position identifiers can grow pathologically.
  • Merge operations are often the bottleneck in high-throughput systems. Delta CRDTs dramatically improve merge performance by sending only changes.

Libraries and Implementations

Many CRDT libraries exist:

  • Automerge: Full-featured CRDT library for JSON-like documents15
  • Yjs: Optimized for collaborative editing16
  • Riak: Database with built-in CRDT support17
  • Redis Enterprise: CRDT-enabled Redis18
  • AntidoteDB: CRDT-native database19

Each makes different tradeoff decisions.

Further Reading

The CRDT literature is vast and honestly a bit scattered across conference proceedings. Here are the key papers worth reading:

Foundational:

  • Shapiro, Preguiça, Baquero, Zawirski — “Conflict-free Replicated Data Types” (2011)
  • Shapiro, Preguiça, Baquero, Zawirski — “A Comprehensive Study of Convergent and Commutative Replicated Data Types” (2011)

Sequence CRDTs:

  • Oster, Urso, Molli, Imine — “Data Consistency for P2P Collaborative Editing” (2006), the WOOT paper
  • Roh et al. — “Replicated Abstract Data Types: Building Blocks for Collaborative Applications” (2011), the RGA paper
  • Weiss, Urso, Molli — “Logoot: A Scalable Optimistic Replication Algorithm for Collaborative Editing” (2009)
  • Nédelec, Molli, Mostéfaoui, Desmontils — “LSEQ: An Adaptive Structure for Sequences in Distributed Collaborative Editing” (2013)

Advanced Topics:

  • Parker et al. — “Detection of Mutual Inconsistency in Distributed Systems” (1983), the version-vector paper
  • Kleppmann et al. — “A Conflict-Free Replicated JSON Datatype” (2017)
  • Almeida, Shoker, Baquero — “Delta State Replicated Data Types” (2018)

Surveys:

  • The 2011 “Comprehensive Study” above doubles as a survey of the core designs.

Wrapping up

CRDTs are not a silver bullet. They trade coordination for metadata, strong consistency for eventual consistency, and simplicity for convergence guarantees. But in scenarios where availability matters more than immediate consistency, they’re remarkably powerful.

There is no “best” CRDT, only CRDTs suited to different problems; the CRDT you choose depends entirely on your application’s semantics:

  • What operations do you need (add, remove, re-add)?
  • Can you tolerate lost updates?
  • Do you need to detect conflicts or resolve them automatically?
  • What’s your tolerance for metadata overhead?

The CRDT abstraction is elegant in theory, but bewildering in practice because there are so many instances with subtle differences. Hopefully this guide has cut through some of the confusion, and given you a good intuition for how they work and when to use them.

I honestly still haven’t hit a use case for CRDTs that I couldn’t solve with a traditional database and some custom coordination logic. But sometimes we just want to learn for the sake of learning. If you beat me to it, let me know!

Footnotes

  1. The term “Conflict-free Replicated Data Type” was coined by Marc Shapiro, Nuno Preguiça, Carlos Baquero, and Marek Zawirski in their 2011 paper “Conflict-free Replicated Data Types” (technical report) and the 2011 SSS conference paper “A comprehensive study of Convergent and Commutative Replicated Data Types”. The theoretical foundations draw from earlier work on commutative replicated data types and optimistic replication.

  2. WOOT was introduced by Oster, Urso, Molli, and Imine in “Data Consistency for P2P Collaborative Editing” (2006). The name is a play on “OT” (Operational Transformation), emphasizing that it achieves similar goals “WithOut OT.” WOOT was one of the first practical sequence CRDTs and influenced many subsequent designs.

  3. State-based CRDTs are also called “convergent” replicated data types (CvRDT). The “Cv” stands for “convergent” - emphasizing that replicas converge to the same state by repeatedly applying the join operation.

  4. Operation-based CRDTs are also called “commutative” replicated data types (CmRDT). They require causal delivery of operations - if operation A happened before operation B on the same replica, B must not be delivered before A at any other replica.

  5. The G-Counter appears in Shapiro et al.’s 2011 technical report “A Comprehensive Study of Convergent and Commutative Replicated Data Types” as one of the foundational examples demonstrating CRDT principles.

  6. The space complexity is O(n) where n is the number of replicas, not the number of increments. This means G-Counters scale well with the number of operations but require tracking all replicas that have ever incremented the counter.

  7. The OR-Set (Observed-Remove Set) was introduced by Shapiro et al. in their 2011 technical report. It’s also known as the “Add-Wins Set” because concurrent add and remove operations result in the element remaining in the set. The key innovation is using unique tags to distinguish between different additions of the same element.

  8. Sequence CRDTs are particularly challenging because positional indices change as elements are inserted or deleted. Unlike sets or counters where elements have stable identity, sequences must maintain ordering despite concurrent modifications at arbitrary positions.

  9. RGA was introduced by Roh et al. in “Replicated Abstract Data Types: Building Blocks for Collaborative Applications” (2011). The name “Replicated Growable Array” emphasizes that it’s an array-like structure that can grow through replication.

  10. YATA (Yet Another Transformation Approach) was developed by Kevin Jahns for the Yjs collaborative editing library. It combines ideas from RGA and WOOT while optimizing for the common case of sequential insertions (typing). Yjs is used in production by companies like Braid, Row Zero, and others for real-time collaboration.

  11. Version vectors were introduced by Parker et al. in “Detection of Mutual Inconsistency in Distributed Systems” (1983). They extend Lamport’s logical clocks to track causality in distributed systems. Each replica maintains a vector of logical clocks (one for each replica), enabling precise causal ordering without requiring synchronized physical clocks.

  12. Delta CRDTs were introduced by Almeida, Shoker, and Baquero in “Delta State Replicated Data Types” (2018). They bridge the gap between state-based and operation-based CRDTs, achieving operation-based bandwidth efficiency while maintaining state-based simplicity. Most production CRDT systems (Riak, Automerge) use delta-state internally.

  13. Logoot was introduced by Weiss, Urso, and Molli in “Logoot: A Scalable Optimistic Replication Algorithm for Collaborative Editing” (2009). The name combines “log” (logarithmic complexity) with “oot” from WOOT, its predecessor. Logoot’s position-based approach influenced many subsequent CRDTs including LSEQ and Treedoc.

  14. LSEQ was introduced by Nédelec, Molli, Mostéfaoui, and Desmontils in “LSEQ: An Adaptive Structure for Sequences in Distributed Collaborative Editing” (2013). The key innovation is using different allocation strategies (boundary+ vs boundary-) based on tree depth, which keeps position identifiers shorter in practice compared to Logoot’s fixed strategy.

  15. Automerge, created by Martin Kleppmann and collaborators, implements a JSON CRDT described in “A Conflict-Free Replicated JSON Datatype” (2017). It uses a columnar encoding for efficiency and has been rewritten in Rust for performance. Used by production apps like Inkandswitch’s Pushpin.

  16. Yjs, created by Kevin Jahns, is optimized for text editing and uses the YATA algorithm. It’s notably faster than Automerge for text operations and includes bindings for popular editors like CodeMirror, Monaco, Quill, and ProseMirror.

  17. Riak, a distributed database from Basho, was one of the first production systems to adopt CRDTs (2012). It implements counters, sets, and maps as native data types, using Delta CRDTs internally to minimize bandwidth. Sadly, the company collapsed dramatically, and the project was abandoned for quite some time. I think it’s still around in a diminished form, but haven’t tried it in a while.

  18. Redis Enterprise’s CRDT support (Active-Active deployment) uses operation-based CRDTs with causal consistency. It supports strings, hashes, sets, and sorted sets with CRDT semantics, enabling multi-master Redis deployments.

  19. AntidoteDB is a research database from the SyncFree project that makes CRDTs the primary abstraction. Unlike other databases where CRDTs are a feature, AntidoteDB is designed from the ground up around CRDT semantics, providing highly available transactions over CRDTs.