68 changes: 33 additions & 35 deletions docs/ARCHITECTURE.md
@@ -1,19 +1,10 @@
# Architecture

🚧 Work in progress 🚧
TreeCRDT is organized as a multi-runtime workspace with one shared model for operations, access control, and sync. The Rust crates provide the core engine and extension targets, while the TypeScript packages provide transport, auth, runtime adapters, and browser/node integration. The goal is to keep behavior consistent across native, WASM, and SQLite-backed deployments without forking protocol or data model semantics.

## Goals
- Kleppmann Tree CRDT in Rust with clean traits for storage/indexing/access control.
- Runs native and WASM; embeddable as a SQLite/wa-sqlite extension.
- TypeScript interface stays stable across native/WASM/SQLite builds.
- Strong tests (unit/property/integration) and benchmarks (Rust + TS/WASM).
## Package Map

## Package map

This diagram is meant to answer, "What depends on what in this repo?".

Arrow direction is **depends on / uses**.
Solid arrows are runtime dependencies. Dotted arrows are build time, dev, or test connections.
This map answers one question: which packages depend on which others. Arrow direction means "depends on / uses." Solid edges are runtime dependencies, and dotted edges are build, dev, or test-time relationships.

```mermaid
flowchart TD
@@ -69,26 +60,33 @@ flowchart TD
sqlite_node -. conformance tests .-> conformance
```

## Core CRDT shape
- Operation log with `(OperationId { replica, counter }, lamport, kind)`; kinds: insert/move/delete/tombstone.
- Deterministic application rules following Kleppmann Tree CRDT; extend to support alternative tombstone semantics if needed (per linked proposal).
- Access control hooks applied before state mutation.
- Partial sync support via subtree filters + index provider for efficient fetch.

## Trait contracts (Rust)
- `Clock`: lamport/HLC pluggable (`LamportClock` provided).
- `AccessControl`: guards apply/read.
- `Storage`: append operations, load since lamport, latest_lamport.
- `IndexProvider`: optional acceleration for subtree queries and existence checks.
- These traits are the seam for SQLite/wa-sqlite implementations; extension just implements them over tables/indexes.

## WASM + TypeScript bindings
- `treecrdt-wasm`: wasm-bindgen surface mapping to `@treecrdt/interface`.
- `@treecrdt/interface`: TS types for operations, storage adapters, sync protocol, access control.
- Provide both in-memory adapter and SQLite-backed adapter (via wa-sqlite) to satisfy the interface.

## Sync engine concept
- Transport-agnostic: push/pull batches with causal metadata + optional subtree filters.
- Progress hooks for UI, resumable checkpoints via lamport/head.
- Access control enforced at responder using subtree filters and ACL callbacks.
- Draft protocol: [`sync/v0.md`](sync/v0.md)
The diagram is intentionally scoped to library/runtime packages in this repository. Example applications such as Playground are left out to keep the dependency graph readable. `@treecrdt/crypto` is currently used in app/example flows for payload encryption and is not part of the runtime dependency chain between `@treecrdt/auth` and `@treecrdt/sync`.

## Core Data Model

At the center is an append-only operation log keyed by `OperationId { replica, counter }`, with Lamport ordering metadata and a kind (`insert`, `move`, `delete`, or `tombstone`). The implementation follows deterministic Tree CRDT rules so replicas converge from the same operation set, regardless of receive order. Access checks are applied before mutation, and partial replication is supported through subtree filters plus index-assisted lookups.
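The ordering rule described above can be sketched in a few lines. This is an illustrative TypeScript sketch only; the field names approximate the shapes described in this section, not the actual `@treecrdt/interface` definitions.

```typescript
// Illustrative sketch: names approximate the doc's description, not the
// real @treecrdt/interface types.
type OperationId = { replica: string; counter: number };

type OperationKind = "insert" | "move" | "delete" | "tombstone";

interface Operation {
  id: OperationId;
  lamport: number;
  kind: OperationKind;
}

// Deterministic total order: Lamport timestamp first, then the
// (replica, counter) pair as a tiebreaker. Applying the same operation
// set in this order yields the same tree on every replica.
function compareOps(a: Operation, b: Operation): number {
  if (a.lamport !== b.lamport) return a.lamport - b.lamport;
  if (a.id.replica !== b.id.replica) return a.id.replica < b.id.replica ? -1 : 1;
  return a.id.counter - b.id.counter;
}

const ops: Operation[] = [
  { id: { replica: "b", counter: 1 }, lamport: 2, kind: "insert" },
  { id: { replica: "a", counter: 1 }, lamport: 2, kind: "move" },
  { id: { replica: "a", counter: 0 }, lamport: 1, kind: "insert" },
];
const ordered = ops.slice().sort(compareOps);
```

Because the comparator is a total order over `(lamport, replica, counter)`, receive order never affects the final state.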

## Rust Integration Seams

The Rust core is designed around trait boundaries so the same CRDT logic can run over different storage/index backends:

| Trait | Responsibility |
| --- | --- |
| `Clock` | Provides Lamport/HLC progression (`LamportClock` is included). |
| `AccessControl` | Authorizes apply/read paths. |
| `Storage` | Persists and loads operations (`append`, `load since lamport`, `latest_lamport`). |
| `IndexProvider` | Optional acceleration for subtree queries and existence checks. |

`treecrdt-sqlite-ext` and related adapters implement these seams over SQLite tables and indexes instead of re-implementing CRDT rules.
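The shape of the `Storage` seam translates naturally to an adapter interface. A minimal in-memory sketch follows; the method names are illustrative, not the exact Rust trait or `@treecrdt/interface` contract.

```typescript
// Hypothetical TypeScript rendering of the Rust `Storage` seam.
interface Op { replica: string; counter: number; lamport: number }

interface StorageAdapter {
  append(ops: Op[]): void;
  loadSince(lamport: number): Op[]; // ops with lamport > given value
  latestLamport(): number;
}

class MemoryStorage implements StorageAdapter {
  private ops: Op[] = [];
  append(ops: Op[]): void {
    this.ops.push(...ops);
  }
  loadSince(lamport: number): Op[] {
    return this.ops.filter((o) => o.lamport > lamport);
  }
  latestLamport(): number {
    return this.ops.reduce((max, o) => Math.max(max, o.lamport), 0);
  }
}

const store = new MemoryStorage();
store.append([
  { replica: "a", counter: 0, lamport: 1 },
  { replica: "a", counter: 1, lamport: 3 },
]);
```

A SQLite-backed implementation would answer the same three calls with an appends table and a lamport index, which is exactly the point of keeping the seam this narrow.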

## WASM and TypeScript Boundary

`treecrdt-wasm` exposes the Rust engine through `wasm-bindgen`, and `@treecrdt/interface` defines the shared TypeScript contract used by adapters and sync code. This keeps browser and Node clients aligned on operation shape, storage adapter behavior, and sync/auth boundaries, whether the backing store is in-memory or wa-sqlite-based.

## Sync Model

The sync layer is transport-agnostic and exchanges operation batches with causal progress metadata. Subtree filters limit scope when needed, and responders enforce authorization at read time using capability checks plus filter constraints. Progress/checkpoint state is structured so sessions can resume without replaying the full history. The wire-level draft is documented in [`sync/v0.md`](sync/v0.md).
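The resumable-checkpoint idea can be sketched as a pull exchange. Types and names here are illustrative only; the actual wire format is the draft in `sync/v0.md`.

```typescript
// Illustrative only; see sync/v0.md for the actual draft protocol.
interface SyncOp { replica: string; counter: number; lamport: number }

interface PullRequestMsg {
  sinceLamport: number;     // checkpoint carried over from the last session
  subtreeFilter?: string[]; // optional node-id scope
}

interface PullResponseMsg {
  ops: SyncOp[];
  checkpoint: number;       // highest lamport included in this batch
}

// Responder side: serve only what the checkpoint allows (authorization
// and subtree filtering would also be enforced here).
function respond(log: SyncOp[], req: PullRequestMsg): PullResponseMsg {
  const ops = log.filter((o) => o.lamport > req.sinceLamport);
  const checkpoint = ops.reduce((max, o) => Math.max(max, o.lamport), req.sinceLamport);
  return { ops, checkpoint };
}

const log: SyncOp[] = [
  { replica: "a", counter: 0, lamport: 1 },
  { replica: "b", counter: 0, lamport: 2 },
  { replica: "a", counter: 1, lamport: 4 },
];
// First pull from scratch, then resume from the returned checkpoint.
const first = respond(log, { sinceLamport: 0 });
const resumed = respond(log, { sinceLamport: first.checkpoint });
```

Resuming with the last checkpoint yields an empty batch instead of a full replay, which is what makes interrupted sessions cheap to restart.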

## Quality and Performance

The repository treats conformance and benchmarking as first-class architecture concerns. Rust and TypeScript tests cover unit and integration behavior, while `@treecrdt/sqlite-conformance` and `@treecrdt/benchmark` are used to validate correctness under realistic adapter and sync conditions, including browser/WASM paths.
75 changes: 53 additions & 22 deletions examples/playground/src/App.tsx
@@ -3,10 +3,13 @@ import { type Operation, type OperationKind } from "@treecrdt/interface";
import { bytesToHex } from "@treecrdt/interface/ids";
import { createTreecrdtClient, type TreecrdtClient } from "@treecrdt/wa-sqlite/client";
import { detectOpfsSupport } from "@treecrdt/wa-sqlite/opfs";
import { base64urlDecode } from "@treecrdt/auth";
import { encryptTreecrdtPayloadV1, maybeDecryptTreecrdtPayloadV1 } from "@treecrdt/crypto";
import {
encryptTreecrdtPayloadWithKeyringV1,
maybeDecryptTreecrdtPayloadWithKeyringV1,
type TreecrdtPayloadKeyringV1,
} from "@treecrdt/crypto";

import { loadOrCreateDocPayloadKeyB64 } from "./auth";
import { loadOrCreateDocPayloadKeyringV1, rotateDocPayloadKeyB64 } from "./auth";
import { hexToBytes16 } from "./sync-v0";
import { useVirtualizer } from "./virtualizer";

@@ -80,6 +83,8 @@ export default function App() {
});
const [online, setOnline] = useState(true);
const [payloadVersion, setPayloadVersion] = useState(0);
const [payloadRotateBusy, setPayloadRotateBusy] = useState(false);
const [payloadKeyKid, setPayloadKeyKid] = useState<string | null>(null);

const joinMode =
typeof window !== "undefined" && new URLSearchParams(window.location.search).get("join") === "1";
@@ -96,11 +101,12 @@
const counterRef = useRef(0);
const lamportRef = useRef(0);
const opfsSupport = useMemo(detectOpfsSupport, []);
const docPayloadKeyRef = useRef<Uint8Array | null>(null);
const docPayloadKeyringRef = useRef<TreecrdtPayloadKeyringV1 | null>(null);
const refreshDocPayloadKey = React.useCallback(async () => {
const keyB64 = await loadOrCreateDocPayloadKeyB64(docId);
docPayloadKeyRef.current = base64urlDecode(keyB64);
return docPayloadKeyRef.current;
const keyring = await loadOrCreateDocPayloadKeyringV1(docId);
docPayloadKeyringRef.current = keyring;
setPayloadKeyKid(keyring.activeKid);
return keyring.keys[keyring.activeKid] ?? null;
}, [docId]);

const {
@@ -115,6 +121,7 @@
showAuthAdvanced,
setShowAuthAdvanced,
authInfo,
setAuthInfo,
authError,
setAuthError,
authBusy,
@@ -215,13 +222,15 @@
);

useEffect(() => {
docPayloadKeyRef.current = null;
docPayloadKeyringRef.current = null;
setPayloadKeyKid(null);
let cancelled = false;
void (async () => {
try {
const keyB64 = await loadOrCreateDocPayloadKeyB64(docId);
const keyring = await loadOrCreateDocPayloadKeyringV1(docId);
if (cancelled) return;
docPayloadKeyRef.current = base64urlDecode(keyB64);
docPayloadKeyringRef.current = keyring;
setPayloadKeyKid(keyring.activeKid);
} catch (err) {
if (cancelled) return;
setError(err instanceof Error ? err.message : String(err));
@@ -239,12 +248,13 @@

const payloadByNodeRef = useRef<Map<string, PayloadRecord>>(new Map());

const requireDocPayloadKey = React.useCallback(async (): Promise<Uint8Array> => {
if (docPayloadKeyRef.current) return docPayloadKeyRef.current;
const next = await refreshDocPayloadKey();
if (!next) throw new Error("doc payload key is missing");
const requireDocPayloadKeyring = React.useCallback(async (): Promise<TreecrdtPayloadKeyringV1> => {
if (docPayloadKeyringRef.current) return docPayloadKeyringRef.current;
const next = await loadOrCreateDocPayloadKeyringV1(docId);
docPayloadKeyringRef.current = next;
setPayloadKeyKid(next.activeKid);
return next;
}, [refreshDocPayloadKey]);
}, [docId]);

const ingestPayloadOps = React.useCallback(
async (incoming: Operation[]) => {
@@ -275,9 +285,13 @@
}

try {
const key = await requireDocPayloadKey();
const res = await maybeDecryptTreecrdtPayloadV1({ docId, payloadKey: key, bytes: payload });
payloads.set(node, { ...meta, payload: res.plaintext, encrypted: res.encrypted });
const keyring = await requireDocPayloadKeyring();
const res = await maybeDecryptTreecrdtPayloadWithKeyringV1({ docId, keyring, bytes: payload });
if (res.encrypted && res.keyMissing) {
payloads.set(node, { ...meta, payload: null, encrypted: true });
} else {
payloads.set(node, { ...meta, payload: res.plaintext, encrypted: res.encrypted });
}
changed = true;
} catch {
payloads.set(node, { ...meta, payload: null, encrypted: true });
@@ -287,17 +301,31 @@

if (changed) setPayloadVersion((v) => v + 1);
},
[docId, replicaKey, requireDocPayloadKey]
[docId, replicaKey, requireDocPayloadKeyring]
);

const encryptPayloadBytes = React.useCallback(
async (payload: Uint8Array | null): Promise<Uint8Array | null> => {
if (payload === null) return null;
const key = await requireDocPayloadKey();
return await encryptTreecrdtPayloadV1({ docId, payloadKey: key, plaintext: payload });
const keyring = await requireDocPayloadKeyring();
return await encryptTreecrdtPayloadWithKeyringV1({ docId, keyring, plaintext: payload });
},
[docId, requireDocPayloadKey]
[docId, requireDocPayloadKeyring]
);

const handleRotatePayloadKey = React.useCallback(async () => {
setPayloadRotateBusy(true);
setAuthError(null);
try {
const rotated = await rotateDocPayloadKeyB64(docId);
await refreshDocPayloadKey();
setAuthInfo(`Rotated payload key (${rotated.payloadKeyKid}). Share a new invite/grant for peers.`);
} catch (err) {
setAuthError(err instanceof Error ? err.message : String(err));
} finally {
setPayloadRotateBusy(false);
}
}, [docId, refreshDocPayloadKey, setAuthError, setAuthInfo]);
const knownOpsRef = useRef<Set<string>>(new Set());

const treeStateRef = useRef<TreeState>(treeState);
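The keyring flow in this diff boils down to one invariant: rotation adds a new key under a fresh kid and moves `activeKid`, while old keys stay in the ring so history encrypted before rotation still decrypts (and a genuinely unknown kid surfaces as key-missing rather than an error). A minimal sketch of that state transition, with hypothetical stand-in shapes rather than the real `TreecrdtPayloadKeyringV1` type from `@treecrdt/crypto`:

```typescript
// Hypothetical stand-in for TreecrdtPayloadKeyringV1-style state; the real
// implementation lives in @treecrdt/crypto and performs actual encryption.
interface Keyring {
  activeKid: string;
  keys: Record<string, Uint8Array>;
}

// Rotation: new active kid, old keys retained for decrypting history.
function rotate(keyring: Keyring, newKid: string, newKey: Uint8Array): Keyring {
  return {
    activeKid: newKid,
    keys: { ...keyring.keys, [newKid]: newKey },
  };
}

// New payloads are written under the active kid.
function kidFor(keyring: Keyring): string {
  return keyring.activeKid;
}

// Decryption succeeds for any kid still in the ring; a missing kid is the
// `keyMissing` case handled in ingestPayloadOps above.
function canDecrypt(keyring: Keyring, kid: string): boolean {
  return kid in keyring.keys;
}

const v1: Keyring = { activeKid: "k1", keys: { k1: new Uint8Array(32) } };
const oldKid = kidFor(v1);
const v2 = rotate(v1, "k2", new Uint8Array(32));
```

After rotation, peers without the new key see payloads as encrypted-but-unreadable, which is why the rotate handler prompts for a fresh invite/grant.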
Expand Down Expand Up @@ -1176,6 +1204,9 @@ export default function App() {
authTokenCount,
authTokenScope,
authTokenActions,
payloadKeyKid,
payloadRotateBusy,
rotatePayloadKey: handleRotatePayloadKey,
authScopeSummary,
authScopeTitle,
authSummaryBadges,