Deterministic, history-native computation.

AIΩN treats computation as a verifiable rewrite process: transformations are the primary artifact, not ephemeral state.

Published theory, working software, and a very opinionated claim: same inputs should converge to the same history, even when concurrency gets real.

Same inputs. Same hashes. Even under real concurrency.

Echo is the clearest software expression of AIΩN right now: a high-performance graph-rewrite engine where a committed tick is replayable, hashable, and explainable. The interesting part is not just replay. It is replay when multiple independent rewrites are admitted, sorted, executed in parallel, and still converge on one canonical result.

Same hashes

Cross-machine replay

Echo’s determinism claim is explicit: run the same inputs on different machines and the tick hashes should converge.
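That claim can be sketched in a few lines. The sketch below hashes a tick's committed rewrites with FNV-1a, a hash whose output depends only on the input bytes, so the same canonical sequence of rewrites yields the same hash on any machine. The triple layout and all names here are illustrative assumptions, not Echo's actual API or hash function.

```rust
// Stable 64-bit FNV-1a: output depends only on the bytes, never on the
// machine, the run, or any random seed.
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in bytes {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

// Hash a tick as the concatenation of its (scope_hash, rule_id, nonce)
// triples, assumed to already be in canonical commit order.
fn tick_hash(rewrites: &[(u64, u32, u64)]) -> u64 {
    let mut buf = Vec::new();
    for &(scope, rule, nonce) in rewrites {
        buf.extend_from_slice(&scope.to_le_bytes());
        buf.extend_from_slice(&rule.to_le_bytes());
        buf.extend_from_slice(&nonce.to_le_bytes());
    }
    fnv1a(&buf)
}

fn main() {
    let tick = vec![(7, 1, 0), (9, 2, 1)];
    // Same inputs, same hash, on any machine and any run.
    assert_eq!(tick_hash(&tick), tick_hash(&tick.clone()));
    println!("tick hash: {:016x}", tick_hash(&tick));
}
```

Fixed little-endian serialization matters here: a hash over in-memory representations would quietly pick up platform differences that a byte-level encoding rules out.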

scope_hash → rule_id → nonce

Canonical scheduler order

The commit order is derived from stable keys rather than thread timing, so drain order remains explainable and reviewable.
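A minimal sketch of that idea: sort admitted rewrites by the stable (scope_hash, rule_id, nonce) key named above, so two runs that admit the same rewrites in different thread orders drain identically. The struct is illustrative, not Echo's real type.

```rust
// Derived Ord compares fields in declaration order:
// scope_hash first, then rule_id, then nonce — matching the key above.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct RewriteKey {
    scope_hash: u64,
    rule_id: u32,
    nonce: u64,
}

// Canonical drain order is a pure function of the admitted set,
// independent of the timing of the threads that produced it.
fn canonical_order(mut admitted: Vec<RewriteKey>) -> Vec<RewriteKey> {
    admitted.sort();
    admitted
}

fn main() {
    // Two "runs" admit the same rewrites in opposite arrival orders...
    let run_a = vec![
        RewriteKey { scope_hash: 9, rule_id: 1, nonce: 0 },
        RewriteKey { scope_hash: 3, rule_id: 2, nonce: 5 },
    ];
    let run_b: Vec<_> = run_a.iter().rev().cloned().collect();
    // ...but drain in the same canonical order.
    assert_eq!(canonical_order(run_a), canonical_order(run_b));
}
```

Because the order is derived from data rather than wall-clock arrival, a reviewer can reconstruct and audit the drain order from the admitted set alone.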

BOAW

Parallel without surrendering determinism

Workers can fan out against immutable views, but the merge path is still forced back into one canonical committed worldline.
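The fan-out/merge shape can be sketched with plain standard-library threads: workers read an immutable shared view in parallel and propose rewrites, and the merge step re-sorts by a stable key before commit, so the result never depends on which worker finished first. Worker count, the key choice, and all names are assumptions for illustration, not Echo's BOAW implementation.

```rust
use std::sync::Arc;
use std::thread;

// Workers fan out over an immutable view; the merge path forces one
// canonical order regardless of thread completion order.
fn fan_out_merge(view: Vec<u64>) -> Vec<(u64, u64)> {
    let view = Arc::new(view); // shared, read-only during the fan-out
    let handles: Vec<_> = (0..4usize)
        .map(|w| {
            let view = Arc::clone(&view);
            thread::spawn(move || -> Vec<(u64, u64)> {
                // Each worker proposes (stable key, payload) rewrites
                // for its strided slice of the view.
                view.iter().skip(w).step_by(4).map(|&n| (n, n * 2)).collect()
            })
        })
        .collect();

    // Collect in whatever order threads happen to finish...
    let mut merged: Vec<(u64, u64)> = handles
        .into_iter()
        .flat_map(|h| h.join().unwrap())
        .collect();

    // ...then force one canonical commit order by the stable key.
    merged.sort_by_key(|&(key, _)| key);
    merged
}

fn main() {
    let merged = fan_out_merge((0u64..8).collect());
    assert_eq!(merged.len(), 8);
    assert_eq!(merged[0], (0, 0));
}
```

The immutable view is what makes the parallel phase safe, and the post-merge sort is what makes the committed worldline canonical; neither alone is sufficient.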

WSC

Zero-copy snapshot format

Write-Streaming Columnar snapshots are mmap-friendly and built for reload plus verification, not just storage.
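A toy illustration of why a columnar layout is mmap-friendly: columns are appended as contiguous little-endian byte runs behind a small offset header, so a reader (or an mmap) can hand back a column as a borrowed slice without copying or parsing row by row. This is a sketch of the idea only; the actual WSC format is not shown here.

```rust
use std::convert::TryInto;

// Layout: [column count: u32][offset per column: u32...][column bytes...]
fn encode(columns: &[Vec<u32>]) -> Vec<u8> {
    let mut body = Vec::new();
    let mut offsets = Vec::new();
    for col in columns {
        offsets.push(body.len() as u32);
        for v in col {
            body.extend_from_slice(&v.to_le_bytes());
        }
    }
    let mut out = Vec::new();
    out.extend_from_slice(&(columns.len() as u32).to_le_bytes());
    for off in &offsets {
        out.extend_from_slice(&off.to_le_bytes());
    }
    out.extend_from_slice(&body);
    out
}

// Zero-copy read: return a byte slice into the snapshot buffer,
// exactly as a reader over an mmap'd file would.
fn column(buf: &[u8], idx: usize) -> &[u8] {
    let n = u32::from_le_bytes(buf[0..4].try_into().unwrap()) as usize;
    let header = 4 + 4 * n;
    let start =
        u32::from_le_bytes(buf[4 + 4 * idx..8 + 4 * idx].try_into().unwrap()) as usize;
    let end = if idx + 1 < n {
        u32::from_le_bytes(buf[8 + 4 * idx..12 + 4 * idx].try_into().unwrap()) as usize
    } else {
        buf.len() - header
    };
    &buf[header + start..header + end]
}

fn main() {
    let snap = encode(&[vec![1, 2, 3], vec![7]]);
    assert_eq!(column(&snap, 0).len(), 12); // three u32s, 4 bytes each
    assert_eq!(column(&snap, 1), &7u32.to_le_bytes()[..]);
}
```

Because reload is just slicing, verification can hash the same byte ranges it reads, which is the "reload plus verification" property the format is built for.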

Scheduler drain throughput

Measured throughput in current local Criterion runs stays within the same band as batch size climbs from 10 rewrites to 30K rewrites.

10 rewrites: 0.76M/s
100 rewrites: 0.94M/s
1K rewrites: 0.91M/s
3K rewrites: 0.73M/s
10K rewrites: 0.71M/s
30K rewrites: 0.69M/s

Source: local Criterion artifacts in ~/git/echo/target/criterion/scheduler_drain

Parallelism study

Echo’s January sharding study showed that partitioned execution could hit roughly 4.7x the serial baseline on a 10-core Apple Silicon machine for the tested workload.

Serial baseline: 12.18 TPS

Single-threaded iteration over a monolithic store.

Parallel (Rayon): 57.52 TPS

Thread-pool parallelism as an upper-bound reference for the compute path.

Sharded store: 56.92 TPS

Queue-per-CPU style partitioning came in at 4.67x the serial baseline.

Source: ~/git/echo/docs/benchmarks/parallelism-study.md

Independent worldlines. Shared graph state. No opaque merges.

git-warp takes the same AIΩN instinct and pushes it into collaborative computation. Each participant writes to an independent worldline backed by normal Git objects, and replicas deterministically materialize a shared graph state without rewriting history. The point is not a new database. The point is preserving intent and provenance in infrastructure developers already know how to operate.

One worldline each

Participant-local authorship

Each replica writes its own causal chain instead of pretending all collaboration should collapse into one mutable branch.

Git-native

Content-addressed storage you already have

git-warp rides on Git commits, which means no custom database and no special server requirement.

Deterministic materialization

Replicas stay faithful observers

Shared state is derived from causal history in a deterministic order, so replicas can agree without erasing provenance.

CRDT-friendly

Distributed by construction

The model is built for independent writers, partial connectivity, and replayable convergence rather than central coordination.

How git-warp works

git-warp is compelling because it keeps collaboration legible. The worldlines remain first-class, and the shared graph is derived from them rather than replacing them.

  1. A participant authors patches on an independent worldline.

  2. Git stores and transports those patches through normal push/pull mechanics.

  3. Replicas deterministically materialize a shared graph state from causal history.

  4. Intent and provenance stay visible because the original worldlines are preserved.
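The materialization step above can be sketched as a pure fold: every replica collects the same set of worldline patches, sorts them by the same total order, and derives the same shared state, while the per-participant worldlines stay untouched. The (lamport, author) ordering key and last-writer-wins merge here are common CRDT-style choices assumed for illustration; git-warp's actual ordering rule and patch shape may differ.

```rust
use std::collections::BTreeMap;

// One patch on a participant's worldline. The ordering key is carried
// in the patch itself, so every replica can recompute the same order.
#[derive(Clone)]
struct Patch {
    lamport: u64,         // logical timestamp
    author: u32,          // tie-breaker: participant id
    key: &'static str,
    value: i64,
}

// Derive shared state from causal history in a deterministic order.
// The input worldlines are read-only; materialization never rewrites them.
fn materialize(worldlines: &[Vec<Patch>]) -> BTreeMap<&'static str, i64> {
    let mut all: Vec<Patch> = worldlines.iter().flatten().cloned().collect();
    // Same total order on every replica, regardless of fetch order.
    all.sort_by_key(|p| (p.lamport, p.author));
    let mut state = BTreeMap::new();
    for p in all {
        state.insert(p.key, p.value); // last-writer-wins per key
    }
    state
}

fn main() {
    let alice = vec![Patch { lamport: 1, author: 0, key: "x", value: 1 }];
    let bob = vec![Patch { lamport: 2, author: 1, key: "x", value: 5 }];
    // Replicas that receive the worldlines in different orders converge.
    assert_eq!(
        materialize(&[alice.clone(), bob.clone()]),
        materialize(&[bob, alice])
    );
}
```

Because the shared graph is a derived view, disagreement about state can always be traced back to the authored patches, which is the provenance property the section above emphasizes.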

From worldline algebra to provenance sovereignty.

The AIΩN Foundations Series is cumulative literature, not a grab bag of papers. It starts by defining WARP graphs, then makes execution canonical, recovers full derivations from compact histories, formalizes observer geometry, pushes into emergence, confronts ethics, and finally carries the work toward architecture.

  1. A Worldline Algebra for Recursive Provenance

    Defines WARP graphs as the “graphs all the way down” substrate: one recursive object model for hierarchy, syntax, control flow, and provenance.

    Open paper →
  2. Canonical State Evolution and Deterministic Worldlines

    Gives WARP graphs deterministic concurrent semantics via double-pushout rewriting, so scheduler-admissible rewrites commit to the same successor regardless of internal serialization order.

    Open paper →
  3. Computational Holography & Provenance Payloads

    Shows that a deterministic worldline can be reconstructed from compact boundary data, then builds practical machinery for slicing, branching, and multi-tick compression.

    Open paper →
  4. Rulial Distance & Observer Geometry

    Introduces resource-bounded observers, translation cost between descriptions, and the Chronos-Kairos-Aion framing for time and abstraction.

    Open paper →
  5. Emergent Dynamics from Deterministic Rewrite Systems

    Pushes the framework into emergence: quantum-like interference, unitary behavior, and thermodynamic asymmetry as artifacts of deterministic rewrite histories under coarse-graining.

    Open paper →
  6. Ethics of Deterministic Replay & Provenance Sovereignty

    Extends the program into ethics: for mind-like systems, complete provenance is not merely diagnostics but interior life in executable form.

    Open paper →
  7. Architecture & Operating System

    Moves from theory to architecture: Continuum, observer interfaces, kernel semantics, and what a WARP-native operating system would actually look like.

    Coming soon

Continuum is the operating-system line implied by Paper VII and remains early. The strongest present-day software story is Echo and git-warp.

Related work.