Stop building projection infrastructure.

Append events, deploy a WASM function, query materialized state in milliseconds. No Kafka, no consumers, no infra to maintain.

How it works

Four API calls to production.

01

Create a log

Logs are immutable, ordered sequences of events. Create one per domain concept.

02

Append events

Send events as raw bytes. They're ordered, never modified, and never deleted.

03

Deploy a view

Upload a WebAssembly function that processes events into materialized state.

04

Query state

Read your view's current state with a single call. Reads come from materialized state, not from scanning the log.

terminal
# 1. Create a log
curl -X POST /logs \
  -d '{"name": "orders"}'
→ {"log_id": "550e8400..."}

# 2. Append events
curl -X POST /logs/$LOG_ID/append \
  -d '{"type":"order.placed","total":99.99}'
→ {"sequence": 1}

# 3. Deploy a view (WASM binary)
curl -X POST /logs/$LOG_ID/views/totals \
  -H "Content-Type: application/octet-stream" \
  --data-binary @totals.wasm
→ {"view_id": "7c9e2f01..."}

# 4. Query your view
curl -X POST "/logs/$LOG_ID/views/totals/query?after=1"
→ {"total_orders": 1, "revenue": 99.99}

Events are raw bytes. Use JSON, Protobuf, or any format your views understand.
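A view is just a fold over the log: start from empty state, apply each event in sequence order. Real views are compiled to WebAssembly; this Python sketch only illustrates the reduce semantics of the `totals` view from the terminal example above (the event shape and state fields are assumptions taken from that example).

```python
import json

def totals_view(state, event_bytes):
    """Fold one raw event into the view's materialized state.

    Illustrative only: real Primatomic views are WASM binaries.
    This view expects JSON-encoded events, but events are raw
    bytes, so any format the view can decode works.
    """
    if state is None:
        state = {"total_orders": 0, "revenue": 0.0}
    event = json.loads(event_bytes)
    if event["type"] == "order.placed":
        state["total_orders"] += 1
        state["revenue"] += event["total"]
    return state

# Replaying the log from the start rebuilds the same state deterministically.
log = [b'{"type":"order.placed","total":99.99}']
state = None
for e in log:
    state = totals_view(state, e)
# state == {"total_orders": 1, "revenue": 99.99}
```

Because the log is immutable and ordered, replaying it through the same function always yields the same state.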

Why Primatomic

Built for engineers who ship

Ship faster

No Kafka maintenance, no Kubernetes clusters, no operational complexity. Just append events and query state.

Data you can trust

Immutable logs with full history. Automatic snapshots. S3-backed archives. Node failures don't lose data.

Performance you can design around

Consistent millisecond reads from materialized views, with tail latency predictable enough to design around.

Use cases

Problems Primatomic solves

Replace your hand-rolled event replay

You're already storing events somewhere. Primatomic gives you immutable logs with built-in replay, so you can stop maintaining your own append-and-rebuild pipeline.

Audit logs that are actually queryable

Append every transaction to an immutable log. Materialized views give you real-time balances and positions without scanning history on every read.
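The ledger pattern above can be sketched as a fold: each transaction event updates per-account balances, so reads never scan history. This is an illustrative Python sketch (event fields are hypothetical), not the Primatomic API; a real view would be compiled to WASM.

```python
import json

def apply_txn(balances, event_bytes):
    """Fold one transaction event into per-account balances."""
    txn = json.loads(event_bytes)
    account = txn["account"]
    balances[account] = balances.get(account, 0) + txn["amount"]
    return balances

# The immutable log is the audit trail; the view is the queryable state.
events = [
    b'{"account":"alice","amount":100}',
    b'{"account":"alice","amount":-30}',
    b'{"account":"bob","amount":50}',
]
balances = {}
for e in events:
    balances = apply_txn(balances, e)
# balances == {"alice": 70, "bob": 50}
```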

Continuous feature materialization

Stream raw user events into WASM views that maintain up-to-date feature vectors. No batch jobs, no stale data.

Usage metering without the pipeline

Capture product events and materialize billing metrics and usage dashboards. Skip the Kafka-to-warehouse-to-dashboard chain.
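As a sketch of the metering case, a view can count billable events per customer as they arrive, replacing the batch pipeline. The event shape and `api.call` type below are hypothetical, for illustration only.

```python
import json
from collections import defaultdict

def meter(usage, event_bytes):
    """Fold one product event into per-customer usage counts."""
    event = json.loads(event_bytes)
    if event["type"] == "api.call":          # only meter billable calls
        usage[event["customer"]] += 1
    return usage

events = [
    b'{"type":"api.call","customer":"acme"}',
    b'{"type":"api.call","customer":"acme"}',
    b'{"type":"login","customer":"acme"}',   # not billable, not counted
]
usage = defaultdict(int)
for e in events:
    usage = meter(usage, e)
# usage["acme"] == 2
```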

Compare

Primatomic vs. alternatives

|                     | Primatomic       | DIY (Postgres + app code) | Kafka + consumers           |
|---------------------|------------------|---------------------------|-----------------------------|
| Time to production  | Minutes          | Weeks of custom code      | Weeks of infra + code       |
| Replay from history | Built-in         | Build it yourself         | Manual offset management    |
| Operational burden  | None             | Your database + app       | Kafka, Zookeeper, consumers |
| Consistency model   | Read-after-write | Depends on implementation | Eventual                    |

Reliability

Built for durability

Three layers of persistence keep your data durable

S3 Archives

Every event persists to S3 independently. Replay from any point in history, even after stream retention expires.

Automatic Snapshots

View state checkpoints to S3 at configurable intervals. Fast recovery without replaying entire event history.
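Snapshot recovery works by restoring the latest checkpoint and replaying only the events appended after it. Primatomic does this automatically; the sketch below (with a made-up fold and log) just shows why recovery time depends on events-since-snapshot, not total history.

```python
def recover(snapshot_state, snapshot_seq, log, fold):
    """Rebuild view state from a checkpoint plus the log's tail.

    Only events with sequence > snapshot_seq are replayed, so
    recovery cost is proportional to the gap since the last
    snapshot, not to the full event history.
    """
    state = dict(snapshot_state)
    for seq, event in log:
        if seq > snapshot_seq:
            state = fold(state, event)
    return state

log = [(1, 5), (2, 7), (3, 11)]        # (sequence, payload) pairs
fold = lambda s, v: {"sum": s["sum"] + v}
snapshot_state, snapshot_seq = {"sum": 12}, 2   # checkpoint after event 2
state = recover(snapshot_state, snapshot_seq, log, fold)
# state == {"sum": 23} — only event 3 was replayed
```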

Automatic Failover

Cluster coordination ensures exactly-once processing. When a node fails, another picks up seamlessly.

Ready to build?

Start on the free tier. No credit card required. Scale when you need to.