Real-Time Aggregations

Live metrics without the duct tape.

Replace your Kafka-to-Redis-to-dashboard pipeline with one API. Append events, deploy an aggregation function, query the result.

Without Primatomic

Kafka → Flink/Spark → Redis → Dashboard

Consumer lag means the dashboard is minutes behind

Changing an aggregation means redeploying a Flink job

Three separate systems to monitor and scale

With Primatomic

Append → WASM view → Query

Read-after-write, always current

Deploy a new WASM view, replay history

One managed API

How it works

Four calls to live aggregations

1

Create an event stream

One append-only log holds your event stream: page views, transactions, sensor readings, whatever.

2

Append events as they happen

Fire events from your app. Primatomic handles ordering and durability.

3

Deploy aggregation views

Upload WASM functions for counters, funnels, session tracking, or any aggregation.

4

Query live metrics

Pull current aggregated state with read-after-write consistency.
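To make step 3 concrete, here is a local sketch of the logic a "page-counts" view might implement. In Primatomic the view is a compiled WASM function with its own ABI; this plain-Python fold is only a model of the computation, not the real module format.

```python
def apply_event(state: dict, event: dict) -> dict:
    """Fold one event into the aggregated state (page path -> view count)."""
    if event.get("type") == "page.viewed":
        page = event["page"]
        state[page] = state.get(page, 0) + 1
    return state

def materialize(events: list) -> dict:
    """Replay the whole log, in order, to build the current view state."""
    state: dict = {}
    for event in events:
        state = apply_event(state, event)
    return state

events = [
    {"type": "page.viewed", "page": "/pricing", "session_id": "sess_8a2f"},
    {"type": "page.viewed", "page": "/docs", "session_id": "sess_8a2f"},
    {"type": "page.viewed", "page": "/pricing", "session_id": "sess_19c0"},
]
print(materialize(events))  # {'/pricing': 2, '/docs': 1}
```

Because the view is a pure fold over the ordered log, the same function that keeps metrics current on new appends can also rebuild state from sequence 1 during a replay.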

terminal
# Create an event stream
POST /logs
  -d '{"name": "product-events"}'
 {"log_id": "550e8400..."}

# Append an event
POST /logs/$LOG_ID/append
  -d '{"type": "page.viewed",
       "page": "/pricing",
       "session_id": "sess_8a2f"}'
 {"sequence": 1}

# Deploy an aggregation view
POST /logs/$LOG_ID/views/page-counts
  --data-binary @page_counts.wasm
 {"view_id": "d4e5f6a7..."}

# Query live metrics
POST /logs/$LOG_ID/views/page-counts/query
 {"/pricing": 4821, "/docs": 12040, "/signup": 891}
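The terminal calls above can be driven from application code. The sketch below wraps the four documented endpoints with Python's standard library; the base URL is a placeholder, authentication is omitted, and the empty JSON body on the query call is an assumption (the docs show no body).

```python
import json
import urllib.request

BASE = "https://api.primatomic.example"  # hypothetical host; use your real endpoint

def post(path: str, body: bytes, content_type: str) -> dict:
    """POST a raw body to an API path and decode the JSON response."""
    req = urllib.request.Request(BASE + path, data=body,
                                 headers={"Content-Type": content_type},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def post_json(path: str, payload: dict) -> dict:
    return post(path, json.dumps(payload).encode(), "application/json")

def live_metrics() -> dict:
    # 1. Create an event stream
    log_id = post_json("/logs", {"name": "product-events"})["log_id"]

    # 2. Append an event as it happens
    post_json(f"/logs/{log_id}/append",
              {"type": "page.viewed", "page": "/pricing",
               "session_id": "sess_8a2f"})

    # 3. Deploy an aggregation view (a compiled WASM module)
    with open("page_counts.wasm", "rb") as f:
        post(f"/logs/{log_id}/views/page-counts", f.read(), "application/wasm")

    # 4. Query live metrics with read-after-write consistency
    return post(f"/logs/{log_id}/views/page-counts/query", b"{}",
                "application/json")

# metrics = live_metrics()  # e.g. {"/pricing": 4821, ...}
```

The call at the bottom is left commented out: running it requires a live endpoint, credentials, and a compiled `page_counts.wasm` on disk.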

Why Primatomic

No pipeline to maintain.

One API replaces Kafka + a stream processor + a cache layer. Your ops team will thank you.

Replayable aggregations.

Made a mistake in your aggregation logic? Fix the WASM function, replay from history, and the corrected metrics are live in minutes.
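A toy model of that replay workflow, with plain Python functions standing in for the deployed WASM modules: because the log retains full history, a corrected view simply rebuilds its state from the first event.

```python
# The retained event log: replay always starts from sequence 1.
events = [
    {"type": "page.viewed", "page": "/pricing"},
    {"type": "page.viewed", "page": "/Pricing"},  # inconsistent casing
    {"type": "page.viewed", "page": "/docs"},
]

def buggy_view(log: list) -> dict:
    """v1: forgets to normalize paths, so one page splits into two keys."""
    counts: dict = {}
    for e in log:
        counts[e["page"]] = counts.get(e["page"], 0) + 1
    return counts

def fixed_view(log: list) -> dict:
    """v2: normalizes casing before counting."""
    counts: dict = {}
    for e in log:
        page = e["page"].lower()
        counts[page] = counts.get(page, 0) + 1
    return counts

print(buggy_view(events))  # {'/pricing': 1, '/Pricing': 1, '/docs': 1}
# Deploy the corrected view and replay history; the metrics come out right:
print(fixed_view(events))  # {'/pricing': 2, '/docs': 1}
```

The stream-processor equivalent, resetting consumer offsets and reprocessing through an external cache, has no such clean "swap the function, rerun the fold" shape.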

Consistent reads.

No "the dashboard says X but the database says Y." Views are materialized with read-after-write consistency.

How Primatomic compares

|                          | Primatomic                  | Kafka + Flink/Spark              | Tinybird / Rockset        |
|--------------------------|-----------------------------|----------------------------------|---------------------------|
| Setup complexity         | One API, no infra           | Kafka cluster + stream processor | Managed, SQL-centric      |
| Custom aggregation logic | Any WASM function           | Java/Scala/Python jobs           | SQL queries               |
| Replay/reprocess         | Built-in; deploy a new view | Reset offsets, hope for the best | Re-ingest data            |
| Consistency              | Read-after-write            | Eventual                         | Near-real-time            |
| Cost at low volume       | Free tier                   | Expensive minimum footprint      | Compute-per-query pricing |

Live metrics, zero pipeline.

Start on the free tier. No credit card required.