Usage Metering

Usage metering without the pipeline.

Capture every billable event. Materialize running totals. Query on demand. No Kafka, no batch jobs, no stale data.

Without Primatomic

Kafka → consumer → aggregator → cache → billing API

Batch jobs that run hourly, data always slightly stale

Revenue leakage from dropped events or double-counting

Separate pipeline per billing metric

With Primatomic

Append event → WASM view → query

Continuous materialization, read-after-write

Immutable log, exactly-once fold processing

One log, multiple views for different metrics

How it works

Four API calls take you from zero to production usage metering.

1

Create a usage log

One log per customer, billing entity, or whatever maps to your pricing model.

2

Append usage events

Emit events from your API gateway, app layer, or anywhere usage happens.

3

Deploy billing views

Upload WASM functions for monthly totals, current period usage, rate limit checks.

4

Query at billing time

Pull aggregated usage for invoicing, rate limiting, or real-time dashboards.

terminal
# Create a usage log
curl -X POST "$API/logs" \
  -d '{"name": "customer-acme-usage"}'
# → {"log_id": "550e8400..."}

# Append a usage event
curl -X POST "$API/logs/$LOG_ID/append" \
  -d '{"type": "api.call", "endpoint": "/v1/generate",
       "tokens_in": 150, "tokens_out": 420,
       "model": "gpt-4"}'
# → {"sequence": 1}

# Deploy a billing view
curl -X POST "$API/logs/$LOG_ID/views/monthly-totals" \
  --data-binary @monthly_totals.wasm
# → {"view_id": "b2c3d4e5..."}

# Query usage for invoicing
curl -X POST "$API/logs/$LOG_ID/views/monthly-totals/query?month=2025-03"
# → {"gpt-4": {"tokens_in": 1482000, "tokens_out": 3291000}}
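A billing view like monthly-totals is just a fold over the event log. Here is a rough sketch of what such a view computes, written in Python for readability rather than as the actual WASM module; the `month` field and the `monthly_totals` function name are illustrative assumptions, with event fields borrowed from the append example above:

```python
from collections import defaultdict

def monthly_totals(events, month):
    """Fold api.call events into per-model token totals for one month.

    Hypothetical model of a monthly-totals WASM view; in practice the
    month would be derived from each event's append timestamp.
    """
    totals = defaultdict(lambda: {"tokens_in": 0, "tokens_out": 0})
    for ev in events:
        if ev["type"] == "api.call" and ev["month"] == month:
            totals[ev["model"]]["tokens_in"] += ev["tokens_in"]
            totals[ev["model"]]["tokens_out"] += ev["tokens_out"]
    return dict(totals)

events = [
    {"type": "api.call", "month": "2025-03", "model": "gpt-4",
     "tokens_in": 150, "tokens_out": 420},
    {"type": "api.call", "month": "2025-03", "model": "gpt-4",
     "tokens_in": 100, "tokens_out": 80},
]
print(monthly_totals(events, "2025-03"))
# → {'gpt-4': {'tokens_in': 250, 'tokens_out': 500}}
```

Because the log is immutable and the fold is deterministic, replaying the same events always yields the same totals, which is what makes exactly-once processing possible.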

Why Primatomic

No batch lag.

Views update continuously. Your billing dashboard and rate limiter always reflect the latest event.

Auditable by default.

Every billable event is in the immutable log. Billing disputes become "let me query the log" instead of "let me check the pipeline."

Multiple views, one log.

Monthly totals, per-minute rate checks, cost allocation by team — all derived from the same event stream without duplicating data.
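To make "multiple views, one log" concrete, here is a minimal Python sketch of two independent folds over the same event list; the field names (`team`, `model`, `tokens_out`) are illustrative assumptions, not a documented event schema:

```python
# One immutable event list, two independent views derived from it.
events = [
    {"team": "search", "model": "gpt-4", "tokens_out": 420},
    {"team": "search", "model": "gpt-4", "tokens_out": 80},
    {"team": "ads",    "model": "gpt-4", "tokens_out": 100},
]

def tokens_by_model(log):
    """Billing view: total output tokens per model."""
    out = {}
    for ev in log:
        out[ev["model"]] = out.get(ev["model"], 0) + ev["tokens_out"]
    return out

def tokens_by_team(log):
    """Cost-allocation view: total output tokens per team."""
    out = {}
    for ev in log:
        out[ev["team"]] = out.get(ev["team"], 0) + ev["tokens_out"]
    return out

print(tokens_by_model(events))  # {'gpt-4': 600}
print(tokens_by_team(events))   # {'search': 500, 'ads': 100}
```

Adding a new billing metric means deploying another fold, not standing up another pipeline.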

How Primatomic compares

|                        | Primatomic             | Kafka + custom pipeline       | Orb / Amberflo             |
| ---------------------- | ---------------------- | ----------------------------- | -------------------------- |
| Integration complexity | One API call per event | Kafka, consumers, aggregators | SDK integration            |
| Custom aggregation     | Any WASM function      | Your code, your infra         | Their supported dimensions |
| Real-time accuracy     | Read-after-write       | Eventual, depends on lag      | Near-real-time             |
| Auditability           | Immutable event log    | If you built it               | Vendor-managed             |
| Raw event access       | Always, forever        | Depends on retention          | Limited                    |

Meter usage without the pipeline.

Start on the free tier. No credit card required.