
Consistency Model

Primatomic views process events asynchronously. This document specifies the consistency guarantees and how to achieve read-after-write consistency.

  • Events within a log are assigned strictly monotonically increasing sequence numbers starting at 1.
  • Each event is stored exactly once with a unique sequence number.
  • The service never reorders or skips events in the log.
  • Views process events in sequence order.
  • View execution is at-least-once: events may be reprocessed after failures or leader changes.
  • Views must be deterministic: applying the same event sequence always produces the same state.
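The determinism requirement can be pictured as a pure fold over the event sequence. The sketch below is illustrative only (the LogEvent shape, CounterState, apply, and replay are not part of the Primatomic API); it also shows one way to stay correct under at-least-once execution by skipping already-applied sequence numbers:

```typescript
// Illustrative model of a deterministic view: a pure fold over events.
type LogEvent = { sequence: number; action: "increment" | "decrement" };

interface CounterState {
  count: number;
  processedSequence: number; // highest sequence applied so far
}

// Deterministic: no clocks, randomness, or external I/O. The sequence
// check makes redelivered events (at-least-once execution) no-ops.
function apply(state: CounterState, event: LogEvent): CounterState {
  if (event.sequence <= state.processedSequence) return state;
  return {
    count: state.count + (event.action === "increment" ? 1 : -1),
    processedSequence: event.sequence,
  };
}

function replay(events: LogEvent[]): CounterState {
  return events.reduce(apply, { count: 0, processedSequence: 0 });
}
```

Because apply is pure and duplicate-safe, replaying the same sequence after a failure or leader change converges to the same state.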
Query consistency depends on whether the after parameter is supplied:

Mode                     Guarantee
With after parameter     Response reflects all events up to and including that sequence
Without after parameter  Response may be arbitrarily stale

When you append an event and immediately query a view, the view may not have processed your event:

Client                       Log                          View
  |                           |                            |
  |-- append event ---------->|                            |
  |<-- sequence: 5 -----------|                            |
  |                           |                            |
  |-- query view ------------------------------------------>|
  |<-- stale result (processed up to seq 3) ----------------|

To guarantee consistency, pass the after parameter when querying:

Terminal window
# Append returns sequence number
curl -X POST https://api.primatomic.com/logs/$LOG_ID/append \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/octet-stream" \
  -d '{"action":"increment"}'
# Response: {"success": true, "sequence": 5}

# Query with after waits for view to catch up
curl -X POST "https://api.primatomic.com/logs/$LOG_ID/views/my-view/query?after=5" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @query.bin
# Response: binary data (format defined by view)

When the after parameter is provided:

Client                       Log                          View
  |                           |                            |
  |-- append event ---------->|                            |
  |<-- sequence: 5 -----------|                            |
  |                           |-- event 5 ---------------->|
  |                           |                            |
  |-- query (after=5) --------------------------------------->|
  |                           |                            | (waits)
  |                           |                            | (processes)
  |<-- result (includes event 5) ----------------------------|

The service does not return a response until the view has processed all events up to the requested sequence.

Use stale reads when:

  • Reading dashboards or analytics where eventual consistency is acceptable
  • Polling for updates
  • Latency is more critical than freshness

Terminal window
curl -X POST https://api.primatomic.com/logs/$LOG_ID/views/my-view/query \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @query.bin

Use consistent reads when:

  • Displaying results of a user action
  • Building UI that reflects recent mutations
  • Requiring deterministic results for testing

Terminal window
curl -X POST "https://api.primatomic.com/logs/$LOG_ID/views/my-view/query?after=5" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @query.bin

Clients implementing read-after-write consistency should:

  1. Store the sequence number returned from append operations
  2. Pass the sequence number to subsequent queries
  3. Handle timeout errors with retry logic
async function appendAndQuery(
  logName: string,
  viewName: string,
  event: Uint8Array,
  query: Uint8Array
): Promise<Uint8Array> {
  // 1. Append event and capture sequence
  const appendResult = await api.appendLog(logName, event);
  const sequence = appendResult.sequence;

  // 2. Query with sequence for consistency
  const result = await api.queryView(
    logName,
    viewName,
    query,
    sequence // Required for consistency
  );
  return result;
}

The Idempotency-Key header enables safe retries for append operations using NATS JetStream’s message deduplication.

Behavior              Description
Deduplication window  Keys are deduplicated within the stream's duplicate window (configured via nats.duplicate_window_secs; default: 1 hour in production)
Outside window        The same key may append again after the window expires
Payload mismatch      If a client reuses the same key with a different payload inside the window, JetStream still treats it as a duplicate ("first payload wins"; no error is returned)

The Idempotency-Key header is optional for single appends:

Terminal window
curl -X POST https://api.primatomic.com/logs/$LOG_ID/append \
  -H "Authorization: Bearer $TOKEN" \
  -H "Idempotency-Key: evt-abc123" \
  -H "Content-Type: application/octet-stream" \
  -d '{"action":"increment"}'

If the request times out and the client retries with the same Idempotency-Key, the duplicate is silently ignored and the original sequence number is returned.

The Idempotency-Key header is required for batch appends. Each event in the batch gets a per-event key in the format {base}:{index}:

Terminal window
curl -X POST https://api.primatomic.com/logs/$LOG_ID/append_batch \
  -H "Authorization: Bearer $TOKEN" \
  -H "Idempotency-Key: batch-xyz789" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @events.bin

If events.bin contains three events, this creates events with keys batch-xyz789:0, batch-xyz789:1, and batch-xyz789:2.

Why this matters: If a network failure occurs after 7 of 10 events are published, retrying the entire batch is safe. JetStream deduplicates the first 7 events and only appends the remaining 3.
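The per-event key derivation is simple enough to reproduce client-side, which can help when correlating retries in logs. This helper (illustrative, not part of any official client) just applies the documented {base}:{index} format:

```typescript
// Derive the per-event idempotency keys for a batch, following the
// documented {base}:{index} format.
function batchEventKeys(baseKey: string, eventCount: number): string[] {
  return Array.from({ length: eventCount }, (_, i) => `${baseKey}:${i}`);
}
```

Because the derivation is deterministic, retrying a batch with the same base key reproduces the same per-event keys, which is what lets JetStream deduplicate the already-published events.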

Scenario                  Recommendation
Critical business events  Use idempotency keys
Batch operations          Required (enforced)
Analytics/telemetry       Optional if duplicates are acceptable
Retryable operations      Use idempotency keys

Keys should be:

  • Unique per logical operation (e.g., UUID, request ID)
  • Deterministic for retries (same key on retry)
  • Scoped appropriately (per-user, per-session, or global)
// Good: UUID per operation
const key = crypto.randomUUID();
// Good: Deterministic from operation context
const key = `user-${userId}-order-${orderId}`;
// Bad: Timestamp (not deterministic on retry)
const key = Date.now().toString();

If your application requires stronger guarantees (e.g., detecting payload mismatches), include a unique identifier in your event payload and deduplicate in your view:

fn append(&mut self, event: Vec<u8>) {
    let event: Event = serde_json::from_slice(&event).unwrap();
    if self.processed_ids.contains(&event.event_id) {
        return; // Already processed, skip
    }
    self.processed_ids.insert(event.event_id.clone());
    // Process event...
}
Property        Specification
Starting value  Sequence numbers start at 1
Increment       Each append increments the sequence by exactly 1
Gaps            Sequence numbers have no gaps
Scope           Sequence numbers are scoped to a single log
Uniqueness      Each sequence number is assigned to exactly one event
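These invariants can be verified mechanically when reading a log from the beginning. The checker below is a sketch, not part of the API; for a full log read from sequence 1, the invariants collapse to "position i holds sequence i + 1":

```typescript
// Verify the documented sequence invariants for a log read from the
// beginning: starts at 1, increments by exactly 1, no gaps, no duplicates.
function validateSequences(sequences: number[]): boolean {
  return sequences.every((seq, i) => seq === i + 1);
}
```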
Parameter               Value
Default timeout         4 seconds
Client override         Not supported (server-enforced)
HTTP status on timeout  504 Gateway Timeout
Scope                   Per-request, leader-local

If the view cannot reach the requested sequence within the timeout period, the service returns:

HTTP Status: 504 Gateway Timeout

{
  "error": "Timeout waiting for view {view_key} to catch up (target: {sequence}, current: {current})"
}

This error may occur when:

  • The view is processing a large backlog
  • The leader node is overloaded
  • Network issues exist between nodes
  • The requested sequence does not exist (higher than log high watermark)

Handle this error with exponential backoff:

async function queryWithRetry(
  logName: string,
  viewName: string,
  query: Uint8Array,
  after: number,
  maxRetries: number = 5
): Promise<Uint8Array> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await api.queryView(logName, viewName, query, after);
    } catch (error) {
      // Retry on timeout or leader change
      const isRetryable = error.status === 504 || error.status === 502;
      if (isRetryable && i < maxRetries - 1) {
        await sleep(100 * Math.pow(2, i));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}

Check view processing progress:

Terminal window
curl .../logs/my-log/views/my-view/stats

{
  "view_name": "my-view",
  "processed_sequence": 42,
  "leader_status": "ready"
}
Field               Description
processed_sequence  The highest sequence number the view has processed
leader_status       "ready" indicates the view is caught up; null indicates no active leader

Use this endpoint to monitor view lag before querying with high sequence numbers.
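For example, a client could poll the stats endpoint until the view reaches a target sequence before issuing an expensive consistent query. The sketch below assumes a fetch-compatible runtime; waitForView, its defaults, and the retry bounds are illustrative, not an official client:

```typescript
// Sketch: poll view stats until the view has processed `target`.
// The endpoint and response shape follow the stats example above.
interface ViewStats {
  view_name: string;
  processed_sequence: number;
  leader_status: string | null;
}

async function waitForView(
  logId: string,
  viewName: string,
  token: string,
  target: number,
  pollMs = 250,
  maxPolls = 20
): Promise<ViewStats> {
  for (let i = 0; i < maxPolls; i++) {
    const res = await fetch(
      `https://api.primatomic.com/logs/${logId}/views/${viewName}/stats`,
      { headers: { Authorization: `Bearer ${token}` } }
    );
    const stats: ViewStats = await res.json();
    // Done once a leader is active and the view has caught up.
    if (stats.leader_status === "ready" && stats.processed_sequence >= target) {
      return stats;
    }
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  throw new Error(`view ${viewName} did not reach sequence ${target}`);
}
```

Polling trades extra round trips for avoiding 504s on the query itself; for a single read-after-write, passing after directly is usually simpler.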