Lanes (v1): core execution patterns

IndexBus has a small set of standard execution patterns that show up across crates, examples, and ops guidance.

Historically, these were sometimes referenced by lane numbers. This repo now intentionally avoids numbered naming; the patterns are described by what they do:

  • Router loop (fanout): a single-writer router step that moves events from a producer ring into one or more consumer rings.
  • Sequencer (gating): a monotonic sequence + gating/cursor scheme for strict ordering and coordinated consumption.
  • Router-enforced credits: credit accounting + optional detach/park policy to bound consumer lag and enforce backpressure.
  • Journal (append + tail/replay): an append-only segmented log optimized for tailing and replay-style reads.

This guide is explanatory (not normative). For exact semantics, see the v1 spec in docs/spec/.

Quick chooser

Use this to pick the “lane” you want:

  • Choose Router loop (fanout) when you want very high-throughput distribution from 1 producer to N consumers.
  • Choose Sequencer (gating) when you need strict ordering and stage coordination with minimal per-item overhead.
  • Choose Router-enforced credits when consumer lag must be bounded and overload behavior must be explicit (drop vs park/detach).
  • Choose Journal (append + tail/replay) when you need a replayable stream, tailing, or inspection/debug capture.

Router loop (fanout)

Router loop: What it is

A router step reads from a source queue (often a producer SPSC/MPSC ring) and writes into one or more consumer rings.

Typical roles:

  • A single router instance acts as a single writer to downstream consumer rings.
  • The router can batch to amortize per-event overhead.
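The shape of the loop can be sketched with std channels standing in for rings. This is illustrative only: the queue types, the blocking behavior, and the names are assumptions, not the real IndexBus API (which uses shared-memory rings and explicit full-queue policies).

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical event type; real rings carry fixed-size slots.
type Event = u64;

/// Single-writer router step: drain the source and fan each event out
/// to every consumer queue. Bounded `sync_channel`s stand in for rings.
fn route(source: mpsc::Receiver<Event>, sinks: &[mpsc::SyncSender<Event>]) {
    for ev in source {
        for sink in sinks {
            // A real router applies a full-queue policy here (drop/park);
            // this sketch simply blocks when a consumer queue is full.
            let _ = sink.send(ev);
        }
    }
}

fn main() {
    let (prod_tx, prod_rx) = mpsc::channel();
    let (c1_tx, c1_rx) = mpsc::sync_channel(1024);
    let (c2_tx, c2_rx) = mpsc::sync_channel(1024);

    let router = thread::spawn(move || route(prod_rx, &[c1_tx, c2_tx]));

    for ev in 0..4u64 {
        prod_tx.send(ev).unwrap();
    }
    drop(prod_tx); // closing the source ends the router loop
    router.join().unwrap();

    // Both consumers observe the full stream, in order.
    assert_eq!(c1_rx.iter().collect::<Vec<_>>(), vec![0, 1, 2, 3]);
    assert_eq!(c2_rx.iter().collect::<Vec<_>>(), vec![0, 1, 2, 3]);
}
```

The key property to notice: the router is the only writer to each downstream queue, so the consumer rings never need multi-producer synchronization.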

Router loop: Use-cases

  • Market-data style fanout (1 producer, multiple low-latency consumers).
  • Edge gateway distribution (ingest → normalize → fanout).
  • CPU isolation: keep producers fast; do routing work in a dedicated thread.

Router loop: Performance profile (what to expect)

  • The router loop adds an extra hop versus direct producer→consumer, but can recover much of that cost via batching.
  • Throughput is typically limited by memory traffic (copies, cache locality) and per-event bookkeeping.
  • Latency is sensitive to batching settings: larger batches usually improve throughput at the cost of added queueing latency.

Router loop: Operational signals

  • Watch for queue-depth growth and sustained drops indicating downstream pressure.
  • If using the router CLI, the periodic stats line exposes drops_full, drops_all_full, and batch characteristics.

Router loop: Tuning knobs (common ones)

  • Batch sizing/time (batch_max, batch_time_us): throughput vs latency trade-off.
  • Yield/idle behavior (yield_every, idle_spin_limit): CPU vs tail-latency trade-off.
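The batching trade-off can be made concrete with a small drain loop. The knob names `batch_max` and `batch_time_us` mirror the CLI options above, but the loop body itself is an illustrative sketch, not the real router implementation.

```rust
use std::sync::mpsc::{Receiver, TryRecvError};
use std::time::{Duration, Instant};

/// Drain up to `batch_max` events, but never wait longer than `batch_time_us`
/// for stragglers. Larger values favor throughput (more amortization);
/// smaller values favor latency (less queueing before dispatch).
fn next_batch(rx: &Receiver<u64>, batch_max: usize, batch_time_us: u64) -> Vec<u64> {
    let deadline = Instant::now() + Duration::from_micros(batch_time_us);
    let mut batch = Vec::with_capacity(batch_max);
    while batch.len() < batch_max && Instant::now() < deadline {
        match rx.try_recv() {
            Ok(ev) => batch.push(ev),
            Err(TryRecvError::Empty) => std::hint::spin_loop(),
            Err(TryRecvError::Disconnected) => break,
        }
    }
    batch
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    for ev in 0..10u64 {
        tx.send(ev).unwrap();
    }
    // batch_max caps the batch even though more events are queued.
    let batch = next_batch(&rx, 4, 500);
    assert_eq!(batch, vec![0, 1, 2, 3]);
}
```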

Sequencer (gating)

Sequencer: What it is

A sequencer pattern provides a monotonic sequence and gating/cursor primitives so consumers can coordinate when an item becomes visible/consumable.

This is a good fit for staged designs where:

  • Producers publish in strict sequence order.
  • Consumers wait for a sequence to be committed/advanced.
  • Multiple dependent consumers can use gating/cursors as a barrier.
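A minimal gating sketch, using only std atomics (the slot layout, names, and busy-spin wait are assumptions for illustration, not the IndexBus primitives): the producer publishes a payload, then advances a commit cursor with a release store; the consumer gates on the cursor with an acquire load before reading.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

/// Run a 1-producer/1-consumer gated exchange for `n` items and return what
/// the consumer observed. The commit cursor plus a release/acquire pair is
/// the whole visibility protocol: sequences below `commit` are consumable.
fn run_gated(n: u64) -> Vec<u64> {
    const SLOTS: usize = 8;
    assert!(n as usize <= SLOTS); // sketch: no wrap handling
    let commit = Arc::new(AtomicU64::new(0));
    let ring: Arc<Vec<AtomicU64>> =
        Arc::new((0..SLOTS).map(|_| AtomicU64::new(0)).collect());

    let (c, r) = (Arc::clone(&commit), Arc::clone(&ring));
    let producer = thread::spawn(move || {
        for seq in 0..n {
            r[seq as usize].store(seq * 10, Ordering::Relaxed); // publish payload
            c.store(seq + 1, Ordering::Release); // commit: seq is now visible
        }
    });

    let mut seen = Vec::new();
    for seq in 0..n {
        // Gate: busy-spin until the cursor covers `seq`.
        while commit.load(Ordering::Acquire) <= seq {
            std::hint::spin_loop();
        }
        seen.push(ring[seq as usize].load(Ordering::Relaxed));
    }
    producer.join().unwrap();
    seen
}

fn main() {
    assert_eq!(run_gated(4), vec![0, 10, 20, 30]);
}
```

Swapping the spin loop for backoff or wake-based blocking is exactly the wait-strategy trade-off discussed below.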

Sequencer: Use-cases

  • Low-latency staged flows: stage A publishes, stage B consumes only after a commit point.
  • Deterministic “read after publish” coordination without heavier locking.
  • Multi-stage fanout where ordering guarantees are required.

Sequencer: Performance profile (what to expect)

  • Very low per-item overhead when the hot path is a small number of atomic loads/stores.
  • Tail latency depends heavily on the chosen wait strategy (busy spin vs backoff vs wake-based blocking).
  • Some operations scale with the number of gating cursors (e.g., scanning/gating updates), so keep the gating set tight when chasing extreme tail latency.

Sequencer: Operational signals

  • If consumers are consistently behind, you’ll see growing distance between produced sequence and consumed/gated sequence.

Router-enforced credits

Router-enforced credits: What it is

Credits add a bounded-lag contract to routing/fanout:

  • Each consumer has a credit budget (a maximum amount it is allowed to lag / occupy).
  • The router enforces that budget when attempting to deliver.
  • On exhaustion, policy determines behavior (e.g., drop vs park; optionally detach after sustained exhaustion).

This is intentionally explicit overload behavior: instead of hiding pressure as “queue got big”, the system surfaces it as credit exhaustion.
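The accounting contract can be sketched as a small ledger. The type and field names (`CreditLedger`, `try_deliver`, `ack`) are hypothetical; only `drops_no_credit` mirrors a counter named later in this section.

```rust
/// Outcome of a delivery attempt under a drop policy.
#[derive(Debug, PartialEq)]
enum Delivery {
    Delivered,
    Dropped,
}

/// Hypothetical credit ledger: each consumer holds a budget of credits;
/// delivery spends one, acknowledgement refunds one, and an exhausted
/// budget surfaces as an explicit drop instead of silent queue growth.
struct CreditLedger {
    credits: Vec<u32>,    // remaining credit per consumer
    drops_no_credit: u64, // mirrors the CLI counter of the same name
}

impl CreditLedger {
    fn new(consumers: usize, budget: u32) -> Self {
        Self { credits: vec![budget; consumers], drops_no_credit: 0 }
    }

    /// Router side: spend a credit on delivery, or record a policy drop.
    fn try_deliver(&mut self, consumer: usize) -> Delivery {
        if self.credits[consumer] == 0 {
            self.drops_no_credit += 1;
            return Delivery::Dropped;
        }
        self.credits[consumer] -= 1;
        Delivery::Delivered
    }

    /// Consumer side: acking a processed event refunds one credit.
    fn ack(&mut self, consumer: usize) {
        self.credits[consumer] += 1;
    }
}

fn main() {
    let mut ledger = CreditLedger::new(1, 2); // one consumer, budget of 2
    assert_eq!(ledger.try_deliver(0), Delivery::Delivered);
    assert_eq!(ledger.try_deliver(0), Delivery::Delivered);
    assert_eq!(ledger.try_deliver(0), Delivery::Dropped); // lag bound hit
    ledger.ack(0); // consumer catches up, credit returns
    assert_eq!(ledger.try_deliver(0), Delivery::Delivered);
    assert_eq!(ledger.drops_no_credit, 1);
}
```

A park policy would replace the `Dropped` branch with a bounded wait; a detach policy would remove the consumer after sustained exhaustion.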

Router-enforced credits: Use-cases

  • Protect downstream consumers from being overwhelmed.
  • Prevent runaway memory usage / unbounded lag.
  • Make overload behavior observable and testable (drops, detaches, parks).

Router-enforced credits: Performance profile (what to expect)

  • Some additional per-item overhead versus pure routing due to credit accounting and policy checks.
  • Under overload, behavior diverges by policy:
    • Drop tends to preserve producer/router progress and provide stable latency at the cost of data loss.
    • Park tends to preserve data at the cost of router progress/latency (and is typically used where bounded blocking is acceptable).

Router-enforced credits: Operational signals

The router CLI surfaces credit stats/counters including (names may vary by mode):

  • drops_no_credit: events dropped because a consumer had no remaining credit.
  • credit_waits: how often routing attempted delivery but had to wait/loop due to credit policy.
  • detach_count: how often a consumer was detached due to sustained credit exhaustion.

If these counters trend upward under normal load, one or more of the following usually applies:

  • consumers are slower than the offered rate,
  • credit budgets are too small for the expected burstiness, or
  • batching/idle parameters are pushing too much work into bursts.

Journal (append + tail/replay)

Journal: What it is

The journal pattern is an append-only segmented log designed for:

  • fast appends,
  • fast tail/poll from subscribers,
  • bounded storage via fixed-size segments,
  • replay-style reads by walking segments.

In v1 this is a shared-memory region type and is not the same as durable storage; it’s a fast in-memory log.
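A heap-allocated sketch of the segment mechanics (illustrative only: the real v1 journal lives in a shared-memory region, and these names are not its API). Fixed-size segments bound storage; appending past capacity retires the oldest segment, which is what a slow subscriber observes as an overrun.

```rust
use std::collections::VecDeque;

/// Append-only log of fixed-size segments with bounded retention.
struct Journal {
    segments: VecDeque<Vec<u64>>,
    segment_cap: usize,
    max_segments: usize,
    base_seq: u64, // sequence number of the oldest retained entry
    next_seq: u64, // tail: next sequence to be appended
}

impl Journal {
    fn new(segment_cap: usize, max_segments: usize) -> Self {
        Self {
            segments: VecDeque::new(),
            segment_cap,
            max_segments,
            base_seq: 0,
            next_seq: 0,
        }
    }

    /// Append at the tail; retire the oldest segment once the cap is hit.
    fn append(&mut self, ev: u64) -> u64 {
        if self.segments.back().map_or(true, |s| s.len() == self.segment_cap) {
            if self.segments.len() == self.max_segments {
                let old = self.segments.pop_front().unwrap();
                self.base_seq += old.len() as u64; // readers below this overran
            }
            self.segments.push_back(Vec::with_capacity(self.segment_cap));
        }
        self.segments.back_mut().unwrap().push(ev);
        let seq = self.next_seq;
        self.next_seq += 1;
        seq
    }

    /// Replay-style read: None means the entry was retired (overrun)
    /// or the reader asked past the tail.
    fn read(&self, seq: u64) -> Option<u64> {
        if seq < self.base_seq || seq >= self.next_seq {
            return None;
        }
        let off = (seq - self.base_seq) as usize;
        Some(self.segments[off / self.segment_cap][off % self.segment_cap])
    }
}

fn main() {
    let mut j = Journal::new(2, 2); // 2 segments x 2 entries = 4 retained
    for ev in 0..6u64 {
        j.append(ev * 100);
    }
    assert_eq!(j.read(5), Some(500)); // tail is readable
    assert_eq!(j.read(2), Some(200)); // recent history survives
    assert_eq!(j.read(1), None);      // retired segment => overrun
}
```

A late joiner starts reading at `base_seq`; an overrunning subscriber recovers the same way, by skipping ahead to the oldest retained entry.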

Journal: Use-cases

  • Capture-and-replay debugging for production incidents.
  • “Late joiner” subscribers that need recent history.
  • Inspectable streams where the ability to tail is as important as fanout latency.

Journal: Performance profile (what to expect)

  • Appends are typically very fast; the main cost is data movement + segment bookkeeping.
  • Tail latency depends on subscriber polling strategy and whether subscribers overrun.
  • Overrun behavior is a core trade-off: fast producers can outpace subscribers; subscribers then detect overrun and recover by skipping ahead.

Journal: Operational signals

  • Segment metadata (tail, segment id/len) indicates whether producers are outrunning subscribers.
  • Look for overrun paths in benchmarks/examples as a guide for tuning subscriber behavior.

How to benchmark these patterns

The workspace includes criterion benchmarks that map closely to these four patterns.

Run the suite:

  • cargo bench -p indexbus-bench --bench patterns

When comparing patterns, capture at least:

  • publish→consume latency (p50/p95/p99 if you have a harness that records it),
  • steady-state throughput (events/sec),
  • CPU utilization and context switches,
  • cache-miss / branch-miss behavior (if using perf).

A good rule: benchmark in the same topology you will deploy (same core pinning, same NUMA node, same cross-process vs in-process boundary).
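If your harness records raw latency samples, a nearest-rank percentile helper is enough to produce the p50/p95/p99 summary suggested above (this helper is generic, not part of indexbus-bench):

```rust
/// Nearest-rank percentile over an ascending-sorted sample set.
fn percentile(sorted: &[u64], p: f64) -> u64 {
    assert!(!sorted.is_empty() && (0.0..=100.0).contains(&p));
    let rank = ((p / 100.0) * sorted.len() as f64).ceil().max(1.0) as usize;
    sorted[rank - 1]
}

fn main() {
    // Pretend these are publish→consume latencies in nanoseconds.
    let mut samples: Vec<u64> = (1..=100).collect();
    samples.sort_unstable();
    assert_eq!(percentile(&samples, 50.0), 50);
    assert_eq!(percentile(&samples, 95.0), 95);
    assert_eq!(percentile(&samples, 99.0), 99);
}
```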

Related documents

  • Spec: docs/spec/v1-semantics.md
  • Spec: docs/spec/v1-failure-lifecycle.md
  • Ops: docs/ops/v1-triage.md
  • Production posture: docs/contract/v1-production-profile.md
Provenance

Need the canonical source? Use the public hub to orient yourself, then jump to the repo-owned docs or rustdoc when you need contract-level detail.