Shared-Memory Transport

IndexBus regions are file-backed shared memory (mmap). Multiple processes map the same file and communicate via atomic operations on the shared #[repr(C)] layout.

How SHM Regions Work

  Process A                 /dev/shm/my_region               Process B
  ┌──────────┐            ┌────────────────────┐            ┌──────────┐
  │ Producer │──mmap─────▶│  SharedFanout      │◀─────mmap──│ Consumer │
  │          │            │  Layout (repr(C))  │            │          │
  └──────────┘            └────────────────────┘            └──────────┘
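The shared layout begins with a fixed header that every process interprets the same way. A minimal sketch follows; the field names, magic value, and layout are illustrative assumptions, not IndexBus's actual `SharedFanout` layout:

```rust
use std::sync::atomic::AtomicU32;

// Hypothetical header sketch — field names and sizes are illustrative,
// not the real IndexBus layout.
#[repr(C)]
pub struct RegionHeader {
    pub magic: u64,             // constant identifying the region type
    pub version: u32,           // layout version; bump on incompatible changes
    pub capabilities: u32,      // feature bits checked at open time
    pub layout_bytes: u64,      // total size the creator laid out
    pub initialized: AtomicU32, // 0 = raw, 1 = initializing, 2 = ready
}
```

`#[repr(C)]` fixes field order and padding, so every process that maps the file sees the same byte offsets regardless of compiler version or optimization level.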

Lifecycle

  1. Create — The creator opens or creates the SHM file and initializes the layout (initialized: 0→1→2).
  2. Open — Other processes open the file and validate before use: magic, version, capabilities, layout_bytes, and initialized state.
  3. Crash handling — The safest approach is to recreate the region rather than reattach to a potentially corrupted one.
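Step 2 can be sketched as a guard that runs before any other field of the region is touched. The constants and field names below are illustrative assumptions:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

const REGION_MAGIC: u64 = 0x4944_5842_5553_0001; // illustrative value
const REGION_VERSION: u32 = 1;

// Hypothetical header layout, mirroring the fields the text says are validated.
#[repr(C)]
pub struct RegionHeader {
    pub magic: u64,
    pub version: u32,
    pub capabilities: u32,
    pub layout_bytes: u64,
    pub initialized: AtomicU32, // 0 -> 1 -> 2 during creation
}

/// Validate a freshly mapped header before using the region (lifecycle step 2).
pub fn validate(h: &RegionHeader, mapped_bytes: u64) -> Result<(), &'static str> {
    // Acquire pairs with the creator's final Release store of `initialized`,
    // making all of the creator's earlier writes visible to this process.
    if h.initialized.load(Ordering::Acquire) != 2 {
        return Err("region not fully initialized");
    }
    if h.magic != REGION_MAGIC {
        return Err("bad magic");
    }
    if h.version != REGION_VERSION {
        return Err("version mismatch");
    }
    if mapped_bytes < h.layout_bytes {
        return Err("mapping smaller than declared layout");
    }
    Ok(())
}
```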

Production Requirements

| Requirement | Recommendation |
| --- | --- |
| Filesystem | tmpfs or hugetlbfs for SHM files |
| Permissions | Restrictive (owner-only or group-restricted) |
| Memory sizing | Ensure mapped_bytes >= layout_bytes; check at validation |
| Page locking | mlockall to avoid page faults on the hot path |
| Huge pages | Tested and supported; reduces TLB pressure for large regions |
| Core pinning | Pin router and critical consumers to dedicated cores |
| NUMA | Place region on same NUMA node as its participants |
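The page-locking row can be sketched on Linux by calling mlockall once at startup. This declares the call directly against libc; the flag values are Linux's MCL_CURRENT and MCL_FUTURE from &lt;sys/mman.h&gt;:

```rust
use std::io;

// Linux values of MCL_CURRENT / MCL_FUTURE from <sys/mman.h>.
const MCL_CURRENT: i32 = 1;
const MCL_FUTURE: i32 = 2;

extern "C" {
    fn mlockall(flags: i32) -> i32;
}

/// Lock all current and future pages so the hot path never takes a page fault.
/// Fails (errno in the returned error) if RLIMIT_MEMLOCK is too low or the
/// process lacks CAP_IPC_LOCK.
pub fn lock_all_memory() -> io::Result<()> {
    if unsafe { mlockall(MCL_CURRENT | MCL_FUTURE) } == 0 {
        Ok(())
    } else {
        Err(io::Error::last_os_error())
    }
}
```

Call this before mapping the region; MCL_FUTURE covers the mmap that follows, so the region's pages are resident from the first access.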

Security Considerations

IndexBus SHM is designed for same-host, trusted-participant IPC. It is NOT:

  • A network security boundary
  • A cryptographic secrecy or integrity mechanism
  • Safe for multi-tenant use without strong OS-level isolation

Risks and Mitigations

| Risk | Mitigation |
| --- | --- |
| Unauthorized read | Restrictive file permissions and ownership |
| Unauthorized write | Restrict SHM paths; separate per-environment namespaces |
| Data lifetime | Bytes may remain after participants exit; clean up explicitly |
| Corruption | Validate before use; recreate on failed validation |
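The first two mitigations can be sketched at file-creation time: create the backing file owner-only so other local users cannot map it. The path and helper name are illustrative:

```rust
use std::fs::{File, OpenOptions};
use std::os::unix::fs::OpenOptionsExt;

/// Create the SHM backing file with owner-only permissions (0600) and size it
/// before it is mmapped. The mode applies only when the file is created.
pub fn create_region_file(path: &str, layout_bytes: u64) -> std::io::Result<File> {
    let f = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .mode(0o600) // owner read/write only; no group/other access
        .open(path)?;
    f.set_len(layout_bytes)?; // reserve the full layout up front
    Ok(f)
}
```

For the data-lifetime risk, remove the file (std::fs::remove_file) on clean shutdown: the mapping disappears with the last unmap, but the bytes persist in the file until it is deleted.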

Delivery Semantics Summary

| Primitive | Delivery Guarantee | Ordering | Notes |
| --- | --- | --- | --- |
| SPSC events | At-most-once | FIFO | Nonblocking; loss by design |
| MPSC events | At-most-once | Per-producer FIFO | Cross-producer order not guaranteed |
| Fanout (Broadcast) | Best-effort per consumer | Per-producer FIFO | Partial delivery allowed |
| Fanout (WorkQueue) | Exactly-one consumer | Per-producer FIFO | Round-robin selection |
| State stream | Overwrite-latest | N/A (snapshot) | Even seq = stable; odd = in-progress |
| Sequencer + gating | Coordination-only | Strict monotonic | Wrap prevention bounds producer |
| Journal | Append-only | Append order | Overrun detection for slow subs |
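The even/odd sequence protocol in the State stream row works like a seqlock: the writer bumps seq to odd before writing and back to even after, so a reader retries until it observes the same even value on both sides of its copy. A minimal single-threaded sketch (a real implementation would copy the payload through UnsafeCell with volatile reads; names here are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

pub struct StateSlot {
    pub seq: AtomicU64,  // even = stable, odd = write in progress
    pub value: [u64; 2], // payload snapshot
}

/// Read a stable snapshot, retrying while a write is in progress.
pub fn read_stable(slot: &StateSlot) -> [u64; 2] {
    loop {
        let s1 = slot.seq.load(Ordering::Acquire);
        if s1 % 2 == 1 {
            continue; // odd: writer mid-update, retry
        }
        let snapshot = slot.value; // copy the payload out
        let s2 = slot.seq.load(Ordering::Acquire);
        if s1 == s2 {
            return snapshot; // seq unchanged: the copy is consistent
        }
    }
}
```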

Cross-stream and cross-producer ordering is not guaranteed.

Memory visibility uses Acquire/Release atomics. Publication commit is the synchronization point.
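The pattern can be sketched with two atomics: the producer writes the payload, then commits with a Release store; the consumer's Acquire load of the commit flag guarantees the earlier payload write is visible. The names here are illustrative:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static PAYLOAD: AtomicUsize = AtomicUsize::new(0);
static COMMIT: AtomicUsize = AtomicUsize::new(0);

/// Producer side: write the data first, then publish with Release.
pub fn publish(value: usize) {
    PAYLOAD.store(value, Ordering::Relaxed); // payload write
    COMMIT.store(1, Ordering::Release);      // commit = synchronization point
}

/// Consumer side: the Acquire load of the commit flag orders the payload
/// read after the producer's Release store.
pub fn consume() -> usize {
    while COMMIT.load(Ordering::Acquire) == 0 {
        std::hint::spin_loop();
    }
    PAYLOAD.load(Ordering::Relaxed) // guaranteed to see the producer's write
}
```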

Provenance

For the canonical source, start at the public hub to orient yourself, then jump to the repo-owned docs or rustdoc when you need contract-level detail.