Benchmarking

ByteOr publishes benchmark material as reproducible baseline evidence for transport and runtime paths. Use these numbers to compare configurations and track regressions; do not read them as blanket guarantees for every deployment.

What this section is for

  • explain why benchmark numbers appear in the public docs
  • separate benchmark evidence from day-two tuning guidance
  • point to the repo-owned docs and harnesses that back published numbers

Read these as baselines, not promises

  • Public benchmark numbers are evidence from checked-in harnesses and conservative perf gates.
  • They are not universal production guarantees and they are not Cloud service latency promises.
  • Host posture matters: CPU pinning, scheduler policy, memlock, SHM placement, NUMA locality, and huge-page configuration can all shift the result materially.
  • Compare like-for-like: same topology, same hardware, same kernel, same privilege posture, and the same startup-inclusive versus steady-state assumption.
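Because host posture shifts results, it helps to capture a posture snapshot alongside every run so two numbers can be compared like-for-like. The sketch below is one way to do that in Python; the /proc and /sys paths are Linux-specific assumptions (off-Linux they simply read as "unknown"), and the field set is illustrative, not a ByteOr schema.

```python
# Sketch: snapshot host posture fields that commonly shift benchmark
# results, so two runs can be compared like-for-like.
# Assumption: Linux pseudo-files; anything unreadable reports "unknown".
from pathlib import Path

POSTURE_PATHS = {
    "cpu_governor": "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
    "nr_hugepages": "/proc/sys/vm/nr_hugepages",
    "kernel": "/proc/sys/kernel/osrelease",
}

def read_or_unknown(path: str) -> str:
    """Read a single-line pseudo-file, or report 'unknown' when absent."""
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "unknown"

def host_posture() -> dict[str, str]:
    """Collect a comparable posture snapshot to attach to a run record."""
    return {name: read_or_unknown(path) for name, path in POSTURE_PATHS.items()}

snapshot = host_posture()
for name, value in snapshot.items():
    print(f"{name}: {value}")
```

Attaching a snapshot like this to each published number makes "same kernel, same privilege posture" a checkable claim rather than an assumption.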

Methodology expectations

When you publish or compare numbers, include at least:

  • the benchmark name and topology
  • payload size and batching posture
  • wait strategy and placement choices
  • whether the run is startup-inclusive or steady-state
  • machine, kernel, and filesystem details
  • throughput and latency percentiles, not throughput alone
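A run record covering the fields above can be sketched as a small structured object, with latency percentiles computed next to throughput. The field names and the nearest-rank percentile helper here are illustrative assumptions, not a ByteOr schema or harness API.

```python
# Sketch: a minimal benchmark run record covering the methodology fields,
# with latency percentiles reported alongside throughput.
# Assumption: field names and values are hypothetical examples.
def percentile(sorted_samples: list[float], p: float) -> float:
    """Nearest-rank percentile over pre-sorted latency samples."""
    if not sorted_samples:
        raise ValueError("no samples")
    rank = max(1, round(p / 100 * len(sorted_samples)))
    return sorted_samples[rank - 1]

latencies_us = sorted([12.0, 14.0, 15.0, 15.5, 16.0, 18.0, 21.0, 35.0, 90.0, 400.0])

run_record = {
    "benchmark": "spsc_shm_roundtrip",   # benchmark name and topology
    "payload_bytes": 64,                 # payload size
    "batch_size": 1,                     # batching posture
    "wait_strategy": "busy_spin",        # wait strategy and placement
    "steady_state": True,                # startup-inclusive vs steady-state
    "host": {"machine": "unknown", "kernel": "unknown", "fs": "unknown"},
    "throughput_msgs_per_s": 2_500_000,
    "latency_us": {
        "p50": percentile(latencies_us, 50),
        "p99": percentile(latencies_us, 99),
        "max": latencies_us[-1],
    },
}
print(run_record["latency_us"])
```

Note how the tail (p99 and max) dominates the story here even though the median is low; that is exactly why percentiles, not throughput alone, belong in any published comparison.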

Prefer checked-in baselines and deterministic harnesses over screenshots, ad-hoc shell transcripts, or one-off local runs with undocumented host tuning.

Current public benchmark surface

Today, the public docs expose benchmark-related material in four layers:

  1. Portal-level operations guidance explains how to read and compare numbers safely.
  2. Repo-owned ByteOr OSS docs explain the checked-in harness, baseline file, and perf-gate semantics.
  3. Repo-owned IndexBus docs explain the Criterion suites and topology-specific reporting rules.
  4. Repo-owned enterprise docs explain product-surface benchmark posture and the checked-in perf gate.

That keeps the developer portal readable while still grounding performance claims in checked-in artifacts owned by the relevant repo.

What stays out of this section

  • Cloud control-plane latency claims without service-level context, limits, and SLO framing
  • isolated screenshots or one-off lab runs with no reproducible setup notes
  • doctor or preflight output treated as benchmark evidence instead of host-contract evidence

Enterprise operator guidance already draws this line explicitly: treat doctor as a contract check, not a benchmark.

Publishing pattern

The preferred publishing path is:

  1. keep the harness, baseline file, and detailed benchmark notes in the owning repo
  2. whitelist any public benchmark docs through that repo's docs/public-docs.json
  3. sync the page into this portal under Reference
  4. link to it from this operations page when the benchmark surface is mature enough for public interpretation
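A whitelist check along the lines of step 2 can be sketched as follows. The schema shown (a "public" list of doc paths) is an assumption for illustration only; the actual docs/public-docs.json format is defined by the owning repo.

```python
# Sketch: checking whether a benchmark doc is whitelisted for portal sync.
# Assumption: the manifest schema ({"public": [paths]}) is hypothetical,
# not the actual ByteOr docs/public-docs.json format.
import json

public_docs_json = json.dumps({
    "public": [
        "docs/benchmarks/baseline.md",
        "docs/benchmarks/methodology.md",
    ]
})

def is_whitelisted(manifest_text: str, doc_path: str) -> bool:
    """Return True when doc_path is listed for syncing into the portal."""
    manifest = json.loads(manifest_text)
    return doc_path in manifest.get("public", [])

print(is_whitelisted(public_docs_json, "docs/benchmarks/baseline.md"))  # True
print(is_whitelisted(public_docs_json, "docs/internal/notes.md"))       # False
```

Keeping the whitelist in the owning repo means the same review that gates the harness and baseline file also gates what becomes a public performance claim.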

This matches the platform publishing model used across OSS, IndexBus, Enterprise, and Cloud.

Provenance

Need the canonical source? Use the public hub to orient yourself, then jump to repo-owned docs or rustdoc when you need contract-level detail.