Router counters reference (v1, best-effort)
Synced from repo docs
This page is synced from docs/ops/router-counters.md via docs/public-docs.json. Edit the owning repo source instead of this generated copy. GitHub source: https://github.com/byteor-systems/indexbus/blob/master/docs/ops/router-counters.md
This document defines the intended meaning of counters printed by the v1 router loop tools.
It is best-effort guidance. Counters are an operational aid and may evolve; they must not contradict the normative v1 spec.
Related:
- v1 ops triage: ./v1-triage.md
- v1 failure model: ../spec/v1-failure-lifecycle.md
Rustdoc entry points
For the router implementation surface behind these counters, start with:
Key principles
- Counters are typically derived from queue head/tail deltas over an interval.
- Drop attribution is best-effort and must not be treated as a strict accounting contract.
- In broadcast mode, per-consumer attempts can exceed routed messages.
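To make the first principle concrete, here is a minimal sketch (function and variable names are assumptions, not the real implementation) of deriving a per-second rate from two samples of a free-running, wrapping `u64` tail counter:

```rust
// Sketch (assumed names): derive a per-second rate from two samples of a
// monotonically increasing, wrapping u64 counter, as the "head/tail deltas
// over an interval" principle describes.
fn rate_per_sec(prev_tail: u64, cur_tail: u64, interval_secs: f64) -> f64 {
    // wrapping_sub yields the correct delta even if the counter wrapped
    // past u64::MAX between the two samples.
    let delta = cur_tail.wrapping_sub(prev_tail);
    delta as f64 / interval_secs
}

fn main() {
    // Normal case: 1_000 messages over 2 seconds => 500/sec.
    assert_eq!(rate_per_sec(10_000, 11_000, 2.0), 500.0);
    // Wrapped case: the counter passed u64::MAX between samples.
    assert_eq!(rate_per_sec(u64::MAX - 4, 5, 1.0), 10.0);
}
```

Because the counters are sampled, short bursts inside an interval are averaged out; treat any single interval's rate as approximate.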
Common fields
Names may vary slightly by binary/format.
- `sent/sec`: Approximate producer publish rate into the producer→router source queue.
  - Often derived from the source tail delta.
- `routed/sec`: Router throughput: messages dequeued from the producer→router queue.
- `delivered/sec`: Total successful per-consumer enqueues performed by the router.
  - In broadcast, can be greater than `routed/sec`.
- `recv/sec`: Approximate consumer dequeue rate (sum of consumer head deltas).
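The expected relationships between these fields can be sketched as follows (the struct and helper are illustrative assumptions, not the tool's real types):

```rust
// Sketch (assumed names): how the common fields relate over one interval.
// routed/sec is bounded above by sent/sec, while in broadcast mode one
// routed message fans out to up to one enqueue per consumer, so
// delivered/sec can exceed routed/sec.
struct IntervalCounters {
    sent: u64,
    routed: u64,
    delivered: u64,
}

// Deliveries expected in a fully successful broadcast interval.
fn expected_broadcast_delivered(routed: u64, consumers: u64) -> u64 {
    routed * consumers
}

fn main() {
    let c = IntervalCounters { sent: 1_000, routed: 900, delivered: 2_700 };
    assert!(c.routed <= c.sent);
    // Broadcast to 3 consumers with no drops: 900 routed => 2_700 delivered.
    assert_eq!(c.delivered, expected_broadcast_delivered(c.routed, 3));
}
```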
Drop counters
- `drops/sec`: Total messages the router dequeued and did not deliver (for the relevant definition of "deliver").
- `drops_full/sec`: Best-effort count of drops attributed to destination queues being full at routing time.
- `drops_no_credit/sec`: Best-effort count of drops attributed to consumers being over the credit limit.
- `drops_all_full/sec`: Work-queue specific: no eligible consumer had capacity/credit at that moment.
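A rough sketch of best-effort attribution (the enum, function, and the precedence between "no credit" and "destination full" are assumptions for illustration; the real router's ordering may differ, and buckets need not reconcile exactly with `drops/sec`):

```rust
// Sketch (hypothetical names): best-effort drop attribution at routing time.
// This is an operational aid, not a strict accounting contract.
#[derive(Debug, PartialEq)]
enum DropReason {
    NoCredit, // counted under drops_no_credit/sec
    DestFull, // counted under drops_full/sec
}

// Returns None when the message was delivered (nothing to attribute).
// Precedence here (credit checked first) is an assumption.
fn attribute(over_credit: bool, dest_full: bool) -> Option<DropReason> {
    if over_credit {
        Some(DropReason::NoCredit)
    } else if dest_full {
        Some(DropReason::DestFull)
    } else {
        None
    }
}

fn main() {
    assert_eq!(attribute(true, false), Some(DropReason::NoCredit));
    assert_eq!(attribute(false, true), Some(DropReason::DestFull));
    assert_eq!(attribute(false, false), None);
}
```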
Credit counters
- `credit_waits/sec`: Work-queue: iterations where policy caused the router to wait/park due to credit/full pressure.
- `detaches/sec`: Number of detaches performed by a detach-capable credit policy (router-local).
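One way to picture how these two counters relate (the names, the threshold parameter, and the escalation rule are purely hypothetical; the real policy is defined by the router, not this sketch):

```rust
// Sketch (hypothetical policy): a detach-capable credit policy might park
// while blocked iterations are rare, then detach after sustained pressure.
// Each Park would bump credit_waits/sec; each Detach would bump detaches/sec.
#[derive(Debug, PartialEq)]
enum CreditAction {
    Park,
    Detach,
}

fn on_all_blocked(blocked_iters: u32, detach_after: u32) -> CreditAction {
    if blocked_iters >= detach_after {
        CreditAction::Detach
    } else {
        CreditAction::Park
    }
}

fn main() {
    assert_eq!(on_all_blocked(1, 8), CreditAction::Park);
    assert_eq!(on_all_blocked(8, 8), CreditAction::Detach);
}
```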
Queue depth fields
- `qdepth: src_*`: Source backlog proxy.
- `qdepth: consumers=[..]`: Per-consumer backlog proxy.

Depth is computed using wrapping subtraction on head/tail counters.
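The wrapping subtraction mentioned above can be sketched as (assumed convention: `tail` counts enqueues, `head` counts dequeues, both free-running):

```rust
// Sketch: queue depth as a wrapping head/tail delta. Both counters are
// free-running u64s that may wrap, so plain subtraction would be wrong
// (and would panic in debug builds) after a wrap.
fn depth(head: u64, tail: u64) -> u64 {
    tail.wrapping_sub(head)
}

fn main() {
    // 164 enqueued, 100 dequeued => 64 in flight.
    assert_eq!(depth(100, 164), 64);
    // Still correct after the tail wraps past u64::MAX.
    assert_eq!(depth(u64::MAX - 1, 6), 8);
}
```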
How to interpret safely
- Rising `qdepth: src_*` with low `routed/sec` ⇒ router bottleneck or router not running.
- Rising consumer depths with stable `routed/sec` ⇒ consumer bottleneck.
- Drops rising with consumer depths near capacity ⇒ overload; apply explicit policy (drop/park/detach at edges).
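The triage rules above can be written as a mechanical check (the enum, predicate names, and rule ordering are assumptions for illustration; real triage should also consult ./v1-triage.md):

```rust
// Sketch (hypothetical names): coarse diagnosis from interval-level
// observations, mirroring the three interpretation rules above.
#[derive(Debug, PartialEq)]
enum Diagnosis {
    RouterBottleneck,
    ConsumerBottleneck,
    Overload,
    Inconclusive,
}

fn triage(
    src_depth_rising: bool,
    consumer_depth_rising: bool,
    routed_low: bool,
    drops_rising: bool,
    near_capacity: bool,
) -> Diagnosis {
    if drops_rising && near_capacity {
        Diagnosis::Overload
    } else if src_depth_rising && routed_low {
        Diagnosis::RouterBottleneck
    } else if consumer_depth_rising && !routed_low {
        Diagnosis::ConsumerBottleneck
    } else {
        Diagnosis::Inconclusive
    }
}

fn main() {
    assert_eq!(triage(true, false, true, false, false), Diagnosis::RouterBottleneck);
    assert_eq!(triage(false, true, false, false, false), Diagnosis::ConsumerBottleneck);
    assert_eq!(triage(false, true, false, true, true), Diagnosis::Overload);
}
```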