
feat: Upgrade RuVector crates to v2.0.5 + adopt SONA & coherence (ADR-067) #296

@ruvnet

Description


What this is about

We vendor 5 core crates from ruvector for signal processing, subcarrier analysis, and cross-viewpoint fusion. They're currently pinned at v2.0.4 — upstream has moved to v2.0.5 with performance improvements and new capabilities relevant to our detection pipeline.

Full technical details: docs/adr/ADR-067-ruvector-v2.0.5-upgrade.md


Proposed changes (4 phases)

Phase 1 — Version bump (quick win)

What: Update 5 crate versions from 2.0.4 → 2.0.5 in Cargo.toml.

Why: The ruvector-mincut crate got 10-30% faster by switching to a flat capacity matrix with allocation reuse. Since we run min-cut subcarrier partitioning on every CSI frame, this is a free performance gain. Also picks up security fixes (removed unsafe indexing, fixed a WASM panic).

Risk: Very low — semver minor bump, no API changes.
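
Concretely, the Phase 1 change is a handful of one-line edits in Cargo.toml. Only two of the five vendored crates are named in this issue, so the entries below are a sketch; the remaining crates get the same bump:

```toml
# Cargo.toml (sketch; exact crate list assumed, only two are named in this issue)
[dependencies]
ruvector-mincut = "2.0.5"  # was 2.0.4; flat capacity matrix, 10-30% faster
ruvector-core   = "2.0.5"  # was 2.0.4
# ...remaining vendored ruvector crates bumped identically
```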


Phase 2 — Add spectral coherence for multi-node stability

What: Add the new ruvector-coherence crate (with spectral feature) to measure how consistent the CSI signal is across multiple ESP32 nodes.

Why: Multiple users report flickering detection with 2+ nodes (#292, #280, #237). The current coherence check is phasor-based (phase alignment), which only works for a single link. Spectral coherence uses graph theory (the Fiedler eigenvalue) to detect when a node's signal quality drops or a new reflector appears, giving the detection pipeline a "confidence check" before counting persons.

In plain terms: Right now if one ESP32 sends noisy data, the system can't tell and just averages it in. With spectral coherence, the system can say "node 2's signal structure changed — downweight it" before it causes a false person detection.
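
To make the Fiedler-eigenvalue idea concrete, here is a minimal, dependency-free Rust sketch (an illustration of the concept, not the ruvector-coherence API). It estimates the Fiedler value (algebraic connectivity) of a small node-coherence graph: a weakly connected "path" of three nodes scores 1.0, while three mutually coherent nodes score 3.0. A drop in this value is the signal-structure change the check would react to.

```rust
/// Multiply a symmetric matrix by a vector.
fn matvec(m: &[Vec<f64>], v: &[f64]) -> Vec<f64> {
    m.iter()
        .map(|row| row.iter().zip(v.iter()).map(|(a, b)| a * b).sum::<f64>())
        .collect()
}

/// Second-smallest eigenvalue of a graph Laplacian `lap` (the Fiedler value).
/// Power iteration on M = bound*I - L, with the all-ones eigenvector
/// (eigenvalue 0 of L) projected out at each step.
fn fiedler_value(lap: &[Vec<f64>]) -> f64 {
    let n = lap.len();
    // Gershgorin bound on lambda_max(L): 2 * max diagonal (degree) entry.
    let bound: f64 = lap
        .iter()
        .enumerate()
        .map(|(i, row)| 2.0 * row[i])
        .fold(0.0, f64::max);
    let mut v: Vec<f64> = (0..n).map(|i| (i as f64 + 1.0).sin()).collect();
    for _ in 0..2000 {
        // Project out the all-ones direction (the trivial eigenvector).
        let mean = v.iter().sum::<f64>() / n as f64;
        for x in v.iter_mut() {
            *x -= mean;
        }
        // Apply M = bound*I - L, then normalize.
        let lv = matvec(lap, &v);
        for i in 0..n {
            v[i] = bound * v[i] - lv[i];
        }
        let norm = v.iter().map(|x| x * x).sum::<f64>().sqrt();
        for x in v.iter_mut() {
            *x /= norm;
        }
    }
    // Rayleigh quotient of L at the converged vector is lambda_2.
    let lv = matvec(lap, &v);
    v.iter().zip(&lv).map(|(a, b)| a * b).sum()
}

fn main() {
    // Path graph 1-2-3 (weak link between ends): Fiedler value 1.
    let path3 = vec![
        vec![1.0, -1.0, 0.0],
        vec![-1.0, 2.0, -1.0],
        vec![0.0, -1.0, 1.0],
    ];
    // Triangle (all three nodes mutually coherent): Fiedler value 3.
    let tri3 = vec![
        vec![2.0, -1.0, -1.0],
        vec![-1.0, 2.0, -1.0],
        vec![-1.0, -1.0, 2.0],
    ];
    println!("path lambda_2 = {:.3}", fiedler_value(&path3)); // ~1.000
    println!("tri  lambda_2 = {:.3}", fiedler_value(&tri3));  // ~3.000
}
```

The real crate presumably builds the graph from pairwise CSI similarity between nodes; the point of the sketch is only that the second eigenvalue of the Laplacian quantifies how well connected (coherent) the whole node set is.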


Phase 3 — SONA adaptive learning (replaces manual training)

What: Add the sona crate (Self-Optimizing Neural Architecture) as an optional backend for the adaptive classifier.

Why: Today, users must manually record labeled CSI data ("empty room for 60s", "one person walking for 60s", etc.) then hit the /train endpoint. Most users don't know this — issues #288, #249 show people confused about why detection is poor out of the box.

SONA can learn from implicit feedback (was the person count stable? did it oscillate?) and persist learned patterns across server restarts. It also uses EWC++ to avoid "forgetting" — so calibrating for your living room won't overwrite what it learned for the office.

In plain terms: Instead of "record 3 labeled sessions then train", the system would gradually learn your room's signal patterns just from normal use, and remember them next time you restart.
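
As a toy illustration of the implicit-feedback signal described above (was the count stable, or did it oscillate?), here is one possible stability metric. The function is a hypothetical helper, not part of the sona crate's API:

```rust
/// Fraction of adjacent frames where the person-count estimate changed.
/// 0.0 = perfectly stable window, 1.0 = the count flipped on every frame.
/// A high value over a window is implicit negative feedback.
fn instability(counts: &[u32]) -> f64 {
    if counts.len() < 2 {
        return 0.0;
    }
    let flips = counts.windows(2).filter(|w| w[0] != w[1]).count();
    flips as f64 / (counts.len() - 1) as f64
}

fn main() {
    let stable = [2u32, 2, 2, 2, 2, 2, 2, 2];
    let oscillating = [1u32, 2, 1, 2, 2, 1, 2, 1];
    println!("stable window:      {:.2}", instability(&stable));      // 0.00
    println!("oscillating window: {:.2}", instability(&oscillating)); // 0.86
}
```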


Phase 4 — CSI embeddings (exploratory)

What: Evaluate ruvector-core's ONNX embedding engine for learned CSI feature vectors.

Why: The current person detection uses 4 hand-tuned features (variance, change points, motion band power, spectral power) with fixed scaling. Learned embeddings could capture room geometry and person signatures in a single vector, enabling similarity search ("this frame looks like a known 2-person pattern").

Status: Research phase. Requires training data collection. Not a near-term priority.
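
The similarity-search idea can be sketched with plain cosine similarity over embedding vectors. The 4-dimensional vectors below are made up for illustration; a real ONNX embedding would be much larger:

```rust
/// Cosine similarity between two embedding vectors (1.0 = identical direction).
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

fn main() {
    // Hypothetical stored embedding of a known 2-person pattern.
    let known_two_person = [0.9f32, 0.1, 0.4, 0.2];
    // Embedding of the current CSI frame.
    let frame = [0.85f32, 0.15, 0.38, 0.22];
    let sim = cosine(&frame, &known_two_person);
    println!("similarity to known 2-person pattern: {:.3}", sim);
}
```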


Priority

| Phase | Effort | Priority | Depends on |
|-------|--------|----------|------------|
| 1 | ~1 hour | High | Nothing |
| 2 | ~1 day | Medium | Phase 1 |
| 3 | ~3 days | Medium | Phase 1 |
| 4 | ~1-2 weeks | Low | Phase 3 |
