Background
RT-Claw already has a solid foundation: OSAL abstraction across FreeRTOS and RT-Thread,
gateway message routing with service registry, Tool Use framework with SWARM_CAP_*
capability declarations, AI chat service, IM integration (Telegram/Feishu), and
multi-platform support (ESP32-C3/S3/vexpress-a9) with QEMU.
What is still missing is the ability to observe RTOS internals and use that data
for AI-assisted debugging. Today, embedded developers still rely on printf, manual
breakpoints, and log analysis to diagnose issues like thread starvation, priority
inversion, memory leaks, and interrupt storms. These techniques do not scale to
multi-core heterogeneous architectures, OTA-driven regression, or edge AI workloads.
Meanwhile, the building blocks for a better approach already exist:
- RT-Thread v5.2+ provides CPU/thread usage tracking, enhanced backtrace, and stack
overflow detection
- RT-Trace and SystemView offer real-time thread/interrupt visualization
- OpenClaw-style AI Agent frameworks (persistent memory, tool use, skills) are maturing
- RT-Claw's own architecture (OSAL, gateway, Tool Use) is designed for extensibility
This issue proposes a phased technical roadmap that evolves RT-Claw from a software
agent running on RTOS into a hardware-aware debugging agent that can observe,
analyze, report, and eventually coordinate across devices.
Design Principle
Each phase must deliver standalone value. No phase should depend on all subsequent
phases being completed to be useful.
observe (see it) → analyze (understand it) → report (send it) → coordinate (link it) → ecosystem (scale it)
Phase 1: RTOS Observability Layer
Goal
Give RT-Claw the ability to see RTOS internal state through a unified API.
New Interface: claw_observe.h
```c
typedef struct {
    char     name[CLAW_THREAD_NAME_MAX];
    uint32_t state;          /* running / ready / blocked / suspended */
    uint32_t priority;
    uint32_t stack_size;
    uint32_t stack_used;     /* high water mark */
    uint32_t cpu_usage;      /* per-thread, 0-1000 (0.1% resolution) */
} ClawThreadInfo;

typedef struct {
    uint32_t total_bytes;
    uint32_t used_bytes;
    uint32_t peak_bytes;
} ClawMemInfo;

typedef struct {
    uint32_t irq_count;      /* total since boot */
    uint32_t max_nesting;    /* deepest observed nesting */
    uint32_t max_latency_us; /* longest ISR duration */
} ClawIrqInfo;

/* Returns system-wide CPU load, 0-100 */
int claw_observe_cpu_load(void);

/* Fills buf with up to max thread entries, returns actual count */
int claw_observe_thread_list(ClawThreadInfo *buf, int max);

/* Fills info with current memory statistics */
int claw_observe_mem_usage(ClawMemInfo *info);

/* Fills info with interrupt statistics */
int claw_observe_irq_stats(ClawIrqInfo *info);
```
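As a sanity check of the interface shape, here is a minimal consumer-side sketch: it walks the thread list and flags the thread closest to stack overflow. The backend here is a hypothetical stub returning fake data (the real implementation is per-RTOS), and the `CLAW_THREAD_NAME_MAX` value is an assumption:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CLAW_THREAD_NAME_MAX 16 /* assumed value; not fixed by this issue */

typedef struct {
    char     name[CLAW_THREAD_NAME_MAX];
    uint32_t state;
    uint32_t priority;
    uint32_t stack_size; /* bytes */
    uint32_t stack_used; /* high water mark, bytes */
    uint32_t cpu_usage;  /* 0-1000, 0.1% units */
} ClawThreadInfo;

/* Hypothetical stub standing in for the real per-RTOS backend. */
static int claw_observe_thread_list(ClawThreadInfo *buf, int max) {
    static const ClawThreadInfo fake[] = {
        { "idle", 1, 31, 1024,  200, 850 },
        { "net",  2, 10, 4096, 3900,  90 },
    };
    int n = (int)(sizeof fake / sizeof fake[0]);
    if (n > max) n = max;
    memcpy(buf, fake, (size_t)n * sizeof fake[0]);
    return n;
}

/* Returns the index of the thread with the highest stack utilization,
 * or -1 if the list is empty. */
static int claw_worst_stack(const ClawThreadInfo *t, int n) {
    int worst = -1;
    uint32_t worst_permille = 0;
    for (int i = 0; i < n; i++) {
        uint32_t pm = t[i].stack_size
            ? (uint32_t)(((uint64_t)t[i].stack_used * 1000u) / t[i].stack_size)
            : 0;
        if (pm > worst_permille) { worst_permille = pm; worst = i; }
    }
    return worst;
}
```

A `claw observe` shell handler could use exactly this loop to highlight stack-pressure candidates before asking the AI service anything.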
Dual-RTOS Implementation
| Metric | RT-Thread | FreeRTOS |
| --- | --- | --- |
| CPU load | `cpuusage` component | `vTaskGetRunTimeStats()` |
| Thread state | `rt_thread_get_info()` | `uxTaskGetSystemState()` |
| Memory | `rt_memory_info()` | `xPortGetFreeHeapSize()` |
| Stack watermark | thread stack fields | `uxTaskGetStackHighWaterMark()` |
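On the FreeRTOS side, the per-thread `cpu_usage` field (0-1000) would be derived from each task's run-time counter against the total run time that `uxTaskGetSystemState()` reports. A minimal, RTOS-agnostic sketch of just that conversion, using a 64-bit intermediate so the multiply cannot overflow 32-bit counters:

```c
#include <assert.h>
#include <stdint.h>

/* Convert a per-thread run-time counter into the 0-1000 permille
 * encoding used by ClawThreadInfo.cpu_usage. */
static uint32_t claw_cpu_permille(uint32_t thread_runtime,
                                  uint32_t total_runtime) {
    if (total_runtime == 0) {
        return 0; /* run-time stats not enabled yet, or first sample */
    }
    return (uint32_t)(((uint64_t)thread_runtime * 1000u) / total_runtime);
}
```

The RT-Thread backend can feed the same helper from the `cpuusage` component's counters, keeping the encoding identical across both OSAL implementations.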
Constraints
- RAM overhead: < 2 KB for observation data buffer
- Sampling rate: configurable, default 1 Hz, max 100 Hz
- Observation task runs at lowest priority and must not affect RTOS real-time behavior
- Follows OSAL pattern: interface header + per-RTOS implementation files
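The 2 KB budget directly bounds how many `ClawThreadInfo` entries the observation buffer can hold. A sketch of that arithmetic, assuming a hypothetical `CLAW_THREAD_NAME_MAX` of 16 (this issue does not fix the value):

```c
#include <assert.h>
#include <stdint.h>

#define CLAW_THREAD_NAME_MAX    16   /* assumed; not fixed by this issue */
#define CLAW_OBSERVE_RAM_BUDGET 2048 /* the "< 2 KB" constraint */

typedef struct {
    char     name[CLAW_THREAD_NAME_MAX];
    uint32_t state;
    uint32_t priority;
    uint32_t stack_size;
    uint32_t stack_used;
    uint32_t cpu_usage;
} ClawThreadInfo;

/* Largest thread table that still fits the RAM budget. */
#define CLAW_OBSERVE_MAX_THREADS \
    (CLAW_OBSERVE_RAM_BUDGET / sizeof(ClawThreadInfo))

static ClawThreadInfo g_thread_buf[CLAW_OBSERVE_MAX_THREADS];
```

Under these assumptions the struct is 36 bytes, so the static buffer tops out around 56 threads, comfortably above typical MCU workloads.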
Deliverable
Shell command claw observe displays live CPU load, thread list with stack usage,
memory stats, and IRQ info on QEMU.
Phase 2: AI Debugging Assistant (MVP)
Goal
Single-device conversational RTOS debugging. This is the core value validation point.
Architecture
```
User: "Why is CPU load high?"
  │
  ├─→ Gateway routes to AI service
  │     │
  │     ├─→ AI service invokes observe_tool (registered via Tool Use)
  │     │     │
  │     │     └─→ Collects CPU / thread / memory / IRQ data (Phase 1 API)
  │     │
  │     ├─→ Formats observation data as structured prompt context
  │     │
  │     └─→ Calls cloud LLM → returns diagnosis
  │
  └─→ Reply via Telegram / Feishu / Shell
```
Key Work
- New `observe_tool.c` registered in the Tool Use framework with `SWARM_CAP_OBSERVE`
- Debugging prompt templates that format observation data for LLM comprehension
- No local LLM needed — reuse existing cloud AI service pipeline
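The prompt-template step might render observations into a compact key-value block that the AI service splices into the LLM context. A sketch with illustrative field names (nothing here is a settled format):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint32_t total_bytes;
    uint32_t used_bytes;
    uint32_t peak_bytes;
} ClawMemInfo;

/* Render observation data as a text block for the LLM prompt.
 * Returns the snprintf result: bytes written, or >= cap on truncation. */
static int claw_format_prompt_ctx(char *out, size_t cap,
                                  int cpu_load, const ClawMemInfo *mem) {
    return snprintf(out, cap,
                    "[rtos-observations]\n"
                    "cpu_load_pct: %d\n"
                    "heap_used_bytes: %u\n"
                    "heap_total_bytes: %u\n"
                    "heap_peak_bytes: %u\n",
                    cpu_load,
                    (unsigned)mem->used_bytes,
                    (unsigned)mem->total_bytes,
                    (unsigned)mem->peak_bytes);
}
```

Keeping the block line-oriented and labeled makes it cheap to extend with thread and IRQ sections later without retraining any prompt.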
Demo Scenario
- Run RT-Claw on QEMU, deliberately induce thread deadlock / memory leak / priority
inversion
- Ask via Telegram: "What is the system status?" → RT-Claw auto-collects RTOS data →
sends to cloud LLM → returns root cause analysis with actionable suggestions
Deliverable
End-to-end demo: user asks a debugging question, RT-Claw observes RTOS state, AI
analyzes, user receives diagnosis — all within the existing IM channels.
Phase 3: Cloud Hub and Data Pipeline
Goal
Continuous observation data reporting with cloud-side aggregation and visualization.
Technical Choices
| Component | Choice | Rationale |
| --- | --- | --- |
| Transport | MQTT | Lightweight, QoS support, embedded-friendly |
| Backend | Python FastAPI | Fast prototyping |
| Time-series DB | InfluxDB | Purpose-built for metrics |
| Visualization | Grafana | Zero frontend development needed |
Device-Side Changes
- Extend `claw_net.h` with MQTT client interface: `claw_mqtt_publish()`
- Periodic observation data upload (reuses Phase 1 collection)
- ESP32 real hardware: WiFi + MQTT; QEMU: via `api-proxy.py` relay
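The payload handed to the proposed `claw_mqtt_publish()` could be a small JSON document; the field names below are assumptions, not a settled schema:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build the JSON payload a periodic reporter could publish, e.g. to a
 * topic like "rtclaw/<device_id>/metrics" (topic scheme is hypothetical). */
static int claw_build_metrics_json(char *out, size_t cap,
                                   const char *device_id,
                                   int cpu_load, uint32_t heap_used) {
    return snprintf(out, cap,
                    "{\"device\":\"%s\",\"cpu_load\":%d,\"heap_used\":%u}",
                    device_id, cpu_load, (unsigned)heap_used);
}
```

At the default 1 Hz sampling rate this keeps each publish well under typical MQTT payload limits, and QoS 1 is enough since occasional duplicate samples are harmless to a time-series store.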
Cloud-Side (separate repo rt-claw-hub)
- MQTT broker ingestion
- FastAPI data receive and query API
- Grafana dashboard templates
- Anomaly alerting: detect patterns like periodic CPU spikes, memory creep
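Memory-creep detection can start very simple: flag a window of used-heap samples that only ever grows and gains more than a threshold. The real detector would live in `rt-claw-hub` (Python side), but the logic is sketched here in C for consistency with the rest of this issue; the thresholds are placeholders:

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 if the samples grow monotonically across the window by at
 * least min_growth bytes (a crude "memory creep" signature), else 0. */
static int claw_detect_mem_creep(const uint32_t *samples, int n,
                                 uint32_t min_growth) {
    if (n < 2) return 0;
    for (int i = 1; i < n; i++) {
        if (samples[i] < samples[i - 1]) return 0; /* memory was freed: not creep */
    }
    return (samples[n - 1] - samples[0]) >= min_growth;
}
```

A production rule would likely use a fitted slope over a longer window to tolerate jitter, but even this version catches the classic slow-leak pattern.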
Deliverable
Web dashboard showing live RTOS state, historical trends, and anomaly alerts for
connected devices.
Phase 4: Multi-Device Coordination
Goal
Multiple RT-Claw nodes discover each other and collaborate on diagnosis.
Architecture (Star Topology, Hub-Relayed)
```
Node A ──┐
         ├──→ Cloud Hub (message routing) ──→ AI engine
Node B ──┘                  │
                            ▼
             Aggregated cross-device analysis
```
Nodes do not connect directly to each other. The Hub handles discovery, routing, and
data correlation. This simplifies networking and security.
Key Work
- Device registry: node registers on boot (device ID + `SWARM_CAP_*` declarations)
- Coordination messages: extend Gateway message types
  - `CLAW_MSG_OBSERVE_REQUEST` — request observation data from another device
  - `CLAW_MSG_OBSERVE_RESPONSE` — return observation data
  - `CLAW_MSG_DIAGNOSIS_SHARE` — broadcast diagnostic conclusions
- Joint analysis: Hub aggregates multi-device data → builds cross-device context →
LLM performs joint root cause analysis
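The three message types might extend the Gateway enum roughly as follows; the numeric base and the envelope fields are hypothetical and would need to slot in after the existing `CLAW_MSG_*` values:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical extension of the Gateway message enum. The 0x40 base is a
 * placeholder: real values must be allocated after existing CLAW_MSG_* IDs. */
typedef enum {
    CLAW_MSG_OBSERVE_REQUEST = 0x40,
    CLAW_MSG_OBSERVE_RESPONSE,
    CLAW_MSG_DIAGNOSIS_SHARE,
} ClawCoordMsgType;

/* Minimal hub-relayed envelope: who asks whom, and which capabilities
 * the target must hold to answer. */
typedef struct {
    ClawCoordMsgType type;
    char     src_device[16]; /* requesting node ID */
    char     dst_device[16]; /* target node ID (hub resolves routing) */
    uint32_t caps_needed;    /* SWARM_CAP_* bitmask */
} ClawCoordMsg;
```

Because all traffic transits the Hub, the envelope never needs peer addresses or keys, which is exactly the simplification the star topology buys.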
Demo Scenario
- Two QEMU instances running different workloads
- Node A detects CAN bus anomaly → requests Node B power monitoring data → Hub performs
joint AI analysis → pinpoints root cause across both nodes
Deliverable
Two-device coordinated diagnosis demo on QEMU.
Phase 5: Skill Ecosystem and Edge Intelligence (2028+)
Goal
Community-driven evolution with downloadable skill packs and on-device inference.
Components
- Skill pack standard: downloadable debugging analysis modules built on Tool Use
framework (.claw-skill format)
- RT-SkillsHub platform: publish / download / rate skill packs
- Edge TinyML: deploy lightweight anomaly detection models to MCU (TFLite Micro),
local pre-screening + cloud deep analysis
- Multi-RTOS expansion: Zephyr OSAL backend
This phase requires community participation and has no hard deadline.
Timeline Overview
```
2026 Q1-Q2   Q2-Q3      Q3-Q4        2027 H1     2027 H2     2028+
    │          │           │             │           │           │
    ▼          ▼           ▼             ▼           ▼           ▼
Phase 0     Phase 1     Phase 2      Phase 3     Phase 4     Phase 5
Stabilize   Observe     AI Debug     Cloud Hub   Multi-Node  Ecosystem
v0.3.0      v0.4.0      v0.5.0 ★MVP  v0.6.0      v0.7.0      v1.0.0
solid core  see it      understand   send it     link it     scale it
```
Phase 2 (v0.5.0) is the critical milestone. If conversational RTOS debugging
produces a compelling demo, subsequent phases have the momentum to proceed.
Relationship to Existing Issues
Phase 1's observability layer feeds directly into the RT Event Fabric's state awareness: the observe API provides the data that event classification (P0-P3) needs for informed prioritization.
Phase 2's AI debugging assistant naturally fits into the Slow AI Plane described in "architecture: establish a hardware-first real-time interaction loop" (#2), consuming only events that warrant AI reasoning.
Origin
This roadmap is distilled from a vision document that explored the long-term potential
of RT-Claw as a hardware-aware AI debugging platform. The original vision described
a 2035 scenario where debugging instruments (oscilloscopes, logic analyzers, JTAG
debuggers) become autonomous AI agents that collaborate through cloud chat rooms.
This issue retains the technically grounded elements and organizes them into an
incremental, deliverable-driven plan. Speculative elements (device personalities,
social graphs, consciousness protocols) were removed in favor of concrete APIs,
measurable constraints, and working demos.