diff --git a/.claude/commands/invit.md b/.claude/commands/invit.md
deleted file mode 100644
index f0e34ec51..000000000
--- a/.claude/commands/invit.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-## osascript vs Task
-
-| Scenario | Tool | Notes |
-|------|------|------|
-| Meet-up (go talk to someone) | osascript | Launches a new terminal; the user drives the interaction |
-| Worktree initialization | osascript | Launches a new terminal; the user drives the interaction |
-| Work (task execution) | Task | A subagent executes and returns the result |
-
-❌ Never use osascript to launch an agent for work, and no subagent may ever use this command.
-
-## Meet-up vs Delegation
-
-| Phrasing | Meaning | Tool |
-|------|------|------|
-| "Bring [WHO] over" | They come to me; the user drives the interaction | osascript |
-| "Go find [WHO]" / "I want to talk to [WHO]" | The user goes to them; the user drives the interaction | osascript |
-| "You talk to [WHO]" / "Go ask [WHO]" | Claude handles it and reports back with the information | Task |
-
-### Meet-up (osascript)
-
-The user wants to talk in person → launch a new terminal; the user drives.
-
-Flow:
-1. Read `~/.claude/agents/<agent>.md` to get the workstation path (it is in the description)
-2. Use AppleScript to open a new terminal, cd into that directory, and start claude
-3. Pass `--agent` to select the agent, along with a wake-up prompt
-4. Default to `--model opus` (a meet-up is active conversation, so use the strongest model)
-
-Wake-up prompt requirements:
-- Tell the agent why it was called (relay the user's reason if given; otherwise say "the user needs you")
-- ❌ Do not press the user for specifics
-- ❌ Do not ask back-questions like "is there anything I can help with?"
-
-```bash
-osascript -e 'tell app "Terminal" to do script "cd <dir> && claude --agent <agent> --model opus \"<wake-up prompt>\""'
-```
-
-### Delegation (Task)
-
-The user asks Claude to fetch information and report back → call the matching agent via the Task tool
-
-```
-Task(subagent_type=<agent>, prompt="<task description>")
-```
diff --git a/.claude/commands/spec.md b/.claude/commands/spec.md
deleted file mode 100644
index 8540c066f..000000000
--- a/.claude/commands/spec.md
+++ /dev/null
@@ -1,76 +0,0 @@
-# Update the Leon Architecture Spec
-
-Maintain the architecture ground truth in `teams/specs/current/`.
-
-## Arguments
-
-$ARGUMENTS = optional focus scope (e.g. `providers`, `sandbox lifecycle`, `commands`). No argument means a full refresh.
-
-## Write Boundary
-
-Only `teams/specs/current/*.md` may be written. Never change code or any other directory.
-
-## Hard Rules
-
-1. Every fact must carry `path:line` evidence; otherwise mark it `UNKNOWN`.
-2. Code > docs > logs > plans. On conflict, the code wins.
-3. Overwrite files completely; never append.
-4. State uncertainty plainly; never guess.
-5. Write nothing about the spec system itself: Leon architecture facts only.
-6. Idempotent: if existing content already reflects the code accurately, leave it untouched. Do not rewrite for its own sake.
-7.
Self-cleaning: let stale facts fall away naturally when overwriting. State in the report what was removed and why.
-
-## Target Files
-
-```
-teams/specs/current/
-├── 00_scope.md        # what this pass analyzed and what it skipped
-├── 10_architecture.md # component tree + wiring
-├── 20_lifecycle.md    # session/terminal/lease lifecycle
-├── 30_commands.md     # command execution flow
-├── 40_providers.md    # provider capability matrix
-├── 50_data.md         # schemas, state, persistence
-├── 60_tests.md        # test evidence
-└── 90_gaps.md         # risks, contradictions, unknowns
-```
-
-## Execution Flow
-
-### 1. Understand the current state
-
-Read every file under `teams/specs/current/` to see what is recorded today.
-
-### 2. Explore the code
-
-Decide for yourself which source files to read. Core entry points for reference:
-- `agent.py` - agent core
-- `sandbox/` - sandbox system
-- `middleware/` - middleware stack
-- `core/command/` - command execution
-- `backend/web/` - Web API
-- `sandbox/providers/` - individual provider implementations
-- `tests/` - related tests
-
-If $ARGUMENTS names a scope, focus on it; otherwise scan broadly.
-
-### 3. Overwrite current/*.md
-
-For each file: read the code → extract facts (with `path:line`) → overwrite the whole file.
-
-Format requirements:
-- Organize by section, not one global numbered list
-- Put evidence inline next to each fact
-- Every `UNKNOWN` must say why
-- Stay terse; no filler
-
-### 4. Update 00_scope.md
-
-Record which files were actually read this pass, what was skipped, and why.
-
-### 5. Update 90_gaps.md
-
-Record discovered risks, contradictions between code and docs, and areas still unknown.
-
-### 6. Report
-
-Briefly state what changed, what new facts were found, and what is still UNKNOWN.
diff --git a/.claude/commands/test_leon.md b/.claude/commands/test_leon.md
deleted file mode 100644
index 2558f70e0..000000000
--- a/.claude/commands/test_leon.md
+++ /dev/null
@@ -1,174 +0,0 @@
-# Testing New Leon Features
-
-Use this command while developing Leon to verify that new features actually work.
-
-**Important**: middleware-level updates must pass this test before delivery.
-
-## Usage
-
-```bash
-/test-leon              # middleware-level update: run the full test
-/test-leon <scenario>   # test the given scenario
-```
-
-## When It Is Mandatory
-
-The full test **must** be run before delivering any of the following:
-
-- New or modified modules under `middleware/`
-- Changes to core logic in `agent.py`
-- Changes to `tui/app.py` or `tui/runner.py`
-- New tools or changed tool behavior
-- Changes to checkpointer / session logic
-
-## Execution Flow
-
-### 1. Clean the environment
-
-Kill any stuck processes before testing:
-
-```bash
-# Kill all leonai-related processes
-pkill -9 -f "leonai" 2>/dev/null || true
-pkill -9 -f "context7\|upstash" 2>/dev/null || true
-sleep 1
-```
-
-### 2. Reinstall
-
-Before every test run, execute the following **in order** so you are testing the latest code:
-
-```bash
-# 1. Clear the cache (mandatory! otherwise an old version may get installed)
-# If it says "Cache is currently in-use", add --force
-uv cache clean leonai --force
-
-# 2. Force reinstall (must use --force)
-uv tool install .
--force
-```
-
-⚠️ Notes:
-- `uv cache clean` must include `--force`, or an in-use cache can make it hang
-- `uv tool install` must include `--force`, or it will not overwrite the installed version
-
-### 3. Baseline Regression
-
-When invoked with no arguments, run the baseline regression first to confirm nothing that already worked has broken:
-
-#### 3.1 Basic response
-
-```bash
-leonai run -d "Hello, introduce yourself in one sentence"
-```
-
-Verify: the agent responds normally with no errors.
-
-#### 3.2 Tool call
-
-```bash
-leonai run -d "List the files in the current directory"
-```
-
-Verify: `[TOOL_CALL] list_dir` appears and `[TOOL_RESULT]` returns the file list.
-
-#### 3.3 Command execution
-
-```bash
-leonai run -d 'Run with run_command: echo "hello"'
-```
-
-Verify: `[TOOL_CALL] run_command` appears, `[TOOL_RESULT]` returns `hello`, and there is no `[ERROR]`.
-
-#### 3.4 Multi-turn conversation
-
-```bash
-cat << 'EOF' | leonai run --stdin -d
-Hello
-
-List the Python files in the current directory
-
-Read the first 5 lines of agent.py
-EOF
-```
-
-Verify: all three turns complete and the tool calls are correct.
-
-#### 3.5 Thread persistence
-
-```bash
-leonai run --thread test-mem-$(date +%s) "Remember the number 42"
-# Note the thread id above, then:
-leonai run --thread <same-thread-id> "What number did I ask you to remember?"
-```
-
-Verify: the second turn correctly recalls 42.
-
-### 4. Change-Specific Tests (the core step)
-
-Once the baseline passes, you **must** design and run scenario tests that target this change.
-
-#### 4.1 Analyze the change
-
-```bash
-git diff HEAD           # uncommitted changes
-git diff HEAD~1 HEAD    # or the most recent commit
-```
-
-Read the diff and understand what changed and which tools/behaviors it affects.
-
-#### 4.2 Design test scenarios
-
-Build test cases that **exercise the changed code paths**. Principles:
-
-- At least one positive test per added/modified behavior (prove the feature works)
-- Cover the key error paths too (prove the error messages are right)
-- Results should be verifiable directly from `leonai run` output
-
-#### 4.3 Run the tests
-
-**Single-turn quick check**:
-
-```bash
-leonai run -d "<test message>"
-```
-
-**Multi-turn** (stdin):
-
-```bash
-cat << 'EOF' | leonai run --stdin -d --thread test-$(date +%s)
-<first message>
-
-<second message>
-EOF
-```
-
-**Persistence** (across commands):
-
-```bash
-leonai run --thread "<first turn>"
-leonai run --thread "<second turn, verification>"
-```
-
-**Interactive mode** (needs manual input, e.g. testing Queue Mode):
-
-```bash
-leonai run -i -d
-```
-
-#### 4.4 User-specified scenario
-
-If the user specified a scenario via `/test-leon <scenario>`, test that scenario first; the baseline regression may be trimmed as appropriate.
-
-### 5.
Result Analysis
-
-Check the debug output:
-- `[TOOL_CALL]` - was the tool invoked correctly
-- `[TOOL_RESULT]` - did the tool return the right result
-- `[QUEUE]` - does the queue state match expectations
-- `[ASSISTANT]` - is the AI response reasonable
-- `[SUMMARY]` - total turns and tool-call count
-
-## Pass Criteria
-
-- Baseline regression: every test runs without errors, tool calls behave as expected, multi-turn context is correct, thread persistence works
-- Change tests: the new feature/fix has been verified, and both positive and error paths behave as expected
\ No newline at end of file
diff --git a/.claude/commands/wtls.md b/.claude/commands/wtls.md
deleted file mode 100644
index 7f4bc886b..000000000
--- a/.claude/commands/wtls.md
+++ /dev/null
@@ -1,95 +0,0 @@
-# List All Worktree Statuses
-
-See the current state of every worktree at a glance and spot stale branches that can be cleaned up.
-
-## Step 0: Locate the main repo
-
-```bash
-MAIN_REPO=$(git worktree list | head -1 | awk '{print $1}')
-```
-
-This works from the main repo or from any worktree.
-
-## Step 1: Collect data in bulk
-
-For each worktree (skipping the main repo itself), run:
-
-```bash
-git worktree list --porcelain   # list all worktrees
-git fetch origin                # sync remote state
-
-# Collect per worktree:
-git -C <path> status --short                        # DIRTY
-git -C <path> rev-list origin/main..HEAD --count    # AHEAD
-git -C <path> rev-list HEAD..origin/main --count    # BEHIND
-git -C <path> log -1 --format="%ai"                 # last commit time
-git -C <path> log --format="%ai" $(git -C <path> merge-base HEAD origin/main) -1  # fork time
-gh pr view --json state,url,number -R -b 2>/dev/null   # PR state
-
-# Read the worktree config (only newly created worktrees have it):
-git -C <path> config --worktree --get worktree.ports.backend 2>/dev/null
-git -C <path> config --worktree --get worktree.ports.frontend 2>/dev/null
-git -C <path> config --worktree --get worktree.description 2>/dev/null
-```
-
-## Step 2: Output the table
-
-```
-worktree                    branch    ports      ahead behind dirty PR         description
-──────────────────────────────────────────────────────────────────────────────────────────
-~/worktrees/leon--feat-x    feat/x    8002:5174  +3    -0     ✗     #12 open   eval system work
-~/worktrees/leon--fix-y     fix/y     8003:5175  +0    -5     ✓     none       login bug fix
-worktrees/old-feat          old/feat  -          +1    -12    ✗     #8 merged  ⚠ (old path)
-```
-
-Status markers:
-- `⚠` = the PR is merged/closed; cleanup recommended
-- `✓` dirty = there are uncommitted changes
-- large behind count = far behind main; rebase recommended
-- `-` ports = old-path worktree with no port config
-
-## Step 3: Output a Mermaid timeline
-
-From the collected fork times and last-commit times, generate a gantt chart:
-
-```
-gantt chart rules:
-- one row per worktree
-- start =
分叉时间(从 main 创建分支的时间) -- 终点 = 最近 commit 时间(或今天) -- 颜色(section): - active = PR open - done = PR merged/closed(可清理) - crit = dirty(有未提交改动) - 默认 = 无 PR,开发中 -``` - -示例输出: - -```` -```mermaid -gantt - title Worktree 时间轴 - dateFormat YYYY-MM-DD - axisFormat %m-%d - - section active - feat/x (PR #12) :active, 2025-01-10, 2025-02-18 - - section crit - fix/y (dirty) :crit, 2025-01-20, 2025-02-19 - - section done - old/feat (merged) :done, 2024-12-01, 2025-01-15 -``` -```` - -## Step 4:清理建议 - -列出状态为 `merged` 或 `closed` 的 worktree,提示可执行 `wtrm`: - -``` -⚠ 以下 worktree 对应的 PR 已关闭,可以清理: - - worktrees/old-feat (old/feat) → PR #8 merged - 执行:/wtrm old/feat -``` diff --git a/.claude/commands/wtnew.md b/.claude/commands/wtnew.md deleted file mode 100644 index 59d9746cd..000000000 --- a/.claude/commands/wtnew.md +++ /dev/null @@ -1,178 +0,0 @@ -# 创建 Worktree - -基于最新 `origin/main` 创建隔离的 worktree 开发环境。 - -## 参数 - -`$ARGUMENTS` = 分支名(如 `feat/eval`、`yyh/fix-bug`) - -## Step 0:定位主仓库 - -```bash -MAIN_REPO=$(git worktree list | head -1 | awk '{print $1}') -PROJECT_NAME=$(basename "$MAIN_REPO") -``` - -在主仓库或任意 worktree 下执行均可,自动找到主仓库根目录。 - -## Step 1:同步远端 - -```bash -git fetch origin -``` - -确保基于最新的 `origin/main` 创建,避免从过时的 base 分叉。 - -## Step 2:启用 worktreeConfig - -```bash -git config extensions.worktreeConfig true -``` - -幂等操作,已启用不报错。启用后每个 worktree 可拥有独立的 `config.worktree` 配置。 - -## Step 3:创建 worktree - -目录名规则:分支名中的 `/` 替换为 `-`(如 `feat/eval` → `feat-eval`) - -路径规则:`~/worktrees/<项目名>--<目录名>`(如 `~/worktrees/leon--feat-eval`) - -```bash -git worktree add "$HOME/worktrees/$PROJECT_NAME--<目录名>" -b $ARGUMENTS origin/main -``` - -- worktree 存放在 `~/worktrees/`,与主仓库完全隔离 -- 确保 `~/worktrees/` 目录存在(`mkdir -p ~/worktrees`) - -## Step 4:端口分配 - -为 worktree 分配独立的 backend + frontend 端口对,避免多 worktree 同时开发时端口冲突。 - -端口 8001/5173 保留给 main,worktree 从 offset=1 开始。 - -分配逻辑(**必须严格按以下脚本执行,不要自行简化,不要用 `&&` 把 while 和 for 串成一条命令**): - -```bash -# 用 git worktree list 获取所有 worktree 路径,逐个读取已声明端口 -MAIN_REPO=$(git worktree list 
| head -1 | awk '{print $1}') -declared_ports="" -while read -r wt_path _rest; do - [ "$wt_path" = "$MAIN_REPO" ] && continue - bp=$(git -C "$wt_path" config --worktree --get worktree.ports.backend 2>/dev/null) || true - fp=$(git -C "$wt_path" config --worktree --get worktree.ports.frontend 2>/dev/null) || true - [ -n "$bp" ] && declared_ports="$declared_ports $bp" - [ -n "$fp" ] && declared_ports="$declared_ports $fp" - true # 确保循环退出码为 0,避免 && 链断裂 -done < <(git worktree list | tail -n +2) -echo "已声明端口: $declared_ports" - -# 从 offset=1 开始找第一组未冲突的端口对 -ASSIGNED_BACKEND="" -ASSIGNED_FRONTEND="" -for offset in $(seq 1 20); do - bp=$((8001 + offset)) - fp=$((5173 + offset)) - # 检查 1:是否已被其他 worktree 声明 - if echo "$declared_ports" | grep -qw "$bp" || echo "$declared_ports" | grep -qw "$fp"; then - echo "跳过 $bp/$fp(已声明)" - continue - fi - # 检查 2:系统层是否占用 - if lsof -i :"$bp" >/dev/null 2>&1 || lsof -i :"$fp" >/dev/null 2>&1; then - echo "跳过 $bp/$fp(端口占用)" - continue - fi - ASSIGNED_BACKEND=$bp - ASSIGNED_FRONTEND=$fp - echo "✅ 分配: backend=$bp frontend=$fp" - break -done - -# 验证:确保没有分配 main 的端口 -if [ "$ASSIGNED_BACKEND" = "8001" ] || [ "$ASSIGNED_FRONTEND" = "5173" ]; then - echo "❌ 错误:不能将 main 的端口 (8001/5173) 分配给 worktree" - exit 1 -fi - -# 验证:确保成功分配了端口 -if [ -z "$ASSIGNED_BACKEND" ] || [ -z "$ASSIGNED_FRONTEND" ]; then - echo "❌ 错误:未能找到可用端口对(已尝试 offset 1-20)" - exit 1 -fi -``` - -**重要**:执行此步骤后,必须在输出中看到: -- "已声明端口: ..." 
行 -- "✅ 分配: backend=xxxx frontend=xxxx" 行 -- 分配的端口必须 >= 8002/5174(不能是 8001/5173) - -## Step 5:写入 worktree config - -```bash -cd "$HOME/worktrees/$PROJECT_NAME--<目录名>" -git config --worktree worktree.ports.backend $ASSIGNED_BACKEND -git config --worktree worktree.ports.frontend $ASSIGNED_FRONTEND -git config --worktree leon.backend.port $ASSIGNED_BACKEND -git config --worktree leon.frontend.port $ASSIGNED_FRONTEND -git config --worktree worktree.description "" -git config --worktree worktree.created "$(date +%Y-%m-%d)" -git config --worktree worktree.project "$PROJECT_NAME" - -# 验证配置已正确写入 -echo "" -echo "✅ 配置已写入:" -git config --worktree --list | grep -E "(ports|leon\.(backend|frontend)\.port)" -``` - -description 由 AI 根据分支名和用户提供的上下文自动推断,简短描述这个分支的目的(中文,10-20 字)。 - -前后端代码会自动从 `git config --worktree` 读取端口,无需手动修改代码: -- `backend/web/main.py` → `_resolve_port()` 读取 `worktree.ports.backend` -- `frontend/app/vite.config.ts` → `getWorktreePort()` 读取 `worktree.ports.backend` 和 `worktree.ports.frontend` - -**重要**:执行此步骤后,必须在输出中看到配置验证信息,确认端口已正确写入。 - -## Step 6:初始化开发环境 - -在 worktree 目录下创建独立的 Python 和 Node 环境: - -```bash -cd "$HOME/worktrees/$PROJECT_NAME--<目录名>" - -# Python:创建 .venv 并安装依赖 -uv sync - -# Node:安装前端依赖 -cd frontend/app && npm install && cd ../.. -``` - -两个命令都必须成功,失败时停下来排查,不要跳过。 - -## Step 7:链接本地配置 - -`.claude/` 已纳入 Git 管理,worktree checkout 后自动包含。 -只需链接不在 Git 里的本地配置文件: - -```bash -cd "$HOME/worktrees/$PROJECT_NAME--<目录名>" -ln -s "$MAIN_REPO/CLAUDE.local.md" CLAUDE.local.md 2>/dev/null -``` - -## Step 8:确认结果 - -输出: -- worktree 路径 -- 分支名 -- 分配的端口(backend / frontend) -- 自动生成的描述 -- `CLAUDE.local.md` 符号链接状态 - -询问用户:是否在新 worktree 中打开新的 Claude 会话? 
- -如果是,用 osascript 打开新终端并启动 claude(**必须将路径替换为实际计算出的完整绝对路径,不得使用变量或占位符**): - -```bash -osascript -e 'tell app "Terminal" to do script "cd \"/Users/apple/worktrees/<项目名>--<目录名>\" && claude"' -``` - -关键:`cd` 和 `claude` 必须写在 osascript 的 `do script` 字符串内部,不是写在外层 Bash 命令里。 diff --git a/.claude/commands/wtrebaseall.md b/.claude/commands/wtrebaseall.md deleted file mode 100644 index 1743ef425..000000000 --- a/.claude/commands/wtrebaseall.md +++ /dev/null @@ -1,71 +0,0 @@ -# 批量 Rebase 所有 Worktree - -一个 PR 合并到 main 后,批量将所有 in-progress worktree rebase 到最新 `origin/main`。 - -## 使用时机 - -某个分支的 PR 合并后(尤其是 rebase and merge),其他 worktree 的 base 已过时,统一更新。 - -## Step 0:定位主仓库 - -```bash -MAIN_REPO=$(git worktree list | head -1 | awk '{print $1}') -``` - -在主仓库或任意 worktree 下执行均可。 - -## Step 1:同步远端 - -```bash -git fetch origin -``` - -## Step 2:遍历所有 worktree - -对每个 worktree 逐一处理(跳过主仓库本身),无论在 `~/worktrees/` 还是旧路径 `$MAIN_REPO/worktrees/`: - -```bash -git worktree list --porcelain -``` - -**跳过条件:** -- 当前 worktree 就是主仓库 -- 对应分支的 PR 已 merged/closed(标记建议用 `wtrm` 清理) - -**处理流程:** - -``` -DIRTY 检查 -├── 有未提交改动 → 跳过,标记为"需手动处理" -└── 干净 → git -C rebase origin/main - ├── 成功 → 标记 ✅ - └── 有冲突 → git -C rebase --abort(回滚) - 标记为"需手动处理",继续下一个 -``` - -冲突时自动 abort 而不是停下来等待,保证批量操作不会卡住。 - -## Step 3:汇总报告 - -``` -wtrebaseall 完成 -───────────────────────────────────── -✅ 成功 rebase: - - ~/worktrees/leon--feat-x (feat/x) +2 新 commit - - ~/worktrees/leon--fix-y (fix/y) 已是最新 - -⚠ 跳过(有未提交改动,需手动处理): - - ~/worktrees/leon--wip-z (wip/z) - -❌ 冲突(已 abort,需手动处理): - - worktrees/old-a (old/a) - 提示:cd worktrees/old-a && git rebase origin/main - -🗑 建议清理(PR 已关闭): - - worktrees/done-b (done/b) → PR #9 merged - 执行:/wtrm done/b -───────────────────────────────────── -成功 2 / 跳过 1 / 冲突 1 / 待清理 1 -``` - -报告中使用 `git worktree list` 返回的实际路径,兼容新旧两种位置。 diff --git a/.claude/commands/wtrm.md b/.claude/commands/wtrm.md deleted file mode 100644 index 07a8c6623..000000000 --- a/.claude/commands/wtrm.md +++ /dev/null @@ -1,79 +0,0 @@ -# 移除 Worktree 
- -清理并移除指定的 worktree。 - -## 参数 - -`$ARGUMENTS` = 分支名或目录名(如 `feat/eval`、`feat-eval`)。可省略,自动识别。 - -## Step 0:确定目标 - -优先级:命令参数 → 当前所在 worktree → 列出所有 worktree 询问用户 - -```bash -MAIN_REPO=$(git worktree list | head -1 | awk '{print $1}') -PROJECT_NAME=$(basename "$MAIN_REPO") -``` - -- 当前目录是某个 worktree → 默认操作当前 worktree,确认后执行 -- 当前目录是主仓库且无参数 → 列出所有 worktree,询问移除哪个 -- 提供了参数 → 匹配分支名或目录名 - -worktree 可能在两个位置(兼容新旧路径): -- 新路径:`~/worktrees/<项目名>--<目录名>` -- 旧路径:`$MAIN_REPO/worktrees/<目录名>` - -用 `git worktree list` 获取实际路径,按分支名匹配即可。 - -## Step 1:检查未提交改动 - -```bash -git -C status --short -``` - -有未提交改动 → 列出改动内容,询问用户:**继续移除(改动会丢失)?还是先处理?** - -## Step 2:清理 untracked 文件 - -先移除已知的 symlink(`CLAUDE.local.md` 由 `wtnew` 创建,不在 Git 里): - -```bash -TARGET="/CLAUDE.local.md" -[ -L "$TARGET" ] && rm "$TARGET" || echo "跳过:$TARGET 不是符号链接,不删除" -``` - -**必须用 `[ -L ]` 确认是 symlink 再删**,绝不对普通文件执行 `rm`,防止误删原始文件。 - -## Step 3:移除 worktree - -```bash -git worktree remove "" -``` - -如果仍然失败(`.venv`、`__pycache__` 等其他 untracked 文件残留): - -```bash -rm -rf "" -git worktree prune -``` - -移除后,`config.worktree` 随 `.git/worktrees//` 自动清除,无需额外处理。 - -## Step 4:询问是否删除本地分支 - -先 fetch 远程 main,确保合并判断基于最新状态: - -```bash -git fetch origin main -git branch -d <分支名> # 基于最新 origin/main 判断是否已合并 -``` - -如果 `-d` 报"未合并":用 `gh` 查该分支是否有已合并的 PR(squash/rebase merge 会改变 hash,`git branch -d` 检测不到): - -```bash -gh pr list --head <分支名> --state merged --json number,title --limit 1 -``` - -- 返回非空(有已合并 PR)→ 安全删除 `git branch -D <分支名>` -- 返回空(无已合并 PR)→ 告知用户分支确实未合并,确认后再 `-D` 强删 -- 不删除远程分支,除非用户明确要求 diff --git a/.claude/commands/wtsync.md b/.claude/commands/wtsync.md deleted file mode 100644 index 1fdd27894..000000000 --- a/.claude/commands/wtsync.md +++ /dev/null @@ -1,43 +0,0 @@ -# 同步 Worktree 本地配置 - -将主仓库的 `CLAUDE.local.md` 链接到当前 worktree。 - -> `.claude/` 已纳入 Git 管理,worktree checkout 后自动包含,无需手动处理。 - -## 使用场景 - -- worktree 中找不到 `CLAUDE.local.md`(本地配置不在 Git 里,不会随 checkout 复制) - -## Step 0:确定位置 - -```bash -MAIN_REPO=$(git worktree list | head -1 
| awk '{print $1}')
-CWD=$(pwd)
-```
-
-- `CWD == MAIN_REPO` → say "you are in the main repo; no sync needed" and exit
-- `CWD` is inside some worktree → continue (whether under `~/worktrees/` or the old path `$MAIN_REPO/worktrees/`)
-
-## Step 1: Link the local config
-
-```bash
-TARGET="CLAUDE.local.md"
-if [ -e "$TARGET" ] && [ ! -L "$TARGET" ]; then
-  echo "Error: $TARGET is a regular file; refusing to overwrite. Check it manually first."
-  exit 1
-fi
-ln -sf "$MAIN_REPO/CLAUDE.local.md" "$TARGET"
-```
-
-**If the target already exists and is not a symlink (i.e. a regular file), fail immediately**; never force-overwrite.
-
-## Step 2: Verify
-
-Confirm the symlink exists and points at the right target, then print:
-
-```
-✅ Synced:
-   CLAUDE.local.md → /path/to/main/CLAUDE.local.md
-```
-
-If the link already exists and is correct → print "already up to date; nothing to sync".
diff --git a/.claude/rules/codestyle.md b/.claude/rules/codestyle.md
deleted file mode 100644
index 3ef21e8df..000000000
--- a/.claude/rules/codestyle.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Code Style
-
-**Forbidden**: monkeypatching, nested functions, singletons/global variables, metaclasses/dynamic imports, eval/exec
-
-**Required**: Pydantic strong typing on inputs and outputs; core logic must log; distinguish expected exceptions from system errors and never swallow exceptions
diff --git a/.claude/rules/conventions.md b/.claude/rules/conventions.md
deleted file mode 100644
index acfb8ffc5..000000000
--- a/.claude/rules/conventions.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Code Conventions
-
-## Naming
-
-- **Tool parameters**: PascalCase (FilePath, SearchPath)
-- **Paths**: absolute paths only, restricted to workspace_root
-- **Hook priority**: 1-10; higher numbers win
diff --git a/.claude/rules/development.md b/.claude/rules/development.md
deleted file mode 100644
index d89784ca3..000000000
--- a/.claude/rules/development.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Development Rules
-
-## Local Testing
-
-Test locally after changing code:
-
-```bash
-uv cache clean leonai --force && uv tool install .
--force
-```
-
-- `--force` is mandatory; otherwise cache or process locks can leave an old version installed
-
-## Releasing
-
-- Push a tag → GitHub Actions publishes to PyPI automatically
-- ❌ Never use `uv publish`
diff --git a/.claude/rules/git.md b/.claude/rules/git.md
deleted file mode 100644
index e123a95c0..000000000
--- a/.claude/rules/git.md
+++ /dev/null
@@ -1,10 +0,0 @@
-# Git Rules
-
-- Conventional Commits; small commits with a single responsibility
-- PR title/body must be in English; clean up screenshots and other untracked files before pushing
-- ❌ Pushing requires the user's authorization; Claude must never push on its own
-- Push rejected → `git rebase origin/<branch>`; ❌ never `reset --hard` / `pull --rebase`
-
-## Worktree
-
-All worktrees live under `~/worktrees/<project>--<dir>`; the port pair goes into `git config --worktree`; clean up with `git worktree remove` when done.
diff --git a/.claude/rules/interactive.md b/.claude/rules/interactive.md
deleted file mode 100644
index 9a977c221..000000000
--- a/.claude/rules/interactive.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Interaction Rules
-
-## Phase Constraints
-- Analysis phase: distill the problem only; no solutions, no code
-- Design phase: output architecture/flows/interface definitions; no implementation
-- Implementation phase: follow the design strictly; no deviation
-- Retrospective phase: summarize gains and losses only; no new requirements
-
-When opinions diverge: lay out the options and their trade-offs; never decide unilaterally.
diff --git a/.claude/rules/team-guidelines.md b/.claude/rules/team-guidelines.md
deleted file mode 100644
index 871e85578..000000000
--- a/.claude/rules/team-guidelines.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Agent Team Rules
-
-- The PM only coordinates and never writes code; it must spawn team members to execute
-- **Done = it runs + tests pass**; infrastructure work must also verify that ports are reachable
-- Build infrastructure first, then business code
-- Every member reads and understands the existing code before acting; the core principle is "wrap a layer" to reuse `sandbox/` and `middleware/`; ❌ no reinventing
-- Review each module as soon as it is finished; do not pile reviews up at the end
-
-**Environment**: Docker is installed; run Redis/RabbitMQ in containers; `proxy_on` enables the proxy; Supabase MCP is available (`~/.mcp.json`)
diff --git a/.claude/settings.json b/.claude/settings.json
deleted file mode 100644
index 0cbf4db50..000000000
--- a/.claude/settings.json
+++ /dev/null
@@ -1,5 +0,0 @@
-{
-  "enabledPlugins": {
-    "typescript-lsp@claude-plugins-official": true
-  }
-}
diff --git a/.claude/skills/architecture-patterns/SKILL.md b/.claude/skills/architecture-patterns/SKILL.md
deleted file mode 100644
index ba0a35944..000000000
--- a/.claude/skills/architecture-patterns/SKILL.md
+++ /dev/null
@@ -1,494 +0,0 @@
----
-name: architecture-patterns
-description: Implement proven backend architecture patterns including Clean
Architecture, Hexagonal Architecture, and Domain-Driven Design. Use when architecting complex backend systems or refactoring existing applications for better maintainability. ---- - -# Architecture Patterns - -Master proven backend architecture patterns including Clean Architecture, Hexagonal Architecture, and Domain-Driven Design to build maintainable, testable, and scalable systems. - -## When to Use This Skill - -- Designing new backend systems from scratch -- Refactoring monolithic applications for better maintainability -- Establishing architecture standards for your team -- Migrating from tightly coupled to loosely coupled architectures -- Implementing domain-driven design principles -- Creating testable and mockable codebases -- Planning microservices decomposition - -## Core Concepts - -### 1. Clean Architecture (Uncle Bob) - -**Layers (dependency flows inward):** - -- **Entities**: Core business models -- **Use Cases**: Application business rules -- **Interface Adapters**: Controllers, presenters, gateways -- **Frameworks & Drivers**: UI, database, external services - -**Key Principles:** - -- Dependencies point inward -- Inner layers know nothing about outer layers -- Business logic independent of frameworks -- Testable without UI, database, or external services - -### 2. Hexagonal Architecture (Ports and Adapters) - -**Components:** - -- **Domain Core**: Business logic -- **Ports**: Interfaces defining interactions -- **Adapters**: Implementations of ports (database, REST, message queue) - -**Benefits:** - -- Swap implementations easily (mock for testing) -- Technology-agnostic core -- Clear separation of concerns - -### 3. 
Domain-Driven Design (DDD) - -**Strategic Patterns:** - -- **Bounded Contexts**: Separate models for different domains -- **Context Mapping**: How contexts relate -- **Ubiquitous Language**: Shared terminology - -**Tactical Patterns:** - -- **Entities**: Objects with identity -- **Value Objects**: Immutable objects defined by attributes -- **Aggregates**: Consistency boundaries -- **Repositories**: Data access abstraction -- **Domain Events**: Things that happened - -## Clean Architecture Pattern - -### Directory Structure - -``` -app/ -├── domain/ # Entities & business rules -│ ├── entities/ -│ │ ├── user.py -│ │ └── order.py -│ ├── value_objects/ -│ │ ├── email.py -│ │ └── money.py -│ └── interfaces/ # Abstract interfaces -│ ├── user_repository.py -│ └── payment_gateway.py -├── use_cases/ # Application business rules -│ ├── create_user.py -│ ├── process_order.py -│ └── send_notification.py -├── adapters/ # Interface implementations -│ ├── repositories/ -│ │ ├── postgres_user_repository.py -│ │ └── redis_cache_repository.py -│ ├── controllers/ -│ │ └── user_controller.py -│ └── gateways/ -│ ├── stripe_payment_gateway.py -│ └── sendgrid_email_gateway.py -└── infrastructure/ # Framework & external concerns - ├── database.py - ├── config.py - └── logging.py -``` - -### Implementation Example - -```python -# domain/entities/user.py -from dataclasses import dataclass -from datetime import datetime -from typing import Optional - -@dataclass -class User: - """Core user entity - no framework dependencies.""" - id: str - email: str - name: str - created_at: datetime - is_active: bool = True - - def deactivate(self): - """Business rule: deactivating user.""" - self.is_active = False - - def can_place_order(self) -> bool: - """Business rule: active users can order.""" - return self.is_active - -# domain/interfaces/user_repository.py -from abc import ABC, abstractmethod -from typing import Optional, List -from domain.entities.user import User - -class IUserRepository(ABC): - 
"""Port: defines contract, no implementation.""" - - @abstractmethod - async def find_by_id(self, user_id: str) -> Optional[User]: - pass - - @abstractmethod - async def find_by_email(self, email: str) -> Optional[User]: - pass - - @abstractmethod - async def save(self, user: User) -> User: - pass - - @abstractmethod - async def delete(self, user_id: str) -> bool: - pass - -# use_cases/create_user.py -from domain.entities.user import User -from domain.interfaces.user_repository import IUserRepository -from dataclasses import dataclass -from datetime import datetime -import uuid - -@dataclass -class CreateUserRequest: - email: str - name: str - -@dataclass -class CreateUserResponse: - user: User - success: bool - error: Optional[str] = None - -class CreateUserUseCase: - """Use case: orchestrates business logic.""" - - def __init__(self, user_repository: IUserRepository): - self.user_repository = user_repository - - async def execute(self, request: CreateUserRequest) -> CreateUserResponse: - # Business validation - existing = await self.user_repository.find_by_email(request.email) - if existing: - return CreateUserResponse( - user=None, - success=False, - error="Email already exists" - ) - - # Create entity - user = User( - id=str(uuid.uuid4()), - email=request.email, - name=request.name, - created_at=datetime.now(), - is_active=True - ) - - # Persist - saved_user = await self.user_repository.save(user) - - return CreateUserResponse( - user=saved_user, - success=True - ) - -# adapters/repositories/postgres_user_repository.py -from domain.interfaces.user_repository import IUserRepository -from domain.entities.user import User -from typing import Optional -import asyncpg - -class PostgresUserRepository(IUserRepository): - """Adapter: PostgreSQL implementation.""" - - def __init__(self, pool: asyncpg.Pool): - self.pool = pool - - async def find_by_id(self, user_id: str) -> Optional[User]: - async with self.pool.acquire() as conn: - row = await conn.fetchrow( - "SELECT * 
FROM users WHERE id = $1", user_id - ) - return self._to_entity(row) if row else None - - async def find_by_email(self, email: str) -> Optional[User]: - async with self.pool.acquire() as conn: - row = await conn.fetchrow( - "SELECT * FROM users WHERE email = $1", email - ) - return self._to_entity(row) if row else None - - async def save(self, user: User) -> User: - async with self.pool.acquire() as conn: - await conn.execute( - """ - INSERT INTO users (id, email, name, created_at, is_active) - VALUES ($1, $2, $3, $4, $5) - ON CONFLICT (id) DO UPDATE - SET email = $2, name = $3, is_active = $5 - """, - user.id, user.email, user.name, user.created_at, user.is_active - ) - return user - - async def delete(self, user_id: str) -> bool: - async with self.pool.acquire() as conn: - result = await conn.execute( - "DELETE FROM users WHERE id = $1", user_id - ) - return result == "DELETE 1" - - def _to_entity(self, row) -> User: - """Map database row to entity.""" - return User( - id=row["id"], - email=row["email"], - name=row["name"], - created_at=row["created_at"], - is_active=row["is_active"] - ) - -# adapters/controllers/user_controller.py -from fastapi import APIRouter, Depends, HTTPException -from use_cases.create_user import CreateUserUseCase, CreateUserRequest -from pydantic import BaseModel - -router = APIRouter() - -class CreateUserDTO(BaseModel): - email: str - name: str - -@router.post("/users") -async def create_user( - dto: CreateUserDTO, - use_case: CreateUserUseCase = Depends(get_create_user_use_case) -): - """Controller: handles HTTP concerns only.""" - request = CreateUserRequest(email=dto.email, name=dto.name) - response = await use_case.execute(request) - - if not response.success: - raise HTTPException(status_code=400, detail=response.error) - - return {"user": response.user} -``` - -## Hexagonal Architecture Pattern - -```python -# Core domain (hexagon center) -class OrderService: - """Domain service - no infrastructure dependencies.""" - - def 
__init__( - self, - order_repository: OrderRepositoryPort, - payment_gateway: PaymentGatewayPort, - notification_service: NotificationPort - ): - self.orders = order_repository - self.payments = payment_gateway - self.notifications = notification_service - - async def place_order(self, order: Order) -> OrderResult: - # Business logic - if not order.is_valid(): - return OrderResult(success=False, error="Invalid order") - - # Use ports (interfaces) - payment = await self.payments.charge( - amount=order.total, - customer=order.customer_id - ) - - if not payment.success: - return OrderResult(success=False, error="Payment failed") - - order.mark_as_paid() - saved_order = await self.orders.save(order) - - await self.notifications.send( - to=order.customer_email, - subject="Order confirmed", - body=f"Order {order.id} confirmed" - ) - - return OrderResult(success=True, order=saved_order) - -# Ports (interfaces) -class OrderRepositoryPort(ABC): - @abstractmethod - async def save(self, order: Order) -> Order: - pass - -class PaymentGatewayPort(ABC): - @abstractmethod - async def charge(self, amount: Money, customer: str) -> PaymentResult: - pass - -class NotificationPort(ABC): - @abstractmethod - async def send(self, to: str, subject: str, body: str): - pass - -# Adapters (implementations) -class StripePaymentAdapter(PaymentGatewayPort): - """Primary adapter: connects to Stripe API.""" - - def __init__(self, api_key: str): - self.stripe = stripe - self.stripe.api_key = api_key - - async def charge(self, amount: Money, customer: str) -> PaymentResult: - try: - charge = self.stripe.Charge.create( - amount=amount.cents, - currency=amount.currency, - customer=customer - ) - return PaymentResult(success=True, transaction_id=charge.id) - except stripe.error.CardError as e: - return PaymentResult(success=False, error=str(e)) - -class MockPaymentAdapter(PaymentGatewayPort): - """Test adapter: no external dependencies.""" - - async def charge(self, amount: Money, customer: str) -> 
PaymentResult: - return PaymentResult(success=True, transaction_id="mock-123") -``` - -## Domain-Driven Design Pattern - -```python -# Value Objects (immutable) -from dataclasses import dataclass -from typing import Optional - -@dataclass(frozen=True) -class Email: - """Value object: validated email.""" - value: str - - def __post_init__(self): - if "@" not in self.value: - raise ValueError("Invalid email") - -@dataclass(frozen=True) -class Money: - """Value object: amount with currency.""" - amount: int # cents - currency: str - - def add(self, other: "Money") -> "Money": - if self.currency != other.currency: - raise ValueError("Currency mismatch") - return Money(self.amount + other.amount, self.currency) - -# Entities (with identity) -class Order: - """Entity: has identity, mutable state.""" - - def __init__(self, id: str, customer: Customer): - self.id = id - self.customer = customer - self.items: List[OrderItem] = [] - self.status = OrderStatus.PENDING - self._events: List[DomainEvent] = [] - - def add_item(self, product: Product, quantity: int): - """Business logic in entity.""" - item = OrderItem(product, quantity) - self.items.append(item) - self._events.append(ItemAddedEvent(self.id, item)) - - def total(self) -> Money: - """Calculated property.""" - return sum(item.subtotal() for item in self.items) - - def submit(self): - """State transition with business rules.""" - if not self.items: - raise ValueError("Cannot submit empty order") - if self.status != OrderStatus.PENDING: - raise ValueError("Order already submitted") - - self.status = OrderStatus.SUBMITTED - self._events.append(OrderSubmittedEvent(self.id)) - -# Aggregates (consistency boundary) -class Customer: - """Aggregate root: controls access to entities.""" - - def __init__(self, id: str, email: Email): - self.id = id - self.email = email - self._addresses: List[Address] = [] - self._orders: List[str] = [] # Order IDs, not full objects - - def add_address(self, address: Address): - """Aggregate 
enforces invariants.""" - if len(self._addresses) >= 5: - raise ValueError("Maximum 5 addresses allowed") - self._addresses.append(address) - - @property - def primary_address(self) -> Optional[Address]: - return next((a for a in self._addresses if a.is_primary), None) - -# Domain Events -@dataclass -class OrderSubmittedEvent: - order_id: str - occurred_at: datetime = field(default_factory=datetime.now) - -# Repository (aggregate persistence) -class OrderRepository: - """Repository: persist/retrieve aggregates.""" - - async def find_by_id(self, order_id: str) -> Optional[Order]: - """Reconstitute aggregate from storage.""" - pass - - async def save(self, order: Order): - """Persist aggregate and publish events.""" - await self._persist(order) - await self._publish_events(order._events) - order._events.clear() -``` - -## Resources - -- **references/clean-architecture-guide.md**: Detailed layer breakdown -- **references/hexagonal-architecture-guide.md**: Ports and adapters patterns -- **references/ddd-tactical-patterns.md**: Entities, value objects, aggregates -- **assets/clean-architecture-template/**: Complete project structure -- **assets/ddd-examples/**: Domain modeling examples - -## Best Practices - -1. **Dependency Rule**: Dependencies always point inward -2. **Interface Segregation**: Small, focused interfaces -3. **Business Logic in Domain**: Keep frameworks out of core -4. **Test Independence**: Core testable without infrastructure -5. **Bounded Contexts**: Clear domain boundaries -6. **Ubiquitous Language**: Consistent terminology -7. **Thin Controllers**: Delegate to use cases -8. 
**Rich Domain Models**: Behavior with data - -## Common Pitfalls - -- **Anemic Domain**: Entities with only data, no behavior -- **Framework Coupling**: Business logic depends on frameworks -- **Fat Controllers**: Business logic in controllers -- **Repository Leakage**: Exposing ORM objects -- **Missing Abstractions**: Concrete dependencies in core -- **Over-Engineering**: Clean architecture for simple CRUD diff --git a/.claude/skills/bench/SKILL.md b/.claude/skills/bench/SKILL.md deleted file mode 100644 index 0fbb023e3..000000000 --- a/.claude/skills/bench/SKILL.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -name: bench -description: 测试 API 模型性能 - TTFB 和吐字速度 ---- - -快速测试第三方 API 的模型性能。两个脚本,按 API 格式选择。 - -## 用法 - -用户提供:API Key + Base URL - -**默认 → OpenAI 格式**(`/v1/chat/completions`): -```bash -cd .claude/skills/bench && uv run python test_models.py "" "" -``` - -**用户说"Anthropic 格式"→ Anthropic 原生**(`/v1/messages`): -```bash -cd .claude/skills/bench && uv run python test_anthropic.py "" "" -``` - -## 判断规则 - -- 默认走 `test_models.py`(OpenAI 格式) -- 仅当用户明确说"Anthropic 格式/原生 API"时,走 `test_anthropic.py` -- 不要自动猜测,由用户指定 - -## 测试指标 - -- **TTFB**: 首次响应延迟(10秒超时) -- **Tok/s**: 吐字速度 -- 总超时:30秒 - -## OpenAI 脚本特性(test_models.py) - -- 自动从 `/models` 端点获取可用模型并智能筛选 -- 筛选最新主流文本模型(Claude 4.x, GPT 5.x, Gemini 3.x, Qwen 3.x, GLM 4.7+, Kimi k2.5+) -- 排除 DeepSeek、多模态模型、过时版本 -- Base URL 必须带 `/v1` 后缀 - -## Anthropic 脚本特性(test_anthropic.py) - -- 使用 `x-api-key` + `anthropic-version` 认证 -- 默认测试:opus-4-6, sonnet-4-6, sonnet-4-5, haiku-4-5 -- Base URL 带不带 `/v1` 都行(自动处理) - -## 常见问题 - -- **依赖**:必须用 `uv run`,不能直接 `python` -- **OpenAI Base URL**:必须带 `/v1`,否则 `/models` 返回 HTML -- **Anthropic `/models` 被拦截**:正常,Cloudflare 保护,脚本用默认模型列表 diff --git a/.claude/skills/bench/test_anthropic.py b/.claude/skills/bench/test_anthropic.py deleted file mode 100644 index ec68e07cd..000000000 --- a/.claude/skills/bench/test_anthropic.py +++ /dev/null @@ -1,156 +0,0 @@ -#!/usr/bin/env python3 -""" -Test Anthropic native API 
(/v1/messages) performance: TTFB and token speed.
-"""
-
-import asyncio
-import json
-import sys
-import time
-
-import aiohttp
-
-if len(sys.argv) < 3:
-    print("Usage: python test_anthropic.py <api_key> <base_url>")
-    print("Example: python test_anthropic.py sk-ant-xxx https://api.example.com/claude/droid")
-    sys.exit(1)
-
-API_KEY = sys.argv[1]
-BASE_URL = sys.argv[2].rstrip("/")
-
-# Strip /v1 suffix if present — we'll add it ourselves
-if BASE_URL.endswith("/v1"):
-    BASE_URL = BASE_URL[:-3]
-
-TEST_PROMPT = "请写一篇300字的短文,主题是春天的早晨。"
-
-DEFAULT_MODELS = [
-    "claude-opus-4-6",
-    "claude-sonnet-4-6",
-    "claude-sonnet-4-5-20250929",
-    "claude-haiku-4-5-20251001",
-]
-
-
-async def test_model(session, model):
-    """Test a single model with Anthropic streaming."""
-    start_time = time.time()
-    first_token_time = None
-    chunks_received = 0
-    content_length = 0
-    output_tokens = 0
-
-    try:
-        async with session.post(
-            f"{BASE_URL}/v1/messages",
-            json={
-                "model": model,
-                "messages": [{"role": "user", "content": TEST_PROMPT}],
-                "max_tokens": 4096,
-                "stream": True,
-            },
-            timeout=aiohttp.ClientTimeout(total=90),
-        ) as resp:
-            if resp.status != 200:
-                text = ""
-                try:
-                    text = await resp.text()
-                    # Try to parse error JSON
-                    err = json.loads(text).get("error", {}).get("message", text[:60])
-                except Exception:
-                    err = text[:60] or f"HTTP {resp.status}"
-                return {"model": model, "status": "✗", "error": err[:60]}
-
-            async for raw_line in resp.content:
-                if first_token_time is None and (time.time() - start_time) > 10:
-                    return {"model": model, "status": "✗", "error": "TTFB timeout (>10s)"}
-                if (time.time() - start_time) > 90:
-                    return {"model": model, "status": "✗", "error": "Total timeout (>90s)"}
-
-                line = raw_line.decode("utf-8").strip()
-                if not line.startswith("data: "):
-                    continue
-
-                data_str = line[6:]
-                try:
-                    data = json.loads(data_str)
-                except json.JSONDecodeError:
-                    continue
-
-                event_type = data.get("type", "")
-
-                if event_type == "content_block_delta":
-                    text = data.get("delta", {}).get("text", "")
-                    if text:
-                        if first_token_time is None:
-                            first_token_time = time.time()
-                        chunks_received += 1
-                        content_length += len(text)
-
-                elif event_type == "message_delta":
-                    usage = data.get("usage", {})
-                    output_tokens = usage.get("output_tokens", 0)
-
-                elif event_type == "message_stop":
-                    break
-
-        total_time = time.time() - start_time
-        ttfb = first_token_time - start_time if first_token_time else None
-
-        if ttfb is None:
-            return {"model": model, "status": "✗", "error": "No tokens received"}
-
-        gen_time = total_time - ttfb
-        tok_s = output_tokens / gen_time if gen_time > 0 else 0
-
-        return {
-            "model": model, "status": "✓",
-            "ttfb": ttfb, "total_time": total_time,
-            "tokens": output_tokens, "chars": content_length,
-            "tokens_per_sec": tok_s,
-        }
-
-    except TimeoutError:
-        return {"model": model, "status": "✗", "error": "Timeout (>90s)"}
-    except Exception as e:
-        return {"model": model, "status": "✗", "error": str(e)[:60]}
-
-
-async def main():
-    headers = {
-        "x-api-key": API_KEY,
-        "anthropic-version": "2023-06-01",
-        "Content-Type": "application/json",
-    }
-
-    print(f"Endpoint: {BASE_URL}/v1/messages")
-    print(f"Testing {len(DEFAULT_MODELS)} models with streaming...")
-    print(f"Task: {TEST_PROMPT}\n")
-
-    async with aiohttp.ClientSession(headers=headers) as session:
-        tasks = [test_model(session, m) for m in DEFAULT_MODELS]
-        results = await asyncio.gather(*tasks)
-
-    ok = [r for r in results if r["status"] in ("✓", "⚠")]
-    fail = [r for r in results if r["status"] == "✗"]
-    ok.sort(key=lambda x: (x.get("ttfb", 999), -x.get("tokens_per_sec", 0)))
-
-    print("=" * 90)
-    print(f"{'Model':<40} {'TTFB':<12} {'Tok/s':<12} {'Status'}")
-    print("=" * 90)
-
-    for r in ok:
-        print(f"{r['model']:<40} {r['ttfb']:.2f}s {r['tokens_per_sec']:>6.1f} {r['status']}")
-    for r in fail:
-        print(f"{r['model']:<40} {'--':<12} {'--':<12} ✗ {r.get('error', '')[:40]}")
-
-    if ok:
-        best_ttfb = min(ok, key=lambda x: x["ttfb"])
-        best_tok = max(ok, key=lambda x: x["tokens_per_sec"])
-        print("=" *
90)
-        print(f"🏆 最快首次响应: {best_ttfb['model']} ({best_ttfb['ttfb']:.2f}s)")
-        print(f"⚡ 最快吐字速度: {best_tok['model']} ({best_tok['tokens_per_sec']:.1f} tok/s)")
-        print(f"✓ {len(ok)}/{len(results)} 模型可用")
-
-
-if __name__ == "__main__":
-    asyncio.run(main())
diff --git a/.claude/skills/bench/test_models.py b/.claude/skills/bench/test_models.py
deleted file mode 100644
index 504870fbb..000000000
--- a/.claude/skills/bench/test_models.py
+++ /dev/null
@@ -1,281 +0,0 @@
-#!/usr/bin/env python3
-"""
-Test model performance: TTFB and token generation speed.
-"""
-
-import asyncio
-import json
-import sys
-import time
-
-import aiohttp
-
-# Get parameters from command line
-if len(sys.argv) < 3:
-    print("Error: API_KEY and BASE_URL are required")
-    print("Usage: python test_models.py <api_key> <base_url>")
-    print("Example: python test_models.py sk-xxx https://api.example.com/v1")
-    sys.exit(1)
-
-API_KEY = sys.argv[1]
-BASE_URL = sys.argv[2]
-
-TEST_PROMPT = "请写一篇300字的短文,主题是春天的早晨。"
-
-
-async def fetch_models(session):
-    """Fetch available models from /models endpoint."""
-    try:
-        async with session.get(f"{BASE_URL}/models", timeout=aiohttp.ClientTimeout(total=10)) as resp:
-            if resp.status == 200:
-                data = await resp.json()
-                models = [m["id"] for m in data.get("data", [])]
-                return models
-            else:
-                print(f"Warning: Failed to fetch models (HTTP {resp.status}), using defaults")
-                return None
-    except Exception as e:
-        print(f"Warning: Failed to fetch models ({e}), using defaults")
-        return None
-
-
-# Default models if /models endpoint fails
-DEFAULT_MODELS = [
-    "claude-opus-4-6",
-    "claude-sonnet-4-5-20250929",
-    "claude-haiku-4-5-20251001",
-    "gpt-5.2-2025-12-11",
-    "gpt-5.1-2025-11-13",
-    "qwen3-max-2026-01-23",
-    "glm-4.7",
-    "kimi-k2.5",
-]
-
-
-async def test_model_streaming(session, model):
-    """Test a model with streaming to measure TTFB and token speed."""
-    start_time = time.time()
-    first_token_time = None
-    tokens_received = 0
-    content_length = 0
-
-    try:
-        async with session.post(
f"{BASE_URL}/chat/completions", - json={ - "model": model, - "messages": [{"role": "user", "content": TEST_PROMPT}], - "stream": True, - }, - timeout=aiohttp.ClientTimeout(total=30), - ) as resp: - if resp.status != 200: - try: - text = await resp.text() - return {"model": model, "status": "✗", "error": text[:50]} - except Exception: - return {"model": model, "status": "✗", "error": f"HTTP {resp.status}"} - - try: - async for line in resp.content: - # Check TTFB timeout (10s) - if first_token_time is None and (time.time() - start_time) > 10: - return {"model": model, "status": "✗", "error": "TTFB timeout (>10s)"} - - # Check total timeout (30s) - if (time.time() - start_time) > 30: - return {"model": model, "status": "✗", "error": "Total timeout (>30s)"} - - if not line: - continue - - line = line.decode("utf-8").strip() - if not line.startswith("data: "): - continue - - data_str = line[6:] - if data_str == "[DONE]": - break - - try: - import json - - data = json.loads(data_str) - delta = data.get("choices", [{}])[0].get("delta", {}) - content = delta.get("content", "") - - if content: - if first_token_time is None: - first_token_time = time.time() - tokens_received += 1 - content_length += len(content) - - except json.JSONDecodeError: - continue - except Exception: - continue - - except Exception as e: - if tokens_received > 0: - # Partial success - total_time = time.time() - start_time - ttfb = first_token_time - start_time if first_token_time else None - generation_time = total_time - ttfb if ttfb else total_time - tokens_per_sec = tokens_received / generation_time if generation_time > 0 else 0 - - return { - "model": model, - "status": "⚠", - "ttfb": ttfb, - "total_time": total_time, - "tokens": tokens_received, - "chars": content_length, - "tokens_per_sec": tokens_per_sec, - "error": f"Partial: {str(e)[:30]}", - } - else: - return {"model": model, "status": "✗", "error": str(e)[:50]} - - total_time = time.time() - start_time - ttfb = first_token_time - 
start_time if first_token_time else None - - if ttfb is None: - return {"model": model, "status": "✗", "error": "No tokens received"} - - generation_time = total_time - ttfb - tokens_per_sec = tokens_received / generation_time if generation_time > 0 else 0 - - return { - "model": model, - "status": "✓", - "ttfb": ttfb, - "total_time": total_time, - "tokens": tokens_received, - "chars": content_length, - "tokens_per_sec": tokens_per_sec, - } - - except TimeoutError: - return {"model": model, "status": "✗", "error": "Timeout (>30s)"} - except Exception as e: - return {"model": model, "status": "✗", "error": str(e)[:50]} - - -async def main(): - headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"} - - async with aiohttp.ClientSession(headers=headers) as session: - # Fetch available models - print("Fetching available models...") - all_models = await fetch_models(session) - - if all_models: - # Filter for latest mainstream text models only - test_models = [] - - # Claude: 4.6, 4.5, 4-5 (排除 3.x) - claude_models = [ - m - for m in all_models - if "claude" in m.lower() - and any(v in m for v in ["4.6", "4.5", "4-6", "4-5"]) - and not any(skip in m.lower() for skip in ["embed", "vision", "3."]) - ] - test_models.extend(claude_models) - - # GPT: 5.x (排除 4.x, o1, o3) - gpt_models = [ - m - for m in all_models - if "gpt" in m.lower() - and ("5." 
in m or "gpt-5" in m.lower()) - and not any(skip in m.lower() for skip in ["embed", "audio", "realtime", "vision", "4.", "4o", "4.1"]) - ] - test_models.extend(gpt_models) - - # Gemini: 3.x (排除 2.x) - gemini_models = [ - m - for m in all_models - if "gemini" in m.lower() - and ("3" in m or "gemini-3" in m.lower()) - and not any(skip in m.lower() for skip in ["embed", "vision", "lite", "image", "2."]) - ] - test_models.extend(gemini_models) - - # Qwen: 3.x (排除 2.x) - qwen_models = [ - m - for m in all_models - if "qwen" in m.lower() - and ("qwen3" in m.lower() or "qwen-3" in m.lower()) - and not any(skip in m.lower() for skip in ["embed", "vl", "vision", "coder", "math", "2."]) - ] - test_models.extend(qwen_models) - - # GLM: 4.7+ (排除 4.6 及以下) - glm_models = [ - m - for m in all_models - if "glm" in m.lower() - and ("4.7" in m or "glm-4.7" in m.lower()) - and not any(skip in m.lower() for skip in ["embed", "vision", "4.6", "4.5"]) - ] - test_models.extend(glm_models) - - # Kimi: k2.5+ (排除 k2 及以下) - kimi_models = [ - m - for m in all_models - if "kimi" in m.lower() - and ("k2.5" in m.lower() or "k3" in m.lower()) - and not any(skip in m.lower() for skip in ["embed", "vision"]) - ] - test_models.extend(kimi_models) - - # 排除 DeepSeek (按要求不测) - - # 去重 - test_models = list(dict.fromkeys(test_models)) - - if not test_models: - print("No suitable latest models found, using defaults") - test_models = DEFAULT_MODELS - else: - test_models = DEFAULT_MODELS - - print(f"Testing {len(test_models)} models with streaming...") - print(f"Task: {TEST_PROMPT}\n") - - tasks = [test_model_streaming(session, model) for model in test_models] - results = await asyncio.gather(*tasks) - - # Separate successful and failed results - successful = [r for r in results if r["status"] in ["✓", "⚠"]] - failed = [r for r in results if r["status"] == "✗"] - - # Sort by TTFB first, then by token speed - successful.sort(key=lambda x: (x.get("ttfb", 999), -x.get("tokens_per_sec", 0))) - - print("=" * 90) 
- print(f"{'Model':<40} {'TTFB':<12} {'Tok/s':<12} {'Status'}") - print("=" * 90) - - for r in successful: - print(f"{r['model']:<40} {r['ttfb']:.2f}s {r['tokens_per_sec']:>6.1f} {r['status']}") - - for r in failed: - print(f"{r['model']:<40} {'--':<12} {'--':<12} ✗ {r.get('error', '')[:20]}") - - # Summary - if successful: - fastest_ttfb = min(successful, key=lambda x: x["ttfb"]) - fastest_tokens = max(successful, key=lambda x: x["tokens_per_sec"]) - - print("=" * 90) - print(f"🏆 最快首次响应: {fastest_ttfb['model']} ({fastest_ttfb['ttfb']:.2f}s)") - print(f"⚡ 最快吐字速度: {fastest_tokens['model']} ({fastest_tokens['tokens_per_sec']:.1f} tok/s)") - print(f"✓ {len(successful)}/{len(results)} 模型可用") - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/.claude/skills/event-store-design/SKILL.md b/.claude/skills/event-store-design/SKILL.md deleted file mode 100644 index 6c8485bca..000000000 --- a/.claude/skills/event-store-design/SKILL.md +++ /dev/null @@ -1,437 +0,0 @@ ---- -name: event-store-design -description: Design and implement event stores for event-sourced systems. Use when building event sourcing infrastructure, choosing event store technologies, or implementing event persistence patterns. ---- - -# Event Store Design - -Comprehensive guide to designing event stores for event-sourced applications. - -## When to Use This Skill - -- Designing event sourcing infrastructure -- Choosing between event store technologies -- Implementing custom event stores -- Optimizing event storage and retrieval -- Setting up event store schemas -- Planning for event store scaling - -## Core Concepts - -### 1. 
Event Store Architecture - -``` -┌─────────────────────────────────────────────────────┐ -│ Event Store │ -├─────────────────────────────────────────────────────┤ -│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ -│ │ Stream 1 │ │ Stream 2 │ │ Stream 3 │ │ -│ │ (Aggregate) │ │ (Aggregate) │ │ (Aggregate) │ │ -│ ├─────────────┤ ├─────────────┤ ├─────────────┤ │ -│ │ Event 1 │ │ Event 1 │ │ Event 1 │ │ -│ │ Event 2 │ │ Event 2 │ │ Event 2 │ │ -│ │ Event 3 │ │ ... │ │ Event 3 │ │ -│ │ ... │ │ │ │ Event 4 │ │ -│ └─────────────┘ └─────────────┘ └─────────────┘ │ -├─────────────────────────────────────────────────────┤ -│ Global Position: 1 → 2 → 3 → 4 → 5 → 6 → ... │ -└─────────────────────────────────────────────────────┘ -``` - -### 2. Event Store Requirements - -| Requirement | Description | -| ----------------- | ---------------------------------- | -| **Append-only** | Events are immutable, only appends | -| **Ordered** | Per-stream and global ordering | -| **Versioned** | Optimistic concurrency control | -| **Subscriptions** | Real-time event notifications | -| **Idempotent** | Handle duplicate writes safely | - -## Technology Comparison - -| Technology | Best For | Limitations | -| ---------------- | ------------------------- | -------------------------------- | -| **EventStoreDB** | Pure event sourcing | Single-purpose | -| **PostgreSQL** | Existing Postgres stack | Manual implementation | -| **Kafka** | High-throughput streaming | Not ideal for per-stream queries | -| **DynamoDB** | Serverless, AWS-native | Query limitations | -| **Marten** | .NET ecosystems | .NET specific | - -## Templates - -### Template 1: PostgreSQL Event Store Schema - -```sql --- Events table -CREATE TABLE events ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - stream_id VARCHAR(255) NOT NULL, - stream_type VARCHAR(255) NOT NULL, - event_type VARCHAR(255) NOT NULL, - event_data JSONB NOT NULL, - metadata JSONB DEFAULT '{}', - version BIGINT NOT NULL, - global_position BIGSERIAL, - 
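    -- Note: BIGSERIAL positions are assigned at insert time, not commit time,
    -- so concurrent writers can commit out of order and readers may observe
    -- gaps in global_position; polling subscribers should tolerate both.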
created_at TIMESTAMPTZ DEFAULT NOW(),
-
-    CONSTRAINT unique_stream_version UNIQUE (stream_id, version)
-);
-
--- Index for stream queries
-CREATE INDEX idx_events_stream_id ON events(stream_id, version);
-
--- Index for global subscription
-CREATE INDEX idx_events_global_position ON events(global_position);
-
--- Index for event type queries
-CREATE INDEX idx_events_event_type ON events(event_type);
-
--- Index for time-based queries
-CREATE INDEX idx_events_created_at ON events(created_at);
-
--- Snapshots table
-CREATE TABLE snapshots (
-    stream_id VARCHAR(255) PRIMARY KEY,
-    stream_type VARCHAR(255) NOT NULL,
-    snapshot_data JSONB NOT NULL,
-    version BIGINT NOT NULL,
-    created_at TIMESTAMPTZ DEFAULT NOW()
-);
-
--- Subscriptions checkpoint table
-CREATE TABLE subscription_checkpoints (
-    subscription_id VARCHAR(255) PRIMARY KEY,
-    last_position BIGINT NOT NULL DEFAULT 0,
-    updated_at TIMESTAMPTZ DEFAULT NOW()
-);
-```
-
-### Template 2: Python Event Store Implementation
-
-```python
-from dataclasses import dataclass, field
-from datetime import datetime
-from typing import Any, Optional, List
-from uuid import UUID, uuid4
-import asyncio
-import json
-import asyncpg
-
-@dataclass
-class Event:
-    stream_id: str
-    event_type: str
-    data: dict
-    metadata: dict = field(default_factory=dict)
-    event_id: UUID = field(default_factory=uuid4)
-    version: Optional[int] = None
-    global_position: Optional[int] = None
-    created_at: datetime = field(default_factory=datetime.utcnow)
-
-
-class EventStore:
-    def __init__(self, pool: asyncpg.Pool):
-        self.pool = pool
-
-    async def append_events(
-        self,
-        stream_id: str,
-        stream_type: str,
-        events: List[Event],
-        expected_version: Optional[int] = None
-    ) -> List[Event]:
-        """Append events to a stream with optimistic concurrency."""
-        async with self.pool.acquire() as conn:
-            async with conn.transaction():
-                # Check expected version
-                if expected_version is not None:
-                    current = await conn.fetchval(
-                        "SELECT MAX(version) FROM events WHERE
stream_id = $1", - stream_id - ) - current = current or 0 - if current != expected_version: - raise ConcurrencyError( - f"Expected version {expected_version}, got {current}" - ) - - # Get starting version - start_version = await conn.fetchval( - "SELECT COALESCE(MAX(version), 0) + 1 FROM events WHERE stream_id = $1", - stream_id - ) - - # Insert events - saved_events = [] - for i, event in enumerate(events): - event.version = start_version + i - row = await conn.fetchrow( - """ - INSERT INTO events (id, stream_id, stream_type, event_type, - event_data, metadata, version, created_at) - VALUES ($1, $2, $3, $4, $5, $6, $7, $8) - RETURNING global_position - """, - event.event_id, - stream_id, - stream_type, - event.event_type, - json.dumps(event.data), - json.dumps(event.metadata), - event.version, - event.created_at - ) - event.global_position = row['global_position'] - saved_events.append(event) - - return saved_events - - async def read_stream( - self, - stream_id: str, - from_version: int = 0, - limit: int = 1000 - ) -> List[Event]: - """Read events from a stream.""" - async with self.pool.acquire() as conn: - rows = await conn.fetch( - """ - SELECT id, stream_id, event_type, event_data, metadata, - version, global_position, created_at - FROM events - WHERE stream_id = $1 AND version >= $2 - ORDER BY version - LIMIT $3 - """, - stream_id, from_version, limit - ) - return [self._row_to_event(row) for row in rows] - - async def read_all( - self, - from_position: int = 0, - limit: int = 1000 - ) -> List[Event]: - """Read all events globally.""" - async with self.pool.acquire() as conn: - rows = await conn.fetch( - """ - SELECT id, stream_id, event_type, event_data, metadata, - version, global_position, created_at - FROM events - WHERE global_position > $1 - ORDER BY global_position - LIMIT $2 - """, - from_position, limit - ) - return [self._row_to_event(row) for row in rows] - - async def subscribe( - self, - subscription_id: str, - handler, - from_position: int = 0, 
- batch_size: int = 100 - ): - """Subscribe to all events from a position.""" - # Get checkpoint - async with self.pool.acquire() as conn: - checkpoint = await conn.fetchval( - """ - SELECT last_position FROM subscription_checkpoints - WHERE subscription_id = $1 - """, - subscription_id - ) - position = checkpoint or from_position - - while True: - events = await self.read_all(position, batch_size) - if not events: - await asyncio.sleep(1) # Poll interval - continue - - for event in events: - await handler(event) - position = event.global_position - - # Save checkpoint - async with self.pool.acquire() as conn: - await conn.execute( - """ - INSERT INTO subscription_checkpoints (subscription_id, last_position) - VALUES ($1, $2) - ON CONFLICT (subscription_id) - DO UPDATE SET last_position = $2, updated_at = NOW() - """, - subscription_id, position - ) - - def _row_to_event(self, row) -> Event: - return Event( - event_id=row['id'], - stream_id=row['stream_id'], - event_type=row['event_type'], - data=json.loads(row['event_data']), - metadata=json.loads(row['metadata']), - version=row['version'], - global_position=row['global_position'], - created_at=row['created_at'] - ) - - -class ConcurrencyError(Exception): - """Raised when optimistic concurrency check fails.""" - pass -``` - -### Template 3: EventStoreDB Usage - -```python -from esdbclient import EventStoreDBClient, NewEvent, StreamState -import json - -# Connect -client = EventStoreDBClient(uri="esdb://localhost:2113?tls=false") - -# Append events -def append_events(stream_name: str, events: list, expected_revision=None): - new_events = [ - NewEvent( - type=event['type'], - data=json.dumps(event['data']).encode(), - metadata=json.dumps(event.get('metadata', {})).encode() - ) - for event in events - ] - - if expected_revision is None: - state = StreamState.ANY - elif expected_revision == -1: - state = StreamState.NO_STREAM - else: - state = expected_revision - - return client.append_to_stream( - 
stream_name=stream_name, - events=new_events, - current_version=state - ) - -# Read stream -def read_stream(stream_name: str, from_revision: int = 0): - events = client.get_stream( - stream_name=stream_name, - stream_position=from_revision - ) - return [ - { - 'type': event.type, - 'data': json.loads(event.data), - 'metadata': json.loads(event.metadata) if event.metadata else {}, - 'stream_position': event.stream_position, - 'commit_position': event.commit_position - } - for event in events - ] - -# Subscribe to all -async def subscribe_to_all(handler, from_position: int = 0): - subscription = client.subscribe_to_all(commit_position=from_position) - async for event in subscription: - await handler({ - 'type': event.type, - 'data': json.loads(event.data), - 'stream_id': event.stream_name, - 'position': event.commit_position - }) - -# Category projection ($ce-Category) -def read_category(category: str): - """Read all events for a category using system projection.""" - return read_stream(f"$ce-{category}") -``` - -### Template 4: DynamoDB Event Store - -```python -import boto3 -from boto3.dynamodb.conditions import Key -from datetime import datetime -import json -import uuid - -class DynamoEventStore: - def __init__(self, table_name: str): - self.dynamodb = boto3.resource('dynamodb') - self.table = self.dynamodb.Table(table_name) - - def append_events(self, stream_id: str, events: list, expected_version: int = None): - """Append events with conditional write for concurrency.""" - with self.table.batch_writer() as batch: - for i, event in enumerate(events): - version = (expected_version or 0) + i + 1 - item = { - 'PK': f"STREAM#{stream_id}", - 'SK': f"VERSION#{version:020d}", - 'GSI1PK': 'EVENTS', - 'GSI1SK': datetime.utcnow().isoformat(), - 'event_id': str(uuid.uuid4()), - 'stream_id': stream_id, - 'event_type': event['type'], - 'event_data': json.dumps(event['data']), - 'version': version, - 'created_at': datetime.utcnow().isoformat() - } - batch.put_item(Item=item) 
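        # Caveat: DynamoDB BatchWriteItem does not support condition expressions,
        # so this batch write cannot actually enforce expected_version. For real
        # optimistic concurrency, put_item each event with
        # ConditionExpression="attribute_not_exists(PK)", or use transact_write_items.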
- return events - - def read_stream(self, stream_id: str, from_version: int = 0): - """Read events from a stream.""" - response = self.table.query( - KeyConditionExpression=Key('PK').eq(f"STREAM#{stream_id}") & - Key('SK').gte(f"VERSION#{from_version:020d}") - ) - return [ - { - 'event_type': item['event_type'], - 'data': json.loads(item['event_data']), - 'version': item['version'] - } - for item in response['Items'] - ] - -# Table definition (CloudFormation/Terraform) -""" -DynamoDB Table: - - PK (Partition Key): String - - SK (Sort Key): String - - GSI1PK, GSI1SK for global ordering - -Capacity: On-demand or provisioned based on throughput needs -""" -``` - -## Best Practices - -### Do's - -- **Use stream IDs that include aggregate type** - `Order-{uuid}` -- **Include correlation/causation IDs** - For tracing -- **Version events from day one** - Plan for schema evolution -- **Implement idempotency** - Use event IDs for deduplication -- **Index appropriately** - For your query patterns - -### Don'ts - -- **Don't update or delete events** - They're immutable facts -- **Don't store large payloads** - Keep events small -- **Don't skip optimistic concurrency** - Prevents data corruption -- **Don't ignore backpressure** - Handle slow consumers - -## Resources - -- [EventStoreDB](https://www.eventstore.com/) -- [Marten Events](https://martendb.io/events/) -- [Event Sourcing Pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing) diff --git a/.claude/skills/frameworks-react/SKILL.md b/.claude/skills/frameworks-react/SKILL.md deleted file mode 100644 index 2cba2e523..000000000 --- a/.claude/skills/frameworks-react/SKILL.md +++ /dev/null @@ -1,838 +0,0 @@ ---- -name: react -description: Builds token-driven React components with TypeScript and modern patterns. Use when creating React component libraries, integrating CSS custom properties, or building Next.js design system components with forwardRef and composition. 
---- - -# React Component Patterns - -## Overview - -Build accessible, token-driven React components following modern patterns. Covers styling approaches, TypeScript integration, composition patterns, and how to consume design tokens from the token skills. - -## When to Use - -- Creating a React component library -- Building components that use design tokens -- Setting up a design system in React/Next.js -- Converting designs to React components - -## The Process - -1. **Identify component type**: Primitive, composite, or layout? -2. **Choose styling approach**: CSS Modules, Tailwind, styled-components, or CSS-in-JS? -3. **Define props interface**: TypeScript types with sensible defaults -4. **Implement with tokens**: Use CSS custom properties or theme context -5. **Add accessibility**: ARIA, keyboard handling, focus management -6. **Export properly**: Named exports, types, and variants - -## Styling Approaches - -| Approach | When to Use | Token Integration | -|----------|-------------|-------------------| -| CSS Modules | Build-time CSS, SSR-friendly | Import CSS with `var(--token)` | -| Tailwind | Utility-first, rapid development | Extend config with tokens | -| styled-components | Runtime theming, dynamic styles | ThemeProvider with tokens | -| Vanilla Extract | Type-safe, zero-runtime | Import tokens as TS objects | -| CSS Custom Properties | Framework-agnostic, simple | Direct `var(--token)` usage | - -## Project Structure - -``` -src/ -├── components/ -│ ├── primitives/ # Base components -│ │ ├── Button/ -│ │ │ ├── Button.tsx -│ │ │ ├── Button.module.css -│ │ │ ├── Button.test.tsx -│ │ │ └── index.ts -│ │ ├── Input/ -│ │ └── Text/ -│ ├── composite/ # Composed components -│ │ ├── Card/ -│ │ ├── Modal/ -│ │ └── Dropdown/ -│ └── layout/ # Layout components -│ ├── Stack/ -│ ├── Grid/ -│ └── Container/ -├── tokens/ -│ ├── colors.css -│ ├── spacing.css -│ └── index.css -├── hooks/ -│ └── useTheme.ts -└── index.ts # Public exports -``` - -## Component Patterns - 
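The styling-approach table above mentions importing tokens as TS objects (the Vanilla Extract row) and direct `var(--token)` usage. A minimal sketch of that idea, assuming the CSS custom properties used elsewhere in this guide (`--spacing-md`, `--color-primary-500`, ...) are defined; the module and helper names here are hypothetical, not part of any library:

```typescript
// Hypothetical token module: a central map from TS-friendly token names
// to the CSS custom properties defined in tokens/*.css.
export const tokens = {
  colorPrimary500: '--color-primary-500',
  spacingMd: '--spacing-md',
  radiusMd: '--radius-md',
} as const;

export type TokenName = keyof typeof tokens;

// Build a var() reference, optionally with a fallback value.
export function cssVar(name: TokenName, fallback?: string): string {
  const prop = tokens[name];
  return fallback ? `var(${prop}, ${fallback})` : `var(${prop})`;
}
```

A component can then write `style={{ borderRadius: cssVar('radiusMd') }}` and stay decoupled from raw variable names, with typos caught at compile time.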
-### Button Component
-
-**Button.tsx:**
-```tsx
-import { forwardRef, type ButtonHTMLAttributes, type ReactNode } from 'react';
-import styles from './Button.module.css';
-import { clsx } from 'clsx';
-
-export interface ButtonProps extends ButtonHTMLAttributes<HTMLButtonElement> {
-  /** Visual style variant */
-  variant?: 'primary' | 'secondary' | 'ghost' | 'danger';
-  /** Size of the button */
-  size?: 'sm' | 'md' | 'lg';
-  /** Full width button */
-  fullWidth?: boolean;
-  /** Loading state - disables button and shows spinner */
-  loading?: boolean;
-  /** Icon before text */
-  leftIcon?: ReactNode;
-  /** Icon after text */
-  rightIcon?: ReactNode;
-  children: ReactNode;
-}
-
-export const Button = forwardRef<HTMLButtonElement, ButtonProps>(
-  (
-    {
-      variant = 'primary',
-      size = 'md',
-      fullWidth = false,
-      loading = false,
-      leftIcon,
-      rightIcon,
-      disabled,
-      className,
-      children,
-      ...props
-    },
-    ref
-  ) => {
-    const isDisabled = disabled || loading;
-
-    return (
-      <button
-        ref={ref}
-        className={clsx(
-          styles.button,
-          styles[variant],
-          styles[size],
-          fullWidth && styles.fullWidth,
-          loading && styles.loading,
-          className
-        )}
-        disabled={isDisabled}
-        {...props}
-      >
-        {loading && <span className={styles.spinner} aria-hidden="true" />}
-        {leftIcon && <span className={styles.icon}>{leftIcon}</span>}
-        <span className={styles.label}>{children}</span>
-        {rightIcon && <span className={styles.icon}>{rightIcon}</span>}
-      </button>
-    );
-  }
-);
-
-Button.displayName = 'Button';
-```
-
-**Button.module.css:**
-```css
-.button {
-  /* Layout */
-  display: inline-flex;
-  align-items: center;
-  justify-content: center;
-  gap: var(--spacing-xs);
-
-  /* Typography */
-  font-family: inherit;
-  font-weight: 500;
-  line-height: 1;
-  white-space: nowrap;
-
-  /* Interaction */
-  cursor: pointer;
-  user-select: none;
-  transition:
-    background-color 150ms ease,
-    border-color 150ms ease,
-    transform 100ms ease;
-
-  /* Reset */
-  border: 1px solid transparent;
-  border-radius: var(--radius-md);
-}
-
-.button:focus-visible {
-  outline: 2px solid var(--color-primary-500);
-  outline-offset: 2px;
-}
-
-.button:active:not(:disabled) {
-  transform: scale(0.98);
-}
-
-.button:disabled {
-  cursor: not-allowed;
-  opacity: 0.5;
-}
-
-/* Variants */
-.primary {
-  background-color: var(--color-primary-500);
-  color: white;
-}
-
-.primary:hover:not(:disabled) {
-  background-color: var(--color-primary-600);
-}
-
-.secondary {
-  background-color: transparent;
-  border-color: var(--color-gray-300);
color: var(--color-gray-700); -} - -.secondary:hover:not(:disabled) { - background-color: var(--color-gray-50); - border-color: var(--color-gray-400); -} - -.ghost { - background-color: transparent; - color: var(--color-gray-700); -} - -.ghost:hover:not(:disabled) { - background-color: var(--color-gray-100); -} - -.danger { - background-color: var(--color-error-500); - color: white; -} - -.danger:hover:not(:disabled) { - background-color: var(--color-error-600); -} - -/* Sizes */ -.sm { - height: 32px; - padding: 0 var(--spacing-sm); - font-size: var(--text-sm); -} - -.md { - height: 40px; - padding: 0 var(--spacing-md); - font-size: var(--text-base); -} - -.lg { - height: 48px; - padding: 0 var(--spacing-lg); - font-size: var(--text-lg); -} - -/* Modifiers */ -.fullWidth { - width: 100%; -} - -.loading .label { - opacity: 0; -} - -.spinner { - position: absolute; - width: 1em; - height: 1em; - border: 2px solid currentColor; - border-right-color: transparent; - border-radius: 50%; - animation: spin 600ms linear infinite; -} - -@keyframes spin { - to { transform: rotate(360deg); } -} - -.icon { - display: flex; - flex-shrink: 0; -} -``` - ---- - -### Input Component - -**Input.tsx:** -```tsx -import { forwardRef, type InputHTMLAttributes, type ReactNode } from 'react'; -import styles from './Input.module.css'; -import { clsx } from 'clsx'; - -export interface InputProps extends Omit, 'size'> { - /** Label text */ - label?: string; - /** Helper text below input */ - helperText?: string; - /** Error message - sets error state */ - error?: string; - /** Size variant */ - size?: 'sm' | 'md' | 'lg'; - /** Icon/element at start */ - startAdornment?: ReactNode; - /** Icon/element at end */ - endAdornment?: ReactNode; - /** Full width */ - fullWidth?: boolean; -} - -export const Input = forwardRef( - ( - { - label, - helperText, - error, - size = 'md', - startAdornment, - endAdornment, - fullWidth = false, - disabled, - id, - className, - ...props - }, - ref - ) => { - 
const inputId = id || `input-${Math.random().toString(36).slice(2, 9)}`;
-    const helperId = `${inputId}-helper`;
-    const errorId = `${inputId}-error`;
-
-    return (
-      <div className={clsx(styles.wrapper, fullWidth && styles.fullWidth, className)}>
-        {label && (
-          <label htmlFor={inputId} className={styles.label}>
-            {label}
-          </label>
-        )}
-
-        <div className={clsx(styles.field, styles[size], error && styles.error)}>
-          {startAdornment && (
-            <span className={styles.adornment}>{startAdornment}</span>
-          )}
-
-          <input
-            ref={ref}
-            id={inputId}
-            className={styles.input}
-            disabled={disabled}
-            aria-invalid={error ? true : undefined}
-            aria-describedby={error ? errorId : helperText ? helperId : undefined}
-            {...props}
-          />
-
-          {endAdornment && (
-            <span className={styles.adornment}>{endAdornment}</span>
-          )}
-        </div>
-
-        {error && (
-          <span id={errorId} className={styles.errorText} role="alert">
-            {error}
-          </span>
-        )}
-
-        {helperText && !error && (
-          <span id={helperId} className={styles.helperText}>
-            {helperText}
-          </span>
-        )}
-    </div>
-    );
-  }
-);
-
-Input.displayName = 'Input';
-```
-
----
-
-### Stack Layout Component
-
-**Stack.tsx:**
-```tsx
-import { forwardRef, type HTMLAttributes, type ElementType, type CSSProperties } from 'react';
-import styles from './Stack.module.css';
-import { clsx } from 'clsx';
-
-type SpacingToken = 'none' | 'xs' | 'sm' | 'md' | 'lg' | 'xl' | '2xl';
-
-export interface StackProps extends HTMLAttributes<HTMLElement> {
-  /** HTML element or component to render */
-  as?: ElementType;
-  /** Direction of stacking */
-  direction?: 'row' | 'column';
-  /** Gap between items */
-  gap?: SpacingToken;
-  /** Horizontal alignment */
-  align?: 'start' | 'center' | 'end' | 'stretch' | 'baseline';
-  /** Vertical alignment (when row) or horizontal (when column) */
-  justify?: 'start' | 'center' | 'end' | 'between' | 'around' | 'evenly';
-  /** Wrap items */
-  wrap?: boolean;
-  /** Full width */
-  fullWidth?: boolean;
-}
-
-export const Stack = forwardRef<HTMLElement, StackProps>(
-  (
-    {
-      as: Component = 'div',
-      direction = 'column',
-      gap = 'md',
-      align = 'stretch',
-      justify = 'start',
-      wrap = false,
-      fullWidth = false,
-      className,
-      style,
-      ...props
-    },
-    ref
-  ) => {
-    return (
-      <Component
-        ref={ref}
-        className={clsx(
-          styles.stack,
-          styles[direction],
-          wrap && styles.wrap,
-          fullWidth && styles.fullWidth,
-          className
-        )}
-        style={
-          {
-            '--stack-gap': gap === 'none' ? '0' : `var(--spacing-${gap})`,
-            '--stack-align': alignMap[align],
-            '--stack-justify': justifyMap[justify],
-            ...style,
-          } as CSSProperties
-        }
-        {...props}
-      />
-    );
-  }
-);
-
-const alignMap = {
-  start: 'flex-start',
-  center: 'center',
-  end: 'flex-end',
-  stretch: 'stretch',
-  baseline: 'baseline',
-};
-
-const justifyMap = {
-  start: 'flex-start',
-  center: 'center',
-  end: 'flex-end',
-  between: 'space-between',
-  around: 'space-around',
-  evenly: 'space-evenly',
-};
-
-Stack.displayName = 'Stack';
-```
-
-**Stack.module.css:**
-```css
-.stack {
-  display: flex;
-  gap: var(--stack-gap, var(--spacing-md));
-  align-items: var(--stack-align, stretch);
-  justify-content: var(--stack-justify, flex-start);
-}
-
-.column {
-  flex-direction: column;
-}
-
-.row {
-  flex-direction: row;
-}
-
-.wrap {
-  flex-wrap: wrap;
-}
-
-.fullWidth {
-  width: 100%;
-}
-```
-
----
-
-### Card Component
-
-**Card.tsx:**
-```tsx
-import { forwardRef, type HTMLAttributes, type ReactNode } from 'react';
-import styles from
'./Card.module.css';
-import { clsx } from 'clsx';
-
-export interface CardProps extends HTMLAttributes<HTMLElement> {
-  /** Padding size */
-  padding?: 'none' | 'sm' | 'md' | 'lg';
-  /** Shadow elevation */
-  elevation?: 'none' | 'sm' | 'md' | 'lg';
-  /** Border style */
-  variant?: 'elevated' | 'outlined' | 'filled';
-  /** Make card interactive (hover effects, cursor) */
-  interactive?: boolean;
-  /** As a link or button */
-  as?: 'div' | 'article' | 'section' | 'a' | 'button';
-}
-
-export const Card = forwardRef<HTMLElement, CardProps>(
-  (
-    {
-      padding = 'md',
-      elevation = 'sm',
-      variant = 'elevated',
-      interactive = false,
-      as: Component = 'div',
-      className,
-      children,
-      ...props
-    },
-    ref
-  ) => {
-    return (
-      <Component
-        ref={ref}
-        className={clsx(
-          styles.card,
-          styles[variant],
-          styles[`padding-${padding}`],
-          styles[`elevation-${elevation}`],
-          interactive && styles.interactive,
-          className
-        )}
-        {...props}
-      >
-        {children}
-      </Component>
-    );
-  }
-);
-
-Card.displayName = 'Card';
-
-// Sub-components
-export const CardHeader = ({ className, ...props }: HTMLAttributes<HTMLDivElement>) => (
-  <div className={clsx(styles.header, className)} {...props} />
-);
-
-export const CardBody = ({ className, ...props }: HTMLAttributes<HTMLDivElement>) => (
-  <div className={clsx(styles.body, className)} {...props} />
-);
-
-export const CardFooter = ({ className, ...props }: HTMLAttributes<HTMLDivElement>) => (
-  <div className={clsx(styles.footer, className)} {...props} />
-);
-```
-
----
-
-## Theme Context Pattern
-
-**ThemeProvider.tsx:**
-```tsx
-import { createContext, useContext, useState, useEffect, type ReactNode } from 'react';
-
-type Theme = 'light' | 'dark' | 'system';
-
-interface ThemeContextValue {
-  theme: Theme;
-  resolvedTheme: 'light' | 'dark';
-  setTheme: (theme: Theme) => void;
-}
-
-const ThemeContext = createContext<ThemeContextValue | null>(null);
-
-export function ThemeProvider({ children }: { children: ReactNode }) {
-  const [theme, setTheme] = useState<Theme>('system');
-  const [resolvedTheme, setResolvedTheme] = useState<'light' | 'dark'>('light');
-
-  useEffect(() => {
-    const root = document.documentElement;
-
-    if (theme === 'system') {
-      const mediaQuery = window.matchMedia('(prefers-color-scheme: dark)');
-      const handleChange = () => {
-        const resolved = mediaQuery.matches ? 'dark' : 'light';
-        setResolvedTheme(resolved);
-        root.dataset.theme = resolved;
-      };
-      handleChange();
-      mediaQuery.addEventListener('change', handleChange);
-      return () => mediaQuery.removeEventListener('change', handleChange);
-    } else {
-      setResolvedTheme(theme);
-      root.dataset.theme = theme;
-    }
-  }, [theme]);
-
-  return (
-    <ThemeContext.Provider value={{ theme, resolvedTheme, setTheme }}>
-      {children}
-    </ThemeContext.Provider>
-  );
-}
-
-export function useTheme() {
-  const context = useContext(ThemeContext);
-  if (!context) throw new Error('useTheme must be used within ThemeProvider');
-  return context;
-}
-```
-
-**Theme tokens in CSS:**
-```css
-:root,
-[data-theme="light"] {
-  --color-background: var(--color-white);
-  --color-foreground: var(--color-gray-900);
-  --color-muted: var(--color-gray-500);
-  --color-border: var(--color-gray-200);
-  --color-surface: var(--color-gray-50);
-}
-
-[data-theme="dark"] {
-  --color-background: var(--color-gray-950);
-  --color-foreground: var(--color-gray-50);
-  --color-muted: var(--color-gray-400);
-  --color-border: var(--color-gray-800);
-  --color-surface: var(--color-gray-900);
-}
-```
-
----
-
-## Compound Component Pattern
-
-**Select.tsx:**
-```tsx
-import {
-  createContext,
-  useContext,
-  useState,
-  type ReactNode,
-} from 'react';
-
-interface SelectContextValue {
-  value: string;
-  onChange: (value: string) => void;
-  open: boolean;
-  setOpen: (open: boolean) => void;
-}
-
-const SelectContext = createContext<SelectContextValue | null>(null);
-
-interface SelectProps {
-  value: string;
-  onChange: (value: string) => void;
-  children: ReactNode;
-}
-
-export function Select({ value, onChange, children }: SelectProps) {
-  const [open, setOpen] = useState(false);
-
-  return (
-    <SelectContext.Provider value={{ value, onChange, open, setOpen }}>
-      <div className="select">
-        {children}
-      </div>
-    </SelectContext.Provider>
-  );
-}
-
-Select.Trigger = function SelectTrigger({ children }: { children: ReactNode }) {
-  const ctx = useContext(SelectContext)!;
-  return (
-    <button
-      type="button"
-      aria-haspopup="listbox"
-      aria-expanded={ctx.open}
-      onClick={() => ctx.setOpen(!ctx.open)}
-    >
-      {children}
-    </button>
-  );
-};
-
-Select.Content = function SelectContent({ children }: { children: ReactNode }) {
-  const ctx = useContext(SelectContext)!;
-  if (!ctx.open) return null;
-  return (
-    <ul role="listbox">
-      {children}
-    </ul>
-  );
-};
-
-Select.Option = function SelectOption({
-  value,
-  children,
-}: {
-  value: string;
-  children: ReactNode;
-}) {
-  const ctx = useContext(SelectContext)!;
-  const selected = ctx.value === value;
-  return (
-    <li
-      role="option"
-      aria-selected={selected}
-      onClick={() => {
-        ctx.onChange(value);
-        ctx.setOpen(false);
-      }}
-    >
-      {children}
-    </li>
-  );
-};
-```
-
----
-
-## Hook Patterns
-
-**useControllable (controlled/uncontrolled state):**
-```tsx
-import { useState, useCallback } from 'react';
-
-export function useControllable<T>({
-  value,
-  defaultValue,
-  onChange,
-}: {
-  value?: T;
-  defaultValue: T;
-  onChange?: (value: T) => void;
-}) {
-  const [internalValue, setInternalValue] = useState(defaultValue);
-  const isControlled = value !== undefined;
-  const currentValue = isControlled ? value : internalValue;
-
-  const setValue = useCallback(
-    (next: T | ((prev: T) => T)) => {
-      const nextValue = typeof next === 'function' ? (next as (prev: T) => T)(currentValue) : next;
-      if (!isControlled) setInternalValue(nextValue);
-      onChange?.(nextValue);
-    },
-    [isControlled, currentValue, onChange]
-  );
-
-  return [currentValue, setValue] as const;
-}
-```
-
-**useMediaQuery:**
-```tsx
-import { useState, useEffect } from 'react';
-
-export function useMediaQuery(query: string): boolean {
-  const [matches, setMatches] = useState(false);
-
-  useEffect(() => {
-    const mediaQuery = window.matchMedia(query);
-    setMatches(mediaQuery.matches);
-
-    const handler = (e: MediaQueryListEvent) => setMatches(e.matches);
-    mediaQuery.addEventListener('change', handler);
-    return () => mediaQuery.removeEventListener('change', handler);
-  }, [query]);
-
-  return matches;
-}
-
-// Usage with breakpoint tokens
-const isMobile = useMediaQuery('(max-width: 767px)');
-const isTablet = useMediaQuery('(min-width: 768px) and (max-width: 1023px)');
-const isDesktop = useMediaQuery('(min-width: 1024px)');
-```
-
----
-
-## Export Pattern
-
-**index.ts (component barrel):**
-```tsx
-// Components
-export { Button, type ButtonProps } from './components/primitives/Button';
-export { Input, type InputProps } from './components/primitives/Input';
-export { Card, CardHeader, CardBody, CardFooter, type CardProps } from './components/composite/Card';
-export { Stack, type StackProps } from './components/layout/Stack';
-
-// Hooks
-export {
useTheme } from './hooks/useTheme';
-export { useControllable } from './hooks/useControllable';
-export { useMediaQuery } from './hooks/useMediaQuery';
-
-// Context
-export { ThemeProvider } from './providers/ThemeProvider';
-
-// Types
-export type { Theme } from './types';
-```
-
----
-
-## Testing Patterns
-
-**Button.test.tsx:**
-```tsx
-import { render, screen } from '@testing-library/react';
-import userEvent from '@testing-library/user-event';
-import { Button } from './Button';
-
-describe('Button', () => {
-  it('renders children', () => {
-    render(<Button>Click me</Button>);
-    expect(screen.getByRole('button', { name: 'Click me' })).toBeInTheDocument();
-  });
-
-  it('handles click events', async () => {
-    const onClick = vi.fn();
-    render(<Button onClick={onClick}>Click</Button>);
-    await userEvent.click(screen.getByRole('button'));
-    expect(onClick).toHaveBeenCalledTimes(1);
-  });
-
-  it('is disabled when loading', () => {
-    render(<Button loading>Save</Button>);
-    expect(screen.getByRole('button')).toBeDisabled();
-    expect(screen.getByRole('button')).toHaveAttribute('aria-busy', 'true');
-  });
-
-  it('applies variant classes', () => {
-    render(<Button variant="danger">Delete</Button>);
-    expect(screen.getByRole('button')).toHaveClass('danger');
-  });
-});
-```
diff --git a/.claude/skills/frontend-design/SKILL.md b/.claude/skills/frontend-design/SKILL.md
deleted file mode 100644
index 600b6db41..000000000
--- a/.claude/skills/frontend-design/SKILL.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-name: frontend-design
-description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, or applications. Generates creative, polished code that avoids generic AI aesthetics.
-license: Complete terms in LICENSE.txt
----
-
-This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.
- -The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints. - -## Design Thinking - -Before coding, understand the context and commit to a BOLD aesthetic direction: -- **Purpose**: What problem does this interface solve? Who uses it? -- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, etc. There are so many flavors to choose from. Use these for inspiration but design one that is true to the aesthetic direction. -- **Constraints**: Technical requirements (framework, performance, accessibility). -- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember? - -**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work - the key is intentionality, not intensity. - -Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is: -- Production-grade and functional -- Visually striking and memorable -- Cohesive with a clear aesthetic point-of-view -- Meticulously refined in every detail - -## Frontend Aesthetics Guidelines - -Focus on: -- **Typography**: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive choices that elevate the frontend's aesthetics; unexpected, characterful font choices. Pair a distinctive display font with a refined body font. -- **Color & Theme**: Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes. -- **Motion**: Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use Motion library for React when available. 
Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise. -- **Spatial Composition**: Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density. -- **Backgrounds & Visual Details**: Create atmosphere and depth rather than defaulting to solid colors. Add contextual effects and textures that match the overall aesthetic. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays. - -NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character. - -Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations. - -**IMPORTANT**: Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details. Elegance comes from executing the vision well. - -Remember: Claude is capable of extraordinary creative work. Don't hold back, show what can truly be created when thinking outside the box and committing fully to a distinctive vision. 
\ No newline at end of file diff --git a/.claude/skills/guiding-users/SKILL.md b/.claude/skills/guiding-users/SKILL.md deleted file mode 100644 index b51fce19c..000000000 --- a/.claude/skills/guiding-users/SKILL.md +++ /dev/null @@ -1,433 +0,0 @@ ---- -name: guiding-users -description: Implements onboarding and help systems including product tours, interactive tutorials, tooltips, checklists, help panels, and progressive disclosure patterns. Use when building first-time experiences, feature discovery, guided walkthroughs, contextual help, setup flows, or user activation features. Provides timing strategies, accessibility patterns (keyboard, screen readers, reduced motion), and metrics for measuring onboarding success. ---- - -# Guiding Users Through Onboarding and Help Systems - -## Purpose - -This skill provides systematic patterns for onboarding users and delivering contextual help, from first-time product tours to ongoing feature discovery. It covers the complete spectrum of user guidance mechanisms, ensuring optimal user activation, feature adoption, and self-service support. 
- -## When to Use - -Activate this skill when: -- Building first-time user experiences or product tours -- Implementing feature discovery and announcements -- Creating interactive tutorials or guided tasks -- Adding tooltips, hints, or contextual help -- Designing setup flows or completion checklists -- Building help panels or documentation systems -- Implementing progressive disclosure patterns -- Measuring onboarding effectiveness and user activation -- Ensuring onboarding accessibility - -## Quick Decision Framework - -Select the appropriate guidance mechanism based on user state and content type: - -``` -First-time user → Product Tour (step-by-step) -New feature launch → Feature Spotlight (tooltip + animation) -Complex workflow → Interactive Tutorial (guided tasks) -Account setup → Checklist (progress tracking) -Contextual help needed → Tooltip/Hint system -Ongoing support → Help Panel (sidebar/searchable) -Feature unlock → Progressive Disclosure -``` - -Reference `references/selection-framework.md` for detailed selection criteria. - -## Core Guidance Mechanisms - -### Product Tours - -Step-by-step walkthroughs that guide users through key features: -- Sequential spotlights with modal overlays -- Progress indicators (Step 2 of 5) -- Skip, Previous, and Next controls -- Dismiss and resume capability -- Context-sensitive activation - -**Implementation:** -```bash -npm install react-joyride -``` - -See `examples/first-time-tour.tsx` for complete implementation. -Reference `references/product-tours.md` for patterns and best practices. - -### Feature Spotlights - -Announce new features to existing users: -- Pulsing hotspot animations -- Contextual tooltip with arrow -- "Got it" acknowledgment -- Auto-dismiss after first view -- Non-blocking overlay - -See `examples/feature-spotlight.tsx` for implementation. -Reference `references/tooltips-hints.md` for patterns. 
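
The quick decision framework above maps user state to a guidance mechanism. As a hedged sketch, that mapping can live in one small pure function; the state flags and the `GuidanceMechanism` names here are illustrative assumptions for the example, not an API this skill actually ships:

```typescript
// Illustrative only: flag names and mechanism names are assumptions,
// not part of this skill's real API.
type GuidanceMechanism =
  | 'product-tour'
  | 'feature-spotlight'
  | 'interactive-tutorial'
  | 'checklist'
  | 'tooltip'
  | 'help-panel';

interface UserState {
  isFirstTimeUser: boolean;
  inAccountSetup: boolean;
  inComplexWorkflow: boolean;
  newFeatureLaunched: boolean;
  requestedHelp: boolean;
}

function pickGuidanceMechanism(state: UserState): GuidanceMechanism {
  // Order mirrors the framework: first-time users always get the tour first.
  if (state.isFirstTimeUser) return 'product-tour';
  if (state.inAccountSetup) return 'checklist';
  if (state.inComplexWorkflow) return 'interactive-tutorial';
  if (state.newFeatureLaunched) return 'feature-spotlight';
  if (state.requestedHelp) return 'help-panel';
  // Default: lightweight contextual help only.
  return 'tooltip';
}
```

Centralizing the choice this way keeps "which mechanism fires when" testable in one place instead of scattering the rules across components.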
- -### Interactive Tutorials - -Guided task completion with validation: -- "Complete these tasks to get started" -- Checkbox completion tracking -- Celebration animations on completion -- Sandbox mode with sample data -- Undo and reset capabilities - -See `examples/guided-tutorial.tsx` for implementation. -Reference `references/interactive-tutorials.md` for patterns. - -### Setup Checklists - -Track multi-step onboarding progress: -- Visual progress indicators (3/4 complete) -- Direct links to each task -- Profile completion percentages -- Achievement badges and gamification -- Persistent until completed - -See `examples/setup-checklist.tsx` for implementation. -Reference `references/checklists.md` for patterns. - -### Contextual Tooltips and Hints - -Just-in-time help when users need it: -- Hover or click-triggered tooltips -- Progressive hint levels (1, 2, 3) -- "Need help?" assistance triggers -- Context-aware suggestions -- Keyboard-accessible - -See `examples/contextual-help.tsx` for implementation. -Reference `references/tooltips-hints.md` for complete patterns. - -### Help Panels - -Comprehensive help systems: -- Sidebar or drawer interface -- Contextual help based on current page -- Search help articles and docs -- Video tutorials and demos -- Contact support integration -- Collapsible and resizable - -See `examples/help-panel.tsx` for implementation. -Reference `references/help-systems.md` for patterns. 
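
The progress indicators described above ("3/4 complete", completion percentages, persistence until done) reduce to one small computation. A minimal sketch, with an assumed task shape that is illustrative rather than this skill's actual data model:

```typescript
// Assumed task shape for illustration; real checklists may track more state.
interface ChecklistTask {
  id: string;
  label: string;
  done: boolean;
}

function checklistProgress(tasks: ChecklistTask[]) {
  const completed = tasks.filter((task) => task.done).length;
  const total = tasks.length;
  return {
    completed,
    total,
    // Guard against division by zero for an empty checklist.
    percent: total === 0 ? 0 : Math.round((completed / total) * 100),
    label: `${completed}/${total} complete`, // e.g. "3/4 complete"
    // A checklist stays "persistent until completed"; this flag drives dismissal.
    isDone: total > 0 && completed === total,
  };
}
```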
- -## Timing and Triggering Strategies - -### When to Show Onboarding - -Appropriate triggers: -- First login (always) -- Immediately after signup -- New feature launch (to existing users) -- User appears stuck (smart triggering based on inactivity) -- User explicitly requests help - -### When NOT to Show Onboarding - -Avoid showing when: -- User is mid-task or focused -- Shown in every session (becomes annoying) -- Before allowing free exploration -- Tour exceeds 7 steps (too long) -- User already dismissed or completed - -**Auto-dismiss timing:** -- Simple tooltips: 5-7 seconds -- Feature announcements: 10-15 seconds or manual dismiss -- Tours: User-controlled, no auto-dismiss -- Persistent hints: Until user acknowledges - -Reference `references/timing-strategies.md` for detailed guidelines. - -## Progressive Disclosure Patterns - -Show only what's needed, when it's needed: - -**Techniques:** -1. **Accordion Help**: Collapsed by default, expand for details -2. **"Learn More" Links**: Deep dive content optional -3. **Advanced Settings**: Hidden behind "Show advanced" toggle -4. **Gradual Feature Introduction**: Unlock features as user progresses -5. **Contextual Hints**: Show based on user actions - -Reference `references/progressive-disclosure.md` for implementation patterns. 
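
The auto-dismiss timings above are easiest to keep consistent when they live in a single lookup rather than being hard-coded per surface. A sketch; the kind names are illustrative, and `null` stands for user-controlled dismissal:

```typescript
type GuidanceKind =
  | 'simple-tooltip'
  | 'feature-announcement'
  | 'tour'
  | 'persistent-hint';

// Values mirror the guidance in this section; `null` = never auto-dismiss.
const AUTO_DISMISS_MS: Record<GuidanceKind, number | null> = {
  'simple-tooltip': 6_000,        // 5-7 seconds
  'feature-announcement': 12_000, // 10-15 seconds, or manual dismiss
  tour: null,                     // user-controlled, no auto-dismiss
  'persistent-hint': null,        // stays until the user acknowledges it
};

function autoDismissMs(kind: GuidanceKind): number | null {
  return AUTO_DISMISS_MS[kind];
}
```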
- -## Accessibility Requirements - -### Keyboard Navigation - -Essential keyboard support: -- Tab through tour steps and controls -- ESC to dismiss tours and tooltips -- Arrow keys for Previous/Next navigation -- Enter/Space to activate buttons -- Focus visible indicators - -### Screen Reader Support - -ARIA patterns for announcements: -- Announce step number and total (Step 2 of 5) -- Read tooltip and help content -- Describe highlighted UI elements -- Announce progress completion -- Alert on errors or blockers - -### Reduced Motion - -Respect `prefers-reduced-motion`: -- Disable pulsing animations -- Use instant transitions instead of animations -- Remove parallax and complex effects -- Maintain functionality without motion - -To validate accessibility: -```bash -node scripts/validate_accessibility.js -``` - -Reference `references/accessibility-patterns.md` for complete implementation. - -## Library Recommendations - -### Primary: react-joyride (Feature-Rich, Accessible) - -**Library:** `/gilbarbara/react-joyride` -**Trust Score:** 9.6/10 -**Code Snippets:** 29+ - -Best for comprehensive product tours: -- WAI-ARIA compliant out of the box -- Full keyboard navigation support -- Highly customizable styling -- Programmatic control -- Localization support -- Active maintenance - -```bash -npm install react-joyride -``` - -See `examples/joyride-tour.tsx` for complete setup. - -### Alternative: driver.js (Lightweight, Modern) - -Best for minimal bundle size: -- Vanilla JavaScript (framework agnostic) -- ~5KB gzipped -- Modern API design -- No dependencies - -```bash -npm install driver.js -``` - -### Alternative: intro.js (Classic, Proven) - -Best for traditional tours: -- Battle-tested library -- Wide browser support -- JSON-based tour configuration -- Extensive plugin ecosystem - -```bash -npm install intro.js -``` - -Reference `references/library-comparison.md` for detailed analysis and selection criteria. 
- -## Design Token Integration - -All onboarding components use the design-tokens skill for consistent theming: - -**Token categories used:** -- **Colors**: Tour spotlight, overlay, tooltip backgrounds, hotspot colors -- **Spacing**: Tour padding, tooltip spacing, arrow size -- **Typography**: Title sizes, body text, help content -- **Borders**: Border radius for modals and tooltips -- **Shadows**: Elevation for tour spotlights and tooltips -- **Motion**: Transition durations, pulse animations - -Supports light, dark, high-contrast, and custom brand themes. -Reference the design-tokens skill for complete theming documentation. - -## Measuring Success - -### Key Metrics - -Track these indicators: -- Tour completion rate (target: >60%) -- Time to first value (faster = better) -- Feature adoption rate post-tour -- Support ticket reduction -- User activation rate (completed key actions) -- Drop-off points in tours - -### Optimization Strategies - -Iterate based on data: -- A/B test tour length (shorter often better) -- Test different messaging and copy -- Measure drop-off at each step -- Simplify steps with high abandonment -- Add skip options for returning users -- Personalize based on user type - -To analyze onboarding metrics: -```bash -python scripts/analyze_onboarding_metrics.py -``` - -Reference `references/measuring-success.md` for complete analytics implementation. 
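
The completion-rate target and per-step drop-off analysis above can be computed from very simple tour events. A hedged sketch assuming one record per user with the last step they saw; a real pipeline would read these from an analytics store rather than an in-memory array:

```typescript
// Assumed event shape for illustration.
interface TourEvent {
  userId: string;
  lastStepSeen: number; // 0-based index of the last step shown
  completed: boolean;
}

function tourMetrics(events: TourEvent[], totalSteps: number) {
  const started = events.length;
  const completions = events.filter((event) => event.completed).length;
  // Users whose last seen step was `i` and who never finished: drop-off at step i.
  const dropOffByStep = Array.from({ length: totalSteps }, (_, i) =>
    events.filter((event) => !event.completed && event.lastStepSeen === i).length
  );
  const completionRate = started === 0 ? 0 : completions / started;
  return {
    completionRate,
    dropOffByStep,
    meetsTarget: started > 0 && completionRate > 0.6, // target from this section: >60%
  };
}
```

Steps with a high `dropOffByStep` count are the ones to simplify or cut first.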
- -## Anti-Patterns to Avoid - -Common mistakes that harm user experience: - -❌ **Forced Tours**: Requiring tour completion before product use -❌ **Too Long**: Tours exceeding 7 steps lose user attention -❌ **Every Session**: Showing same tour repeatedly -❌ **No Skip Option**: Preventing users from exploring independently -❌ **Wall of Text**: Using lengthy explanations instead of visuals -❌ **Blocking Everything**: Preventing interaction during tours -❌ **Premature Guidance**: Showing help before users explore -❌ **Poor Timing**: Interrupting focused work -❌ **No Context**: Generic tips without specific relevance - -## Implementation Workflow - -### Step 1: Map User Journey - -Identify key moments: -1. First login and account creation -2. Core value delivery (aha moment) -3. Feature discovery points -4. Potential confusion or abandonment -5. Achievement and progress milestones - -### Step 2: Choose Guidance Mechanisms - -Match mechanisms to moments: -- First login → Product tour (3-5 steps max) -- Core features → Interactive tutorial -- Setup requirements → Checklist -- New features → Spotlight + tooltip -- Ongoing help → Help panel - -### Step 3: Implement with Progressive Enhancement - -Build incrementally: -1. Start with essential guidance only -2. Add contextual help based on user behavior -3. Implement analytics to measure effectiveness -4. Iterate based on data -5. 
A/B test variations - -### Step 4: Test Accessibility - -Verify compliance: -- Keyboard navigation works completely -- Screen reader announces properly -- Reduced motion preference honored -- Focus management correct -- ARIA labels descriptive - -Run validation: -```bash -node scripts/validate_accessibility.js -``` - -### Step 5: Monitor and Optimize - -Track and improve: -- Monitor completion rates -- Identify drop-off points -- Gather user feedback -- A/B test improvements -- Update based on findings - -## Working Examples - -Start with the example matching the use case: - -``` -first-time-tour.tsx # Product walkthrough with react-joyride -feature-spotlight.tsx # New feature announcement -guided-tutorial.tsx # Interactive task completion -setup-checklist.tsx # Multi-step onboarding progress -contextual-help.tsx # Tooltips and progressive hints -help-panel.tsx # Sidebar help with search -celebration-animation.tsx # Completion feedback -``` - -## Resources - -### Scripts (Token-Free Execution) -- `scripts/generate_tour_config.js` - Generate tour configurations from user flows -- `scripts/analyze_onboarding_metrics.py` - Analyze completion and drop-off rates -- `scripts/validate_accessibility.js` - Test keyboard and screen reader support - -### References (Detailed Documentation) -- `references/product-tours.md` - Tour patterns, step design, navigation -- `references/interactive-tutorials.md` - Guided tasks and sandbox modes -- `references/tooltips-hints.md` - Contextual help and progressive hints -- `references/checklists.md` - Progress tracking and gamification -- `references/help-systems.md` - Help panels, videos, and documentation -- `references/progressive-disclosure.md` - Advanced patterns and feature unlocking -- `references/timing-strategies.md` - When and how to trigger guidance -- `references/accessibility-patterns.md` - WCAG compliance and ARIA patterns -- `references/measuring-success.md` - Analytics and optimization -- `references/library-comparison.md` 
- Detailed library evaluation -- `references/selection-framework.md` - Decision trees for choosing mechanisms - -### Examples (Implementation Code) -- Complete working implementations for all guidance types -- Integration examples with common frameworks -- Accessibility-compliant patterns -- Design token integration examples - -### Assets (Templates and Configs) -- `assets/celebration-animations/` - Success animations and confetti -- `assets/tour-templates.json` - Reusable tour configurations -- `assets/message-templates.json` - Tooltip and hint copy templates -- `assets/timing-config.json` - Recommended timing values - -## Cross-Skill Integration - -This skill works with other component skills: - -- **Forms**: Guided form completion, validation hints -- **Dashboards**: Feature tours, widget explanations -- **Tables**: Data grid tutorials, feature discovery -- **AI Chat**: Chat interface walkthroughs -- **Navigation**: Menu and navigation guidance -- **Feedback**: Success celebrations, progress notifications -- **Design Tokens**: All visual styling and theming - -## Key Principles - -1. **Respect User Time**: Keep tours under 7 steps, make skippable -2. **Show, Don't Tell**: Use visuals and interactions over text -3. **Progressive Enhancement**: Start simple, add guidance as needed -4. **Context is King**: Show help when and where it's relevant -5. **Measure Everything**: Track completion, iterate based on data -6. **Accessibility First**: Keyboard, screen reader, reduced motion support -7. **Celebrate Progress**: Acknowledge completion and achievements -8. **Allow Exploration**: Don't force tours, enable discovery - -## Next Steps - -1. Map the user journey and identify key moments -2. Choose appropriate guidance mechanisms for each moment -3. Install react-joyride or preferred library -4. Start with one critical flow (usually first-time experience) -5. Implement with accessibility built-in -6. Add analytics tracking -7. Test with real users -8. 
Iterate based on metrics and feedback diff --git a/.claude/skills/guiding-users/assets/message-templates.json b/.claude/skills/guiding-users/assets/message-templates.json deleted file mode 100644 index 196ee214d..000000000 --- a/.claude/skills/guiding-users/assets/message-templates.json +++ /dev/null @@ -1,34 +0,0 @@ -{ - "welcomeMessages": [ - "Welcome! Let's get you started.", - "Hi there! Ready to explore?", - "Welcome aboard! Let's take a quick tour." - ], - "completionMessages": [ - "🎉 You're all set! Ready to get started?", - "Great job! You've completed the tour.", - "Nice work! You're ready to go." - ], - "skipConfirmations": [ - "Skip the tour? You can restart it anytime from the Help menu.", - "Want to skip? You can always take the tour later." - ], - "progressMessages": [ - "Step {current} of {total}", - "{current}/{total}", - "Progress: {current} of {total} steps" - ], - "tooltips": { - "create": "Click here to create a new {item}", - "edit": "Edit your {item} here", - "delete": "Remove this {item}", - "save": "Save your changes", - "cancel": "Discard changes", - "help": "Need help? Click here for documentation" - }, - "hints": { - "level1": "Hint: {basic_suggestion}", - "level2": "Still stuck? 
Try {intermediate_suggestion}", - "level3": "Here's how: {detailed_explanation}" - } -} diff --git a/.claude/skills/guiding-users/assets/timing-config.json b/.claude/skills/guiding-users/assets/timing-config.json deleted file mode 100644 index de8a15bec..000000000 --- a/.claude/skills/guiding-users/assets/timing-config.json +++ /dev/null @@ -1,32 +0,0 @@ -{ - "autoShowDelays": { - "firstLogin": 500, - "afterSignup": 1000, - "featureAnnouncement": 2000, - "inactivityTrigger": 30000 - }, - "autoDismissTimings": { - "simpleTooltip": 5000, - "featureSpotlight": 10000, - "contextualHint": 7000, - "successMessage": 3000 - }, - "transitionDurations": { - "stepChange": 300, - "modalFade": 200, - "tooltipAppear": 150, - "pulseAnimation": 2000 - }, - "interactionTimeouts": { - "clickDelay": 100, - "hoverDelay": 500, - "focusDelay": 200 - }, - "recommendations": { - "maxTourDuration": 180000, - "maxStepDuration": 15000, - "minStepDuration": 3000, - "optimalStepCount": 5, - "maxStepCount": 7 - } -} diff --git a/.claude/skills/guiding-users/assets/tour-templates.json b/.claude/skills/guiding-users/assets/tour-templates.json deleted file mode 100644 index bebf9d662..000000000 --- a/.claude/skills/guiding-users/assets/tour-templates.json +++ /dev/null @@ -1,54 +0,0 @@ -{ - "templates": { - "first-time-onboarding": { - "name": "First-Time User Onboarding", - "description": "Standard 3-5 step tour for new users", - "maxSteps": 5, - "duration": "2-3 minutes", - "steps": [ - { - "type": "welcome", - "placement": "center", - "title": "Welcome to [Product Name]!", - "content": "Let's take a quick tour of the key features." 
- }, - { - "type": "feature", - "placement": "bottom", - "title": "[Feature Name]", - "content": "[Feature description and value]" - } - ] - }, - "feature-announcement": { - "name": "New Feature Spotlight", - "description": "Single-step announcement for new features", - "maxSteps": 1, - "duration": "10-15 seconds", - "steps": [ - { - "type": "spotlight", - "placement": "auto", - "title": "New: [Feature Name]", - "content": "[Brief description and benefit]", - "cta": "Try it now" - } - ] - }, - "interactive-tutorial": { - "name": "Guided Task Completion", - "description": "Step-by-step task guidance with validation", - "maxSteps": 7, - "duration": "5-10 minutes", - "steps": [ - { - "type": "task", - "validation": true, - "title": "Step 1: [Action]", - "content": "[Instructions]", - "successMessage": "Great! [Encouragement]" - } - ] - } - } -} diff --git a/.claude/skills/guiding-users/examples/celebration-animation.tsx b/.claude/skills/guiding-users/examples/celebration-animation.tsx deleted file mode 100644 index c1cf8daba..000000000 --- a/.claude/skills/guiding-users/examples/celebration-animation.tsx +++ /dev/null @@ -1,10 +0,0 @@ -/** - * Celebration Animation Example - * - * Success animations and confetti effects for milestone completion. - */ - -export function CelebrationAnimation() { - // Implementation example for celebration animations - return null; // To be expanded -} diff --git a/.claude/skills/guiding-users/examples/contextual-help.tsx b/.claude/skills/guiding-users/examples/contextual-help.tsx deleted file mode 100644 index ff98550eb..000000000 --- a/.claude/skills/guiding-users/examples/contextual-help.tsx +++ /dev/null @@ -1,10 +0,0 @@ -/** - * Contextual Help Example - * - * Tooltips and progressive hints that appear based on user context and actions. 
- */ - -export function ContextualHelp() { - // Implementation example for contextual tooltips - return null; // To be expanded -} diff --git a/.claude/skills/guiding-users/examples/feature-spotlight.tsx b/.claude/skills/guiding-users/examples/feature-spotlight.tsx deleted file mode 100644 index 613e236df..000000000 --- a/.claude/skills/guiding-users/examples/feature-spotlight.tsx +++ /dev/null @@ -1,13 +0,0 @@ -/** - * Feature Spotlight Example - * - * Demonstrates announcing a new feature to existing users with a pulsing - * hotspot and dismissible tooltip. - */ - -import { useState, useEffect } from 'react'; - -export function FeatureSpotlight() { - // Implementation example for feature announcements - return null; // To be expanded -} diff --git a/.claude/skills/guiding-users/examples/first-time-tour.tsx b/.claude/skills/guiding-users/examples/first-time-tour.tsx deleted file mode 100644 index e4cc46349..000000000 --- a/.claude/skills/guiding-users/examples/first-time-tour.tsx +++ /dev/null @@ -1,119 +0,0 @@ -/** - * First-Time Product Tour Example - * - * Demonstrates a complete product tour implementation using react-joyride - * with design token integration and accessibility features. - */ - -import { useState, useEffect } from 'react'; -import Joyride, { Step, CallBackProps, STATUS, ACTIONS } from 'react-joyride'; - -const steps: Step[] = [ - { - target: 'body', - content: 'Welcome! Let\'s take a quick tour of the key features.', - placement: 'center', - disableBeacon: true, - }, - { - target: '.dashboard-header', - content: 'This is your dashboard where you\'ll find all your recent activity.', - placement: 'bottom', - }, - { - target: '.create-button', - content: 'Click here anytime to create a new project.', - placement: 'bottom', - }, - { - target: '.navigation-menu', - content: 'Use this menu to navigate between different sections.', - placement: 'right', - }, - { - target: '.help-button', - content: 'Need help? 
Click here to access documentation and support.', - placement: 'bottom', - }, -]; - -export function FirstTimeTour() { - const [run, setRun] = useState(false); - const [stepIndex, setStepIndex] = useState(0); - - useEffect(() => { - // Check if user has completed tour - const tourCompleted = localStorage.getItem('tourCompleted'); - if (!tourCompleted) { - // Delay tour start to let page load - const timer = setTimeout(() => setRun(true), 500); - return () => clearTimeout(timer); - } - }, []); - - const handleJoyrideCallback = (data: CallBackProps) => { - const { status, action, index } = data; - - if (status === STATUS.FINISHED) { - // Tour completed successfully - setRun(false); - localStorage.setItem('tourCompleted', 'true'); - // Optional: Track completion analytics - console.log('Tour completed'); - } else if (status === STATUS.SKIPPED) { - // User skipped tour - setRun(false); - localStorage.setItem('tourSkipped', 'true'); - // Optional: Track skip analytics - console.log('Tour skipped'); - } - - if (action === ACTIONS.NEXT) { - setStepIndex(index + 1); - } else if (action === ACTIONS.PREV) { - setStepIndex(index - 1); - } - }; - - return ( - - ); -} diff --git a/.claude/skills/guiding-users/examples/guided-tutorial.tsx b/.claude/skills/guiding-users/examples/guided-tutorial.tsx deleted file mode 100644 index 6ecc36a66..000000000 --- a/.claude/skills/guiding-users/examples/guided-tutorial.tsx +++ /dev/null @@ -1,11 +0,0 @@ -/** - * Guided Tutorial Example - * - * Interactive tutorial that guides users through completing actual tasks - * with validation and sandbox mode. 
- */ - -export function GuidedTutorial() { - // Implementation example for interactive tutorials - return null; // To be expanded -} diff --git a/.claude/skills/guiding-users/examples/help-panel.tsx b/.claude/skills/guiding-users/examples/help-panel.tsx deleted file mode 100644 index aa6717749..000000000 --- a/.claude/skills/guiding-users/examples/help-panel.tsx +++ /dev/null @@ -1,10 +0,0 @@ -/** - * Help Panel Example - * - * Sidebar help system with search, contextual documentation, and support access. - */ - -export function HelpPanel() { - // Implementation example for help panels - return null; // To be expanded -} diff --git a/.claude/skills/guiding-users/examples/joyride-tour.tsx b/.claude/skills/guiding-users/examples/joyride-tour.tsx deleted file mode 100644 index f9a799cb5..000000000 --- a/.claude/skills/guiding-users/examples/joyride-tour.tsx +++ /dev/null @@ -1,10 +0,0 @@ -/** - * react-joyride Complete Setup Example - * - * Comprehensive example showing all react-joyride features with design tokens. - */ - -export function JoyrideTour() { - // Complete implementation with all features - return null; // To be expanded -} diff --git a/.claude/skills/guiding-users/examples/setup-checklist.tsx b/.claude/skills/guiding-users/examples/setup-checklist.tsx deleted file mode 100644 index 6cf6b4398..000000000 --- a/.claude/skills/guiding-users/examples/setup-checklist.tsx +++ /dev/null @@ -1,10 +0,0 @@ -/** - * Setup Checklist Example - * - * Multi-step onboarding checklist with progress tracking and visual feedback. 
- */ - -export function SetupChecklist() { - // Implementation example for setup checklists - return null; // To be expanded -} diff --git a/.claude/skills/guiding-users/outputs.yaml b/.claude/skills/guiding-users/outputs.yaml deleted file mode 100644 index dbd7e0c48..000000000 --- a/.claude/skills/guiding-users/outputs.yaml +++ /dev/null @@ -1,245 +0,0 @@ -skill: "guiding-users" -version: "1.0" -domain: "frontend" - -base_outputs: - - path: "src/components/onboarding/ProductTour.tsx" - must_contain: ["react-joyride", "Step[]", "localStorage", "tourCompleted"] - description: "Main product tour component with react-joyride integration" - - - path: "src/components/onboarding/FeatureSpotlight.tsx" - must_contain: ["spotlight", "tooltip", "pulsing", "dismiss"] - description: "Feature announcement component for new feature discovery" - - - path: "src/hooks/useOnboarding.ts" - must_contain: ["useState", "useEffect", "localStorage"] - description: "Hook for managing onboarding state and tour completion tracking" - - - path: "src/styles/onboarding.css" - must_contain: ["--tour-spotlight", "--tour-overlay", "var(--color-primary)"] - description: "Design token-based styling for all onboarding components" - -conditional_outputs: - maturity: - starter: - - path: "src/components/onboarding/SimpleTour.tsx" - must_contain: ["react-joyride", "steps", "run"] - description: "Basic 3-step product tour for getting started" - - - path: "src/components/onboarding/WelcomeTooltip.tsx" - must_contain: ["tooltip", "first-time", "dismiss"] - description: "Simple welcome tooltip for new users" - - intermediate: - - path: "src/components/onboarding/ProductTour.tsx" - must_contain: ["react-joyride", "callback", "analytics", "skip"] - description: "Full-featured tour with progress tracking and analytics" - - - path: "src/components/onboarding/SetupChecklist.tsx" - must_contain: ["checklist", "progress", "localStorage"] - description: "Multi-step setup checklist with progress tracking" - - - 
path: "src/components/onboarding/ContextualHelp.tsx" - must_contain: ["tooltip", "hover", "keyboard-accessible"] - description: "Contextual help system with tooltips and hints" - - - path: "src/utils/onboarding-analytics.ts" - must_contain: ["trackTourStart", "trackTourComplete", "trackDropoff"] - description: "Analytics utilities for measuring onboarding success" - - advanced: - - path: "src/components/onboarding/AdaptiveTour.tsx" - must_contain: ["user-type", "personalized", "A/B test"] - description: "Adaptive tour that personalizes based on user type and behavior" - - - path: "src/components/onboarding/InteractiveTutorial.tsx" - must_contain: ["guided-tasks", "validation", "sandbox", "celebration"] - description: "Interactive tutorial with task validation and completion feedback" - - - path: "src/components/onboarding/HelpPanel.tsx" - must_contain: ["sidebar", "search", "contextual", "video"] - description: "Comprehensive help panel with search and contextual support" - - - path: "src/components/onboarding/ProgressiveDisclosure.tsx" - must_contain: ["feature-unlock", "gradual", "context-aware"] - description: "Progressive disclosure system for gradual feature introduction" - - - path: "src/hooks/useSmartTriggers.ts" - must_contain: ["inactivity", "confusion", "context-based"] - description: "Smart triggering system based on user behavior and context" - - - path: "src/utils/tour-optimization.ts" - must_contain: ["A/B", "completion-rate", "drop-off", "optimize"] - description: "Tour optimization utilities for A/B testing and performance" - - frontend_framework: - react: - - path: "src/components/onboarding/ProductTour.tsx" - must_contain: ["useState", "useEffect", "react-joyride"] - description: "React-based product tour with hooks" - - - path: "src/hooks/useOnboarding.ts" - must_contain: ["useState", "useCallback", "useMemo"] - description: "React hooks for onboarding state management" - - vue: - - path: "src/components/onboarding/ProductTour.vue" - 
must_contain: [" - - diff --git a/.claude/skills/implementing-realtime-sync/examples/llm-streaming-sse/requirements.txt b/.claude/skills/implementing-realtime-sync/examples/llm-streaming-sse/requirements.txt deleted file mode 100644 index 22634fc29..000000000 --- a/.claude/skills/implementing-realtime-sync/examples/llm-streaming-sse/requirements.txt +++ /dev/null @@ -1,5 +0,0 @@ -fastapi==0.109.0 -uvicorn[standard]==0.27.0 -openai==1.10.0 -anthropic==0.18.0 -python-dotenv==1.0.0 diff --git a/.claude/skills/implementing-realtime-sync/outputs.yaml b/.claude/skills/implementing-realtime-sync/outputs.yaml deleted file mode 100644 index b8b117423..000000000 --- a/.claude/skills/implementing-realtime-sync/outputs.yaml +++ /dev/null @@ -1,327 +0,0 @@ -skill: "implementing-realtime-sync" -version: "1.0" -domain: "backend" - -base_outputs: - # WebSocket server implementation - - path: "src/websocket/server.{py,rs,go,ts}" - must_contain: - - "WebSocket" - - "connection" - - "message handling" - - # SSE endpoint for streaming - - path: "src/sse/stream.{py,rs,go,ts}" - must_contain: - - "EventSource" - - "streaming" - - "yield" - - # Connection manager - - path: "src/realtime/connection_manager.{py,rs,go,ts}" - must_contain: - - "connections" - - "add" - - "remove" - - "broadcast" - -conditional_outputs: - maturity: - starter: - # Basic WebSocket chat - - path: "src/websocket/chat.{py,rs,go,ts}" - must_contain: - - "WebSocket" - - "receive" - - "send" - - "connections" - - # Simple SSE endpoint - - path: "src/sse/events.{py,rs,go,ts}" - must_contain: - - "EventSourceResponse" - - "async generator" - - "yield" - - intermediate: - # WebSocket with authentication - - path: "src/websocket/authenticated.{py,rs,go,ts}" - must_contain: - - "WebSocket" - - "authentication" - - "token" - - # Presence tracking - - path: "src/realtime/presence.{py,rs,go,ts}" - must_contain: - - "awareness" - - "online users" - - "cursor position" - - # Heartbeat/ping-pong - - path: 
"src/websocket/heartbeat.{py,rs,go,ts}" - must_contain: - - "ping" - - "pong" - - "keepalive" - - # Reconnection logic - - path: "src/realtime/reconnection.{ts,js}" - must_contain: - - "exponential backoff" - - "retry" - - "connection state" - - advanced: - # Redis pub/sub for scaling - - path: "src/realtime/redis_pubsub.{py,rs,go,ts}" - must_contain: - - "Redis" - - "publish" - - "subscribe" - - "channel" - - # CRDT implementation (Yjs/Automerge) - - path: "src/collaboration/crdt.{ts,js}" - must_contain: - - "Y.Doc|Automerge" - - "WebsocketProvider" - - "conflict-free" - - # Offline sync - - path: "src/realtime/offline_sync.{ts,js}" - must_contain: - - "IndexedDB" - - "queue" - - "sync" - - "offline" - - # Rate limiting - - path: "src/websocket/rate_limiter.{py,rs,go,ts}" - must_contain: - - "rate limit" - - "sliding window" - - "throttle" - - backend_framework: - fastapi: - - path: "src/websocket/fastapi_ws.py" - must_contain: - - "from fastapi import WebSocket" - - "async def websocket_endpoint" - - "await websocket.accept()" - - - path: "src/sse/fastapi_sse.py" - must_contain: - - "from sse_starlette.sse import EventSourceResponse" - - "async def generate" - - "yield" - - axum: - - path: "src/websocket/axum_ws.rs" - must_contain: - - "use axum::extract::ws" - - "WebSocketUpgrade" - - "handle_socket" - - - path: "src/sse/axum_sse.rs" - must_contain: - - "use axum::response::sse" - - "Sse::new" - - "Stream" - - express: - - path: "src/websocket/express_ws.ts" - must_contain: - - "import WebSocket" - - "wss.on('connection'" - - "ws.on('message'" - - socketio: - - path: "src/websocket/socketio_server.{py,ts,js}" - must_contain: - - "Socket.IO|SocketIO" - - "on('connect'" - - "emit" - - "rooms" - -scaffolding: - # Frontend client examples - - path: "src/client/websocket_client.ts" - template: | - // WebSocket client with reconnection - class WebSocketClient { - private ws: WebSocket | null = null; - private reconnectAttempts = 0; - private maxReconnectDelay = 30000; 
- - connect(url: string) { - this.ws = new WebSocket(url); - this.ws.onopen = () => { - console.log('Connected'); - this.reconnectAttempts = 0; - }; - this.ws.onclose = () => this.reconnect(url); - this.ws.onerror = (error) => console.error('WebSocket error:', error); - } - - private reconnect(url: string) { - const delay = Math.min(1000 * Math.pow(2, this.reconnectAttempts), this.maxReconnectDelay); - setTimeout(() => { - this.reconnectAttempts++; - this.connect(url); - }, delay); - } - - send(data: any) { - if (this.ws?.readyState === WebSocket.OPEN) { - this.ws.send(JSON.stringify(data)); - } - } - } - - - path: "src/client/sse_client.ts" - template: | - // SSE client for streaming - export function connectSSE(url: string, onMessage: (data: string) => void) { - const eventSource = new EventSource(url); - - eventSource.addEventListener('token', (event) => { - onMessage(event.data); - }); - - eventSource.addEventListener('done', () => { - eventSource.close(); - }); - - eventSource.onerror = (error) => { - console.error('SSE error:', error); - eventSource.close(); - }; - - return () => eventSource.close(); - } - - - path: "src/realtime/types.ts" - template: | - // Type definitions for real-time communication - export interface Message { - type: 'chat' | 'presence' | 'system'; - userId: string; - content: string; - timestamp: number; - } - - export interface PresenceState { - userId: string; - username: string; - cursor?: { x: number; y: number }; - online: boolean; - } - - export interface ConnectionState { - status: 'connecting' | 'connected' | 'disconnected' | 'reconnecting'; - error?: string; - reconnectAttempts: number; - } - - - path: "README.md" - template: | - # Real-Time Sync Implementation - - ## Protocols Used - - **SSE**: Server-Sent Events for one-way streaming (LLM responses, notifications) - - **WebSocket**: Bidirectional communication (chat, live updates) - - **CRDT**: Conflict-free collaborative editing (Yjs/Automerge) - - ## Running Examples - - 
### LLM Streaming with SSE - ```bash - cd examples/llm-streaming-sse - pip install -r requirements.txt - python backend.py - # Open frontend.html in browser - ``` - - ### WebSocket Chat - ```bash - cd examples/chat-websocket - # Follow README instructions - ``` - - ### Collaborative Editing - ```bash - cd examples/collaborative-yjs - npm install - npm run dev - ``` - - ## Testing - ```bash - python scripts/test_websocket_connection.py - ``` - - ## Security Considerations - - Use token-based authentication (Sec-WebSocket-Protocol header) - - Implement rate limiting per user - - Validate all incoming messages - - Use WSS (WebSocket Secure) in production - - ## Scaling - - Use Redis pub/sub for horizontal scaling - - Reference: references/websockets.md - -metadata: - primary_blueprints: - - "api-first" - - contributes_to: - - "Real-time functionality" - - "Live updates" - - "Collaborative features" - - "LLM streaming" - - "Chat applications" - - "Presence awareness" - - integrates_with: - - "ai-chat" # LLM streaming integration - - "dashboards" # Live metrics - - "implementing-rest-apis" # Authentication handoff - - library_dependencies: - python: - - "websockets>=13.0" - - "sse-starlette>=2.0" - - "redis>=5.0" # For scaling - - rust: - - "tokio-tungstenite = \"0.23\"" - - "axum = \"0.7\"" - - go: - - "gorilla/websocket" - - "nhooyr.io/websocket" - - typescript: - - "ws" - - "socket.io@^4.0.0" - - "yjs" # CRDT - - "y-websocket" - - "y-indexeddb" # Offline sync - - required_infrastructure: - - "Redis (for horizontal scaling)" - - "WebSocket-compatible load balancer (sticky sessions or Redis backplane)" - - examples_reference: - - "examples/llm-streaming-sse/" - - "examples/chat-websocket/" - - "examples/collaborative-yjs/" - - references: - - "references/sse.md" - - "references/websockets.md" - - "references/crdts.md" - - "references/presence-patterns.md" - - "references/offline-sync.md" diff --git a/.claude/skills/implementing-realtime-sync/references/crdts.md 
b/.claude/skills/implementing-realtime-sync/references/crdts.md deleted file mode 100644 index 48bef4e03..000000000 --- a/.claude/skills/implementing-realtime-sync/references/crdts.md +++ /dev/null @@ -1,631 +0,0 @@ -# CRDTs (Conflict-Free Replicated Data Types) Reference Guide - -CRDTs enable conflict-free collaborative editing across distributed systems. - - -## Table of Contents - -- [The Collaboration Problem](#the-collaboration-problem) -- [CRDT Types](#crdt-types) - - [G-Counter (Grow-Only Counter)](#g-counter-grow-only-counter) - - [LWW-Register (Last-Write-Wins)](#lww-register-last-write-wins) - - [OR-Set (Observed-Remove Set)](#or-set-observed-remove-set) -- [Yjs - Production CRDT Library](#yjs-production-crdt-library) - - [Architecture](#architecture) - - [Basic Usage](#basic-usage) - - [Network Sync with y-websocket](#network-sync-with-y-websocket) - - [Backend (y-sweet server)](#backend-y-sweet-server) - - [Local Persistence (IndexedDB)](#local-persistence-indexeddb) - - [Rich Text Editing](#rich-text-editing) - - [Awareness (Presence)](#awareness-presence) - - [Cursor Rendering](#cursor-rendering) -- [Automerge - Alternative CRDT](#automerge-alternative-crdt) - - [When to Use Automerge](#when-to-use-automerge) - - [Basic Usage (Rust)](#basic-usage-rust) - - [Network Sync (Rust)](#network-sync-rust) - - [TypeScript (via WASM)](#typescript-via-wasm) -- [Yjs vs Automerge Comparison](#yjs-vs-automerge-comparison) -- [Conflict Resolution Strategies](#conflict-resolution-strategies) - - [Merge Strategy](#merge-strategy) - - [Custom Conflict Resolution](#custom-conflict-resolution) -- [Best Practices](#best-practices) - -## The Collaboration Problem - -**Traditional Operational Transform (OT):** -``` -User A at position 5: Insert "hello" -User B at position 5: Insert "world" (simultaneously) - -Server must choose: -- Transform A's operation relative to B's? "worldhello" -- Transform B's operation relative to A's? "helloworld" -- Error and ask user to retry? 
-
-Problems:
-- Complex transformation functions
-- Central server required
-- Race conditions
-- Order-dependent
-```
-
-**CRDT Solution:**
-```
-User A: Insert "hello" with unique ID A1
-User B: Insert "world" with unique ID B1
-
-Merge rule: Sort by ID
-Result: Deterministic (always "helloworld" if A1 < B1)
-
-Benefits:
-- No central server needed
-- Eventually consistent
-- Order-independent
-- Commutative and associative
-```
-
-## CRDT Types
-
-### G-Counter (Grow-Only Counter)
-
-Each replica maintains its own counter. The total is the sum across all replicas.
-
-```typescript
-class GCounter {
-  private counts = new Map<string, number>()
-
-  increment(replicaId: string, amount: number = 1) {
-    const current = this.counts.get(replicaId) || 0
-    this.counts.set(replicaId, current + amount)
-  }
-
-  value(): number {
-    return Array.from(this.counts.values()).reduce((a, b) => a + b, 0)
-  }
-
-  merge(other: GCounter) {
-    // Take the max per replica so merges are idempotent and commutative
-    for (const [id, count] of other.counts) {
-      const current = this.counts.get(id) || 0
-      this.counts.set(id, Math.max(current, count))
-    }
-  }
-}
-
-// Usage
-const counter1 = new GCounter()
-counter1.increment('replica1', 5)
-
-const counter2 = new GCounter()
-counter2.increment('replica2', 3)
-
-counter1.merge(counter2)
-console.log(counter1.value()) // 8 (5 + 3)
-```
-
-### LWW-Register (Last-Write-Wins)
-
-Store the value with a timestamp. The latest timestamp wins on merge.
-
-```typescript
-class LWWRegister<T> {
-  private value!: T
-  private timestamp = 0 // start at 0 so the first set() always wins
-
-  set(value: T, timestamp: number = Date.now()) {
-    if (timestamp > this.timestamp) {
-      this.value = value
-      this.timestamp = timestamp
-    }
-  }
-
-  get(): T {
-    return this.value
-  }
-
-  merge(other: LWWRegister<T>) {
-    if (other.timestamp > this.timestamp) {
-      this.value = other.value
-      this.timestamp = other.timestamp
-    }
-  }
-}
-```
-
-### OR-Set (Observed-Remove Set)
-
-Add and remove elements with unique IDs.
-
-```typescript
-type ElementId = string
-
-class ORSet<T> {
-  private elements = new Map<T, Set<ElementId>>()
-  private tombstones = new Map<T, Set<ElementId>>()
-
-  add(element: T, elementId: ElementId = crypto.randomUUID()) {
-    if (!this.elements.has(element)) {
-      this.elements.set(element, new Set())
-    }
-    this.elements.get(element)!.add(elementId)
-  }
-
-  remove(element: T) {
-    const ids = this.elements.get(element)
-    if (ids) {
-      if (!this.tombstones.has(element)) {
-        this.tombstones.set(element, new Set())
-      }
-      ids.forEach(id => this.tombstones.get(element)!.add(id))
-    }
-  }
-
-  has(element: T): boolean {
-    const ids = this.elements.get(element)
-    const removed = this.tombstones.get(element)
-
-    if (!ids) return false
-    if (!removed) return ids.size > 0
-
-    // Element exists if any of its IDs is not in the tombstones
-    return Array.from(ids).some(id => !removed.has(id))
-  }
-
-  merge(other: ORSet<T>) {
-    // Merge elements
-    for (const [element, ids] of other.elements) {
-      if (!this.elements.has(element)) {
-        this.elements.set(element, new Set())
-      }
-      ids.forEach(id => this.elements.get(element)!.add(id))
-    }
-
-    // Merge tombstones
-    for (const [element, ids] of other.tombstones) {
-      if (!this.tombstones.has(element)) {
-        this.tombstones.set(element, new Set())
-      }
-      ids.forEach(id => this.tombstones.get(element)!.add(id))
-    }
-  }
-}
-```
-
-## Yjs - Production CRDT Library
-
-Yjs is the most mature CRDT library for collaborative editing.
- -### Architecture - -``` -┌─────────────────────────────────────────┐ -│ Yjs Document (Y.Doc) │ -│ ┌──────────────────────────────────┐ │ -│ │ Shared Types │ │ -│ │ - Y.Text (collaborative text) │ │ -│ │ - Y.Array (collaborative list) │ │ -│ │ - Y.Map (collaborative map) │ │ -│ │ - Y.XmlFragment (rich text) │ │ -│ └──────────────────────────────────┘ │ -│ │ -│ ┌──────────────────────────────────┐ │ -│ │ Providers (Network Layer) │ │ -│ │ - y-websocket (WebSocket sync) │ │ -│ │ - y-webrtc (P2P sync) │ │ -│ │ - y-indexeddb (local storage) │ │ -│ └──────────────────────────────────┘ │ -└─────────────────────────────────────────┘ -``` - -### Basic Usage - -```typescript -import * as Y from 'yjs' - -// Create shared document -const doc = new Y.Doc() - -// Shared text -const ytext = doc.getText('content') - -// Insert text -ytext.insert(0, 'Hello ') -ytext.insert(6, 'world!') - -console.log(ytext.toString()) // "Hello world!" - -// Listen for changes -ytext.observe(event => { - event.changes.delta.forEach(change => { - if (change.insert) { - console.log('Inserted:', change.insert) - } - if (change.delete) { - console.log('Deleted:', change.delete, 'characters') - } - }) -}) -``` - -### Network Sync with y-websocket - -```typescript -import * as Y from 'yjs' -import { WebsocketProvider } from 'y-websocket' - -const doc = new Y.Doc() - -// WebSocket provider connects to server -const provider = new WebsocketProvider( - 'ws://localhost:1234', // WebSocket server - 'my-document-id', // Document ID (room) - doc, // Yjs document - { connect: true } // Auto-connect -) - -// Provider events -provider.on('status', (event) => { - console.log('Connection status:', event.status) // 'connected' | 'disconnected' -}) - -provider.on('sync', (isSynced) => { - console.log('Synced with server:', isSynced) -}) - -// Shared text -const ytext = doc.getText('content') - -// Changes automatically sync to all connected peers -ytext.insert(0, 'This syncs across all users!') -``` - -### Backend 
(y-sweet server) - -Yjs WebSocket server in Rust for production deployments. - -**Run y-sweet:** -```bash -# Via npx -npx y-sweet serve - -# Via Docker -docker run -p 1234:1234 ysweet/y-sweet - -# With persistence -docker run -p 1234:1234 -v $(pwd)/data:/data ysweet/y-sweet -``` - -**Environment configuration:** -```bash -# .env -YSWEETD_HOST=0.0.0.0 -YSWEETD_PORT=1234 -YSWEETD_DATA_DIR=/data -YSWEETD_AUTH=none # or 'token' for authentication -``` - -### Local Persistence (IndexedDB) - -Store document offline for PWA/mobile apps. - -```typescript -import * as Y from 'yjs' -import { IndexeddbPersistence } from 'y-indexeddb' -import { WebsocketProvider } from 'y-websocket' - -const doc = new Y.Doc() - -// Local persistence first -const indexeddbProvider = new IndexeddbPersistence('my-doc', doc) - -indexeddbProvider.on('synced', () => { - console.log('Loaded from IndexedDB') -}) - -// Then connect to server (syncs changes) -const wsProvider = new WebsocketProvider( - 'wss://api.example.com/sync', - 'my-doc', - doc -) - -// Workflow: -// 1. Load from IndexedDB (instant) -// 2. Connect to server (background) -// 3. Sync differences (automatic) -// 4. All changes saved to IndexedDB + server -``` - -### Rich Text Editing - -Integrate with ProseMirror, Monaco, CodeMirror, or Quill. - -**Quill Integration:** -```typescript -import * as Y from 'yjs' -import { WebsocketProvider } from 'y-websocket' -import { QuillBinding } from 'y-quill' -import Quill from 'quill' - -const doc = new Y.Doc() -const ytext = doc.getText('quill') - -const provider = new WebsocketProvider('ws://localhost:1234', 'quill-doc', doc) - -// Initialize Quill -const quill = new Quill('#editor', { - theme: 'snow', - modules: { toolbar: [['bold', 'italic', 'underline']] } -}) - -// Bind Yjs to Quill -const binding = new QuillBinding(ytext, quill, provider.awareness) - -// Now multiple users can edit the same document! 
-``` - -**ProseMirror Integration:** -```typescript -import { ySyncPlugin, yCursorPlugin, yUndoPlugin } from 'y-prosemirror' -import { EditorState } from 'prosemirror-state' -import { EditorView } from 'prosemirror-view' - -const yXmlFragment = doc.getXmlFragment('prosemirror') - -const state = EditorState.create({ - schema, - plugins: [ - ySyncPlugin(yXmlFragment), - yCursorPlugin(provider.awareness), - yUndoPlugin(), - ] -}) - -const view = new EditorView(document.querySelector('#editor'), { state }) -``` - -### Awareness (Presence) - -Track online users, cursor positions, selections. - -```typescript -import { WebsocketProvider } from 'y-websocket' - -const provider = new WebsocketProvider('ws://localhost:1234', 'doc', doc) -const awareness = provider.awareness - -// Set local state -awareness.setLocalState({ - user: { - name: 'Alice', - color: '#FF5733', - avatar: 'https://...' - }, - cursor: { - anchor: 10, - head: 15 - } -}) - -// Get all connected users -awareness.getStates().forEach((state, clientId) => { - console.log(`User ${state.user.name} is at position ${state.cursor.anchor}`) -}) - -// Listen for changes -awareness.on('change', (changes) => { - // changes.added: Array (new users) - // changes.updated: Array (state changed) - // changes.removed: Array (users left) - - changes.added.forEach(clientId => { - const state = awareness.getStates().get(clientId) - console.log(`${state.user.name} joined`) - }) - - changes.updated.forEach(clientId => { - const state = awareness.getStates().get(clientId) - console.log(`${state.user.name} moved cursor`) - }) - - changes.removed.forEach(clientId => { - console.log(`User ${clientId} left`) - }) -}) -``` - -### Cursor Rendering - -```typescript -function renderCursors(awareness: Awareness) { - const cursors = document.getElementById('cursors') - cursors.innerHTML = '' - - awareness.getStates().forEach((state, clientId) => { - if (clientId === awareness.clientID) return // Skip self - - const cursor = 
document.createElement('div') - cursor.className = 'remote-cursor' - cursor.style.left = `${state.cursor.x}px` - cursor.style.top = `${state.cursor.y}px` - cursor.style.backgroundColor = state.user.color - - const label = document.createElement('span') - label.textContent = state.user.name - cursor.appendChild(label) - - cursors.appendChild(cursor) - }) -} - -awareness.on('change', () => renderCursors(awareness)) -``` - -## Automerge - Alternative CRDT - -Automerge is Rust-first CRDT for JSON-like data structures. - -### When to Use Automerge - -- Need full Rust implementation (Yjs is TypeScript-first) -- JSON-like data structures (not just text) -- Time-travel / history playback -- Local-first architecture - -### Basic Usage (Rust) - -```rust -use automerge::{AutoCommit, transaction::Transactable, ObjType, ROOT}; - -fn main() { - let mut doc = AutoCommit::new(); - - // Create a map - let mut tx = doc.transaction(); - let map = tx.put_object(ROOT, "notes", ObjType::Map).unwrap(); - - // Add properties - tx.put(&map, "title", "Meeting Notes").unwrap(); - tx.put(&map, "date", "2025-12-02").unwrap(); - - // Create nested list - let list = tx.put_object(&map, "items", ObjType::List).unwrap(); - tx.insert(&list, 0, "Item 1").unwrap(); - tx.insert(&list, 1, "Item 2").unwrap(); - - tx.commit(); - - // Generate binary update (send over network) - let changes = doc.get_changes(&[]).unwrap(); - - // Other peer can apply changes - // doc2.apply_changes(changes).unwrap(); -} -``` - -### Network Sync (Rust) - -```rust -use automerge::{AutoCommit, sync}; - -// Peer A -let mut doc1 = AutoCommit::new(); -let mut sync_state1 = sync::State::new(); - -// Peer B -let mut doc2 = AutoCommit::new(); -let mut sync_state2 = sync::State::new(); - -// Peer A generates sync message -let message = doc1.sync().generate_sync_message(&mut sync_state1); - -// Send message to Peer B (over WebSocket, HTTP, etc.) -// ... 
- -// Peer B receives and applies -doc2.sync().receive_sync_message(&mut sync_state2, message).unwrap(); - -// Both docs are now in sync -``` - -### TypeScript (via WASM) - -```typescript -import * as Automerge from '@automerge/automerge' - -let doc = Automerge.init() - -doc = Automerge.change(doc, 'Add data', doc => { - doc.notes = { title: 'Meeting Notes', items: [] } - doc.notes.items.push('Item 1') - doc.notes.items.push('Item 2') -}) - -// Generate changes -const changes = Automerge.getChanges(Automerge.init(), doc) - -// Apply to another doc -let doc2 = Automerge.init() -doc2 = Automerge.applyChanges(doc2, changes) - -console.log(doc2.notes.title) // "Meeting Notes" -``` - -## Yjs vs Automerge Comparison - -| Feature | Yjs | Automerge | -|---------|-----|-----------| -| **Language** | TypeScript (Rust WASM in progress) | Rust (TypeScript WASM bindings) | -| **Best For** | Text editing, rich text | JSON-like data structures | -| **Performance** | Faster for text operations | Slower but more flexible | -| **Ecosystem** | Mature (ProseMirror, Monaco, Quill) | Growing (still maturing) | -| **Network** | y-websocket, y-webrtc | DIY sync protocol | -| **Persistence** | y-indexeddb | DIY storage | -| **Awareness** | Built-in (y-protocols) | Manual implementation | -| **Time Travel** | No built-in support | First-class feature | -| **Binary Format** | Custom (efficient) | Custom (flexible) | -| **Bundle Size** | ~60KB | ~200KB | - -**Recommendation:** -- **Use Yjs** for collaborative text editing (documents, code, spreadsheets) -- **Use Automerge** for JSON data structures with history/time-travel - -## Conflict Resolution Strategies - -### Merge Strategy - -Both Yjs and Automerge use deterministic merge rules: - -**Text (Yjs):** -- Each character has unique position ID -- Merge based on ID ordering -- Always produces same result regardless of order - -**Lists (Automerge):** -- Each list insertion has unique ID -- Concurrent insertions at same position are ordered by 
ID -- Deletions are tombstones (preserved for sync) - -**Maps:** -- Last-write-wins using logical (Lamport) timestamps -- Concurrent updates to the same key: the highest timestamp wins, with ties broken deterministically by client ID - -### Custom Conflict Resolution - -For application-specific logic, wrap CRDT operations: - -```typescript -import * as Y from 'yjs' - -interface Task { - id: string - priority: number -} - -// Example: Conflict-free task priority -class TaskList { - private yarray: Y.Array<Task> - - constructor(doc: Y.Doc) { - this.yarray = doc.getArray<Task>('tasks') - } - - addTask(task: Task) { - // Generate unique ID for deterministic ordering - task.id = `${task.priority}_${Date.now()}_${Math.random()}` - this.yarray.push([task]) - } - - getTasks(): Task[] { - // Sort by priority (deterministic) - return this.yarray.toArray().sort((a, b) => { - return b.priority - a.priority || a.id.localeCompare(b.id) - }) - } -} -``` - -## Best Practices - -1. **Use Yjs for text editing** - Most mature, best ecosystem -2. **Use Automerge for JSON data** - Better for structured data -3. **Always include awareness** - Track online users and cursors -4. **Enable local persistence** - IndexedDB for offline support -5. **Test conflict scenarios** - Simulate simultaneous edits -6. **Monitor CRDT size** - Garbage collect tombstones periodically -7. **Use binary encoding** - Smaller than JSON for network sync -8. **Version your schema** - Plan for data structure changes -9. **Implement heartbeat** - Detect disconnected users -10. **Test with poor networks** - Ensure sync works with delays diff --git a/.claude/skills/implementing-realtime-sync/references/offline-sync.md b/.claude/skills/implementing-realtime-sync/references/offline-sync.md deleted file mode 100644 index e4d80e8db..000000000 --- a/.claude/skills/implementing-realtime-sync/references/offline-sync.md +++ /dev/null @@ -1,734 +0,0 @@ -# Offline Sync Reference Guide - -Patterns for building offline-first applications with automatic sync on reconnection.
- - -## Table of Contents - -- [The Offline Challenge](#the-offline-challenge) -- [Architecture Pattern](#architecture-pattern) -- [Yjs + IndexedDB Pattern](#yjs-indexeddb-pattern) - - [Setup](#setup) - - [Workflow](#workflow) -- [Connection Status Indicator](#connection-status-indicator) -- [Pending Changes Counter](#pending-changes-counter) -- [Manual Sync Trigger](#manual-sync-trigger) -- [Conflict Resolution](#conflict-resolution) -- [Last Sync Timestamp](#last-sync-timestamp) -- [Data Reconciliation](#data-reconciliation) -- [Retry Strategy](#retry-strategy) -- [Mobile-Specific Patterns](#mobile-specific-patterns) - - [Background Sync (Service Worker)](#background-sync-service-worker) - - [Low Battery Mode](#low-battery-mode) - - [Data Saver Mode](#data-saver-mode) -- [Storage Quota Management](#storage-quota-management) -- [Clear Old Data](#clear-old-data) -- [Testing Offline Scenarios](#testing-offline-scenarios) - - [Simulate Offline Mode](#simulate-offline-mode) - - [Simulate Flaky Connection](#simulate-flaky-connection) -- [Best Practices](#best-practices) - -## The Offline Challenge - -Mobile and web apps need to work without constant connectivity: - -**Problems:** -1. User makes changes while offline -2. Changes queue up locally -3. Connection restored - how to sync? -4. Conflicts with server state -5. 
Other users made changes too - -**Requirements:** -- Queue mutations locally -- Apply optimistically to UI (instant feedback) -- Sync when connection restored -- Resolve conflicts automatically -- No data loss - -## Architecture Pattern - -``` -┌────────────────────────────────────────────────────────┐ -│ Offline-First Architecture │ -├────────────────────────────────────────────────────────┤ -│ │ -│ User Action (edit, move, delete) │ -│ ↓ │ -│ ┌──────────────────────────┐ │ -│ │ Local CRDT Update │ │ -│ │ (Yjs Y.Doc) │ │ -│ └──────────┬───────────────┘ │ -│ ├─── Apply to UI (optimistic) │ -│ ↓ │ -│ ┌──────────────────────────┐ │ -│ │ Local Storage Queue │ │ -│ │ (IndexedDB) │ │ -│ └──────────┬───────────────┘ │ -│ │ │ -│ ↓ │ -│ Connection status check │ -│ ├─ OFFLINE: Store locally │ -│ └─ ONLINE: Sync to server │ -│ │ -│ ┌──────────────────────────┐ │ -│ │ WebSocket/HTTP Sync │ │ -│ └──────────┬───────────────┘ │ -│ ↓ │ -│ Backend CRDT merge (conflict-free) │ -│ Broadcast to other clients │ -│ │ -└────────────────────────────────────────────────────────┘ -``` - -## Yjs + IndexedDB Pattern - -Yjs with IndexedDB provides automatic offline support. - -### Setup - -```typescript -import * as Y from 'yjs' -import { IndexeddbPersistence } from 'y-indexeddb' -import { WebsocketProvider } from 'y-websocket' - -const doc = new Y.Doc() - -// 1. Local persistence (loads immediately) -const indexeddbProvider = new IndexeddbPersistence('my-document', doc) - -indexeddbProvider.on('synced', () => { - console.log('✅ Loaded from IndexedDB') -}) - -// 2. 
Server sync (connects in background) -const wsProvider = new WebsocketProvider( - 'wss://api.example.com/sync', - 'my-document', - doc, - { connect: true } -) - -wsProvider.on('status', (event) => { - if (event.status === 'connected') { - console.log('✅ Online - syncing to server...') - } else { - console.log('📴 Offline - queuing locally...') - } -}) - -wsProvider.on('sync', (isSynced) => { - if (isSynced) { - console.log('✅ Fully synced with server') - } -}) - -// All changes automatically: -// 1. Applied to IndexedDB (instant) -// 2. Queued for server sync -// 3. Synced when online -// 4. Merged conflict-free -``` - -### Workflow - -``` -1. Page Load - ↓ - Load from IndexedDB (instant, last known state) - ↓ - Display UI (user can start working immediately) - ↓ - Connect to server in background - ↓ - Sync differences (CRDTs merge conflict-free) - ↓ - UI updates if server had newer changes - -2. User Makes Changes (Offline) - ↓ - Apply to local CRDT (Y.Doc) - ↓ - Save to IndexedDB (automatic) - ↓ - Update UI (optimistic) - ↓ - Queue for server sync (automatic) - -3. Connection Restored - ↓ - WebSocket reconnects (automatic exponential backoff) - ↓ - Send queued changes to server - ↓ - Receive changes from server - ↓ - CRDT merge (conflict-free) - ↓ - UI updates if needed -``` - -## Connection Status Indicator - -Show online/offline/syncing status to user. 
- -```typescript -import { WebsocketProvider } from 'y-websocket' - -type ConnectionStatus = 'online' | 'offline' | 'syncing' | 'error' - -function setupConnectionMonitor(provider: WebsocketProvider) { - let status: ConnectionStatus = 'offline' - - const updateStatusUI = (newStatus: ConnectionStatus) => { - status = newStatus - - const indicator = document.getElementById('connection-status') - if (!indicator) return - - indicator.className = `status-${status}` - - const messages = { - online: '✅ Online', - offline: '📴 Offline', - syncing: '🔄 Syncing...', - error: '⚠️ Connection error' - } - - indicator.textContent = messages[status] - } - - provider.on('status', (event: { status: string }) => { - if (event.status === 'connected') { - updateStatusUI('syncing') - } else { - updateStatusUI('offline') - } - }) - - provider.on('sync', (isSynced: boolean) => { - if (isSynced) { - updateStatusUI('online') - } - }) - - // Network status (browser API) - window.addEventListener('online', () => { - console.log('Network: online') - updateStatusUI('syncing') - }) - - window.addEventListener('offline', () => { - console.log('Network: offline') - updateStatusUI('offline') - }) - - // Initial state - if (navigator.onLine) { - updateStatusUI('syncing') - } else { - updateStatusUI('offline') - } -} -``` - -**CSS:** -```css -.status-online { - background: #00cc00; - color: white; -} - -.status-offline { - background: #666; - color: white; -} - -.status-syncing { - background: #ff9900; - color: white; -} - -.status-error { - background: #cc0000; - color: white; -} -``` - -## Pending Changes Counter - -Show number of unsynced changes. 
-```typescript -function setupPendingChangesMonitor( - doc: Y.Doc, - provider: WebsocketProvider -) { - let pendingChanges = 0 - let isOnline = false - - provider.on('status', (event) => { - isOnline = event.status === 'connected' - updateUI() - }) - - provider.on('sync', (isSynced) => { - if (isSynced) { - pendingChanges = 0 - updateUI() - } - }) - - doc.on('update', () => { - if (!isOnline) { - pendingChanges++ - updateUI() - } - }) - - function updateUI() { - const badge = document.getElementById('pending-changes-badge') - if (!badge) return - - if (pendingChanges > 0) { - badge.textContent = `${pendingChanges} pending` - badge.style.display = 'block' - } else { - badge.style.display = 'none' - } - } -} -``` - -## Manual Sync Trigger - -Allow users to manually trigger sync. - -```typescript -function setupManualSync(provider: WebsocketProvider) { - const syncButton = document.getElementById('sync-button') as HTMLButtonElement | null - - if (syncButton) { - syncButton.addEventListener('click', async () => { - syncButton.textContent = 'Syncing...' - syncButton.disabled = true - - try { - // Disconnect and reconnect to force sync - provider.disconnect() - await new Promise(resolve => setTimeout(resolve, 100)) - provider.connect() - - // Wait for sync, removing the listener so repeated clicks don't leak handlers - await new Promise<void>((resolve) => { - const onSync = (isSynced: boolean) => { - if (isSynced) { - provider.off('sync', onSync) - resolve() - } - } - provider.on('sync', onSync) - }) - - syncButton.textContent = '✓ Synced' - } catch (error) { - syncButton.textContent = '✗ Sync failed' - console.error('Sync error:', error) - } finally { - setTimeout(() => { - syncButton.textContent = 'Sync' - syncButton.disabled = false - }, 2000) - } - }) - } -} -``` - -## Conflict Resolution - -CRDTs handle conflicts automatically, but you can detect when merges occur.
-```typescript -// `provider` is the WebsocketProvider from the setup above; remote updates -// arrive in transactions whose origin is the provider instance -doc.on('update', (update, origin, _doc, transaction) => { - // Check if update came from remote (local edits have a different origin) - if (origin === provider) { - console.log('Received remote update - merged automatically') - - // Optionally notify user - const changes = transaction.changed - - if (changes.size > 0) { - showNotification('Document updated by another user') - } - } -}) -``` - -## Last Sync Timestamp - -Track when document was last synced with server. - -```typescript -let lastSyncTime: number | null = null - -provider.on('sync', (isSynced) => { - if (isSynced) { - lastSyncTime = Date.now() - - // Store in localStorage - localStorage.setItem( - `last-sync-${documentId}`, - lastSyncTime.toString() - ) - - updateLastSyncUI() - } -}) - -function updateLastSyncUI() { - const element = document.getElementById('last-sync-time') - if (!element) return - - if (!lastSyncTime) { - element.textContent = 'Never synced' - return - } - - const now = Date.now() - const elapsed = now - lastSyncTime - - if (elapsed < 60000) { - element.textContent = 'Just now' - } else if (elapsed < 3600000) { - element.textContent = `${Math.floor(elapsed / 60000)}m ago` - } else if (elapsed < 86400000) { - element.textContent = `${Math.floor(elapsed / 3600000)}h ago` - } else { - element.textContent = new Date(lastSyncTime).toLocaleDateString() - } -} - -// Update every minute -setInterval(updateLastSyncUI, 60000) -``` - -## Data Reconciliation - -Handle cases where local and server state diverge significantly.
-```typescript -// A WebsocketProvider cannot be rebound to a new Y.Doc, so reconciliation -// tears the old provider down and builds a fresh doc/provider pair -async function reconcileData( - oldDoc: Y.Doc, - oldProvider: WebsocketProvider, - url: string, - roomName: string -) { - // Preserve current local state - const localState = Y.encodeStateAsUpdate(oldDoc) - - // Tear down the old provider - oldProvider.destroy() - - // Start from a clean doc and load the server state into it - const newDoc = new Y.Doc() - const newProvider = new WebsocketProvider(url, roomName, newDoc) - - // Wait for the initial server sync - await new Promise<void>((resolve) => { - const onSync = (isSynced: boolean) => { - if (isSynced) { - newProvider.off('sync', onSync) - resolve() - } - } - newProvider.on('sync', onSync) - }) - - // Re-apply local changes on top (CRDT merge is conflict-free) - Y.applyUpdate(newDoc, localState) - - console.log('Reconciliation complete') - return { doc: newDoc, provider: newProvider } -} -``` - -## Retry Strategy - -Implement exponential backoff for failed syncs. - -```typescript -class RetryableWebSocketProvider { - private provider: WebsocketProvider - private reconnectAttempts = 0 - private maxReconnectDelay = 30000 // 30 seconds - private reconnectTimeout: NodeJS.Timeout | null = null - - constructor(url: string, roomName: string, doc: Y.Doc) { - this.provider = new WebsocketProvider(url, roomName, doc, { - connect: false // Manual connection control - }) - - this.setupEventHandlers() - this.connect() - } - - private setupEventHandlers() { - this.provider.on('status', (event) => { - if (event.status === 'connected') { - console.log('Connected to server') - this.reconnectAttempts = 0 // Reset on successful connection - } else { - console.log('Disconnected from server') - this.scheduleReconnect() - } - }) - } - - private connect() { - console.log('Attempting connection...') - this.provider.connect() - } - - private scheduleReconnect() { - if (this.reconnectTimeout) { - clearTimeout(this.reconnectTimeout) - } - - // Exponential backoff: 1s, 2s, 4s, 8s, 16s, 30s (max) - const delay = Math.min( - 1000 * Math.pow(2, this.reconnectAttempts), - this.maxReconnectDelay - ) - - // Add jitter (0-1000ms) to prevent thundering herd - const jitter = Math.random() * 1000 - - console.log(`Reconnecting in ${Math.floor((delay + jitter) / 1000)}s...`) - - this.reconnectTimeout
= setTimeout(() => { - this.reconnectAttempts++ - this.connect() - }, delay + jitter) - } - - disconnect() { - if (this.reconnectTimeout) { - clearTimeout(this.reconnectTimeout) - } - this.provider.disconnect() - } -} -``` - -## Mobile-Specific Patterns - -### Background Sync (Service Worker) - -For PWAs, use Background Sync API to sync when connection restored. - -```typescript -// Register service worker -if ('serviceWorker' in navigator && 'sync' in ServiceWorkerRegistration.prototype) { - navigator.serviceWorker.register('/sw.js') -} - -// Request background sync when offline -async function requestBackgroundSync() { - const registration = await navigator.serviceWorker.ready - - try { - await registration.sync.register('yjs-sync') - console.log('Background sync registered') - } catch (error) { - console.error('Background sync failed:', error) - } -} - -// In service worker (sw.js) -self.addEventListener('sync', (event) => { - if (event.tag === 'yjs-sync') { - event.waitUntil(syncYjsDocument()) - } -}) - -async function syncYjsDocument() { - // Load document from IndexedDB - // Sync to server - // Return promise -} -``` - -### Low Battery Mode - -Reduce sync frequency when battery is low. 
-```typescript -function setupBatteryOptimization(provider: WebsocketProvider) { - if ('getBattery' in navigator) { - (navigator as any).getBattery().then((battery: any) => { - let lowBatteryInterval: ReturnType<typeof setInterval> | null = null - - const updateSyncFrequency = () => { - // Leaving low-battery mode: stop the periodic timer so intervals don't pile up - if (lowBatteryInterval) { - clearInterval(lowBatteryInterval) - lowBatteryInterval = null - } - - if (battery.charging) { - // Normal sync (real-time) - provider.connect() - } else if (battery.level < 0.2) { - // Low battery - sync for 10 seconds every 5 minutes - provider.disconnect() - lowBatteryInterval = setInterval(() => { - provider.connect() - setTimeout(() => provider.disconnect(), 10000) - }, 5 * 60 * 1000) - } else { - // Normal sync - provider.connect() - } - } - - battery.addEventListener('chargingchange', updateSyncFrequency) - battery.addEventListener('levelchange', updateSyncFrequency) - - updateSyncFrequency() - }) - } -} -``` - -### Data Saver Mode - -Adapt sync frequency to connection quality and the user's data-saver setting. - -```typescript -if ('connection' in navigator) { - const connection = (navigator as any).connection - - connection.addEventListener('change', () => { - const type = connection.effectiveType - - if (type === '4g' || connection.type === 'wifi') { - // Normal sync frequency - console.log('Fast connection - normal sync') - } else if (type === '3g' || type === '2g') { - // Reduce sync frequency - console.log('Slow connection - reduced sync') - } - - // Check if data saver is enabled - if (connection.saveData) { - console.log('Data saver enabled - minimal sync') - } - }) -} -``` - -## Storage Quota Management - -Monitor IndexedDB storage and warn when running low.
- -```typescript -async function checkStorageQuota() { - if ('storage' in navigator && 'estimate' in navigator.storage) { - const estimate = await navigator.storage.estimate() - - const usage = estimate.usage || 0 - const quota = estimate.quota || 0 - const percentUsed = (usage / quota) * 100 - - console.log(`Storage: ${Math.round(usage / 1024 / 1024)}MB / ${Math.round(quota / 1024 / 1024)}MB (${percentUsed.toFixed(1)}%)`) - - if (percentUsed > 80) { - showWarning('Storage almost full - consider clearing old documents') - } - } -} - -// Check on page load -checkStorageQuota() - -// Check after large syncs -provider.on('sync', checkStorageQuota) -``` - -## Clear Old Data - -Implement garbage collection for old documents. - -```typescript -async function clearOldDocuments(maxAgeMs: number = 30 * 24 * 60 * 60 * 1000) { - const db = await openIndexedDB() - const transaction = db.transaction(['documents'], 'readwrite') - const store = transaction.objectStore('documents') - - const request = store.openCursor() - - request.onsuccess = (event) => { - const cursor = (event.target as IDBRequest).result - - if (cursor) { - const doc = cursor.value - const age = Date.now() - doc.lastModified - - if (age > maxAgeMs) { - console.log(`Deleting old document: ${doc.id}`) - cursor.delete() - } - - cursor.continue() - } else { - console.log('Cleanup complete') - } - } -} - -// Run cleanup weekly -setInterval(clearOldDocuments, 7 * 24 * 60 * 60 * 1000) -``` - -## Testing Offline Scenarios - -### Simulate Offline Mode - -```typescript -// Disconnect from server -provider.disconnect() - -// Make changes -ytext.insert(0, 'This was written offline') - -// Wait 5 seconds -await new Promise(resolve => setTimeout(resolve, 5000)) - -// Reconnect -provider.connect() - -// Verify sync -await new Promise((resolve) => { - provider.on('sync', (isSynced) => { - if (isSynced) { - console.log('✅ Offline changes synced successfully') - resolve() - } - }) -}) -``` - -### Simulate Flaky Connection - 
-```typescript -// Randomly disconnect/reconnect -setInterval(() => { - if (Math.random() > 0.5) { - provider.disconnect() - console.log('📴 Simulating connection loss') - - setTimeout(() => { - provider.connect() - console.log('✅ Simulating connection restored') - }, Math.random() * 5000) - } -}, 10000) -``` - -## Best Practices - -1. **Use Yjs + IndexedDB** - Automatic offline support -2. **Show connection status** - Visual indicator for users -3. **Display pending changes count** - User knows what's queued -4. **Implement exponential backoff** - Don't hammer server during outages -5. **Handle low battery** - Reduce sync frequency when battery low -6. **Monitor storage quota** - Warn when running out of space -7. **Garbage collect old data** - Delete documents not accessed in 30+ days -8. **Test offline scenarios** - Simulate poor connections -9. **Use service workers** - Background sync for PWAs -10. **Provide manual sync button** - Let users force sync if needed diff --git a/.claude/skills/implementing-realtime-sync/references/presence-patterns.md b/.claude/skills/implementing-realtime-sync/references/presence-patterns.md deleted file mode 100644 index 9d3822dc3..000000000 --- a/.claude/skills/implementing-realtime-sync/references/presence-patterns.md +++ /dev/null @@ -1,743 +0,0 @@ -# Presence Patterns Reference Guide - -Real-time awareness of other users: online status, cursors, selections, typing indicators. 
- - -## Table of Contents - -- [What is Presence?](#what-is-presence) -- [Yjs Awareness API](#yjs-awareness-api) - - [Basic Setup](#basic-setup) - - [User State Structure](#user-state-structure) - - [Tracking Users](#tracking-users) - - [Listening for Changes](#listening-for-changes) - - [Cleanup on Page Unload](#cleanup-on-page-unload) -- [Cursor Tracking](#cursor-tracking) - - [Mouse Cursor Position](#mouse-cursor-position) - - [Text Cursor Position](#text-cursor-position) -- [Selection Tracking](#selection-tracking) -- [Typing Indicator](#typing-indicator) -- [Active View Tracking](#active-view-tracking) -- [Last Seen Timestamp](#last-seen-timestamp) -- [User Avatar List](#user-avatar-list) -- [Focus Indicator](#focus-indicator) -- [Performance Optimization](#performance-optimization) - - [Throttling Updates](#throttling-updates) - - [Debouncing Typing Indicator](#debouncing-typing-indicator) - - [Cleanup Stale State](#cleanup-stale-state) -- [Best Practices](#best-practices) - -## What is Presence? - -Presence provides awareness of other users in collaborative applications: - -- **Who's online** - Active users list -- **Cursor positions** - Where others are editing -- **Selections** - What others have selected -- **Typing indicators** - Who's currently typing -- **Active view** - What page/document others are viewing - -**Key Characteristics:** -- **Ephemeral** - Not persisted (unlike CRDT document state) -- **Fast updates** - High-frequency position changes -- **Eventually consistent** - Can tolerate temporary inconsistency - -## Yjs Awareness API - -Yjs includes built-in presence via Awareness API. 
- -### Basic Setup - -```typescript -import * as Y from 'yjs' -import { WebsocketProvider } from 'y-websocket' - -const doc = new Y.Doc() -const provider = new WebsocketProvider('ws://localhost:1234', 'doc-id', doc) - -// Awareness automatically created by provider -const awareness = provider.awareness - -// Get local client ID -const localClientId = awareness.clientID - -// Set local state -awareness.setLocalState({ - user: { - name: 'Alice', - email: 'alice@example.com', - color: '#FF5733', - avatar: 'https://example.com/alice.jpg' - }, - cursor: null, // null when not active - selection: null -}) -``` - -### User State Structure - -```typescript -interface UserState { - user: { - name: string - email?: string - color: string // For cursor/avatar rendering - avatar?: string - } - cursor?: { - x: number - y: number - // Or for text: - anchor: number // Cursor position in document - head: number // Selection end (anchor === head if no selection) - } - selection?: { - ranges: Array<{ from: number; to: number }> - } - typing?: boolean - lastSeen?: number // Timestamp -} -``` - -### Tracking Users - -```typescript -// Get all connected users -const users = Array.from(awareness.getStates().entries()) - .filter(([clientId]) => clientId !== awareness.clientID) - .map(([clientId, state]) => ({ - id: clientId, - ...state - })) - -console.log(`${users.length} other users online`) - -// Get specific user -const userId = 123 -const userState = awareness.getStates().get(userId) -if (userState) { - console.log(`${userState.user.name} is online`) -} -``` - -### Listening for Changes - -```typescript -awareness.on('change', (changes: { - added: number[] // New users - updated: number[] // State changed - removed: number[] // Users left -}) => { - // Handle new users - changes.added.forEach(clientId => { - const state = awareness.getStates().get(clientId) - console.log(`${state.user.name} joined`) - showNotification(`${state.user.name} joined the document`) - }) - - // Handle 
updates (cursor moved, typing status, etc.) - changes.updated.forEach(clientId => { - const state = awareness.getStates().get(clientId) - updateCursor(clientId, state.cursor) - updateTypingIndicator(clientId, state.typing) - }) - - // Handle users leaving - changes.removed.forEach(clientId => { - console.log(`User ${clientId} left`) - removeCursor(clientId) - }) -}) -``` - -### Cleanup on Page Unload - -```typescript -window.addEventListener('beforeunload', () => { - // Clear local state (notifies others you're leaving) - awareness.setLocalState(null) -}) - -// Or destroy awareness entirely -window.addEventListener('beforeunload', () => { - awareness.destroy() -}) -``` - -## Cursor Tracking - -### Mouse Cursor Position - -Track mouse position and broadcast to others. - -**Tracking:** -```typescript -import throttle from 'lodash.throttle' - -const awareness = provider.awareness - -// Throttle cursor updates to 60 FPS (16ms) -const updateCursor = throttle((event: MouseEvent) => { - const state = awareness.getLocalState() - - awareness.setLocalState({ - ...state, - cursor: { - x: event.clientX, - y: event.clientY - } - }) -}, 16) - -document.addEventListener('mousemove', updateCursor) - -// Clear cursor when mouse leaves -document.addEventListener('mouseleave', () => { - const state = awareness.getLocalState() - awareness.setLocalState({ - ...state, - cursor: null - }) -}) -``` - -**Rendering:** -```typescript -function renderCursors() { - const cursorsContainer = document.getElementById('cursors') - cursorsContainer.innerHTML = '' - - awareness.getStates().forEach((state, clientId) => { - // Skip own cursor - if (clientId === awareness.clientID) return - - // Skip if no cursor position - if (!state.cursor) return - - const cursor = document.createElement('div') - cursor.className = 'remote-cursor' - cursor.style.position = 'absolute' - cursor.style.left = `${state.cursor.x}px` - cursor.style.top = `${state.cursor.y}px` - cursor.style.pointerEvents = 'none' - - // 
Cursor shape: an SVG arrow plus a name label - cursor.innerHTML = ` - <svg width="24" height="36" viewBox="0 0 24 36" style="overflow: visible"> - <path d="M2 2 L2 26 L9 19 L13 30 L17 28 L13 18 L22 16 Z" fill="${state.user.color}" stroke="white" stroke-width="1"/> - </svg> - <div class="cursor-label" style="background: ${state.user.color}"> - ${state.user.name} - </div>
    - ` - - cursorsContainer.appendChild(cursor) - }) -} - -awareness.on('change', renderCursors) -``` - -**CSS:** -```css -.remote-cursor { - position: absolute; - z-index: 9999; - pointer-events: none; - transition: left 0.1s ease-out, top 0.1s ease-out; -} - -.cursor-label { - position: absolute; - left: 20px; - top: 0; - padding: 2px 6px; - border-radius: 3px; - color: white; - font-size: 12px; - white-space: nowrap; -} -``` - -### Text Cursor Position - -Track cursor in text editor (ProseMirror, Monaco, CodeMirror). - -**ProseMirror Integration:** -```typescript -import { yCursorPlugin } from 'y-prosemirror' -import { EditorState } from 'prosemirror-state' - -const state = EditorState.create({ - schema, - plugins: [ - ySyncPlugin(yXmlFragment), - yCursorPlugin(provider.awareness, { - cursorBuilder: (user) => { - const cursor = document.createElement('span') - cursor.className = 'remote-cursor' - cursor.style.borderLeft = `2px solid ${user.color}` - return cursor - }, - selectionBuilder: (user) => { - return { - style: `background-color: ${user.color}30`, // 30 = 20% opacity - class: 'remote-selection' - } - } - }) - ] -}) -``` - -**Monaco Integration:** -```typescript -import * as monaco from 'monaco-editor' -import { MonacoBinding } from 'y-monaco' - -const editor = monaco.editor.create(document.getElementById('editor'), { - value: '', - language: 'typescript' -}) - -const ytext = doc.getText('monaco') - -const binding = new MonacoBinding( - ytext, - editor.getModel(), - new Set([editor]), - provider.awareness -) - -// Cursors and selections automatically rendered -``` - -## Selection Tracking - -Track text selection ranges. 
- -```typescript -document.addEventListener('selectionchange', throttle(() => { - const selection = window.getSelection() - - if (!selection || selection.rangeCount === 0) { - awareness.setLocalState({ - ...awareness.getLocalState(), - selection: null - }) - return - } - - const range = selection.getRangeAt(0) - - awareness.setLocalState({ - ...awareness.getLocalState(), - selection: { - ranges: [{ - from: range.startOffset, - to: range.endOffset - }] - } - }) -}, 100)) -``` - -**Render Selections:** -```typescript -function renderSelections() { - // Remove old selections - document.querySelectorAll('.remote-selection').forEach(el => el.remove()) - - awareness.getStates().forEach((state, clientId) => { - if (clientId === awareness.clientID) return - if (!state.selection) return - - state.selection.ranges.forEach(range => { - const span = document.createElement('span') - span.className = 'remote-selection' - span.style.backgroundColor = `${state.user.color}30` - - // Position span over selected text - // (Implementation depends on text editor) - }) - }) -} -``` - -## Typing Indicator - -Show when users are typing. 
- -**Tracking:** -```typescript -let typingTimeout: NodeJS.Timeout | null = null - -const textarea = document.getElementById('message-input') as HTMLTextAreaElement - -textarea.addEventListener('input', () => { - // Set typing = true - awareness.setLocalState({ - ...awareness.getLocalState(), - typing: true - }) - - // Clear after 1 second of no input - if (typingTimeout) clearTimeout(typingTimeout) - - typingTimeout = setTimeout(() => { - awareness.setLocalState({ - ...awareness.getLocalState(), - typing: false - }) - }, 1000) -}) - -textarea.addEventListener('blur', () => { - awareness.setLocalState({ - ...awareness.getLocalState(), - typing: false - }) -}) -``` - -**Rendering:** -```typescript -function renderTypingIndicators() { - const typingUsers = Array.from(awareness.getStates().entries()) - .filter(([clientId, state]) => - clientId !== awareness.clientID && state.typing - ) - .map(([, state]) => state.user.name) - - const indicator = document.getElementById('typing-indicator') - - if (typingUsers.length === 0) { - indicator.textContent = '' - } else if (typingUsers.length === 1) { - indicator.textContent = `${typingUsers[0]} is typing...` - } else if (typingUsers.length === 2) { - indicator.textContent = `${typingUsers[0]} and ${typingUsers[1]} are typing...` - } else { - indicator.textContent = `${typingUsers[0]}, ${typingUsers[1]}, and ${typingUsers.length - 2} others are typing...` - } -} - -awareness.on('change', renderTypingIndicators) -``` - -## Active View Tracking - -Track which page/section users are viewing. 
- -```typescript -// Update when route changes -router.afterEach((to) => { - awareness.setLocalState({ - ...awareness.getLocalState(), - activeView: { - path: to.path, - title: to.meta.title - } - }) -}) - -// Show users on same page -function getUsersOnSamePage(currentPath: string) { - return Array.from(awareness.getStates().entries()) - .filter(([clientId, state]) => - clientId !== awareness.clientID && - state.activeView?.path === currentPath - ) - .map(([, state]) => state.user) -} - -// Render user list -function renderUsersHere() { - const users = getUsersOnSamePage(window.location.pathname) - - const list = document.getElementById('users-here') - list.innerHTML = users.map(user => ` -
- <div class="user-here" title="${user.name}"> - <img src="${user.avatar}" alt="${user.name}"> - <span>${user.name}</span> - </div>
    - `).join('') -} - -awareness.on('change', renderUsersHere) -``` - -## Last Seen Timestamp - -Track when users were last active. - -```typescript -// Update on any interaction -function updateLastSeen() { - awareness.setLocalState({ - ...awareness.getLocalState(), - lastSeen: Date.now() - }) -} - -document.addEventListener('mousemove', throttle(updateLastSeen, 5000)) -document.addEventListener('keydown', throttle(updateLastSeen, 5000)) - -// Show inactive users -function getInactiveUsers(thresholdMs: number = 60000) { - const now = Date.now() - - return Array.from(awareness.getStates().entries()) - .filter(([clientId, state]) => - clientId !== awareness.clientID && - state.lastSeen && - now - state.lastSeen > thresholdMs - ) - .map(([, state]) => state.user) -} - -// Mark inactive users with visual cue -setInterval(() => { - const inactiveUsers = getInactiveUsers() - - inactiveUsers.forEach(user => { - const element = document.getElementById(`user-${user.email}`) - if (element) { - element.classList.add('inactive') - } - }) -}, 10000) -``` - -## User Avatar List - -Display all online users with avatars. - -```typescript -function renderUserList() { - const container = document.getElementById('user-list') - - const users = Array.from(awareness.getStates().entries()) - .map(([clientId, state]) => ({ - id: clientId, - isMe: clientId === awareness.clientID, - ...state.user, - lastSeen: state.lastSeen - })) - .sort((a, b) => { - // Sort: active first, then by name - const aActive = Date.now() - (a.lastSeen || 0) < 60000 - const bActive = Date.now() - (b.lastSeen || 0) < 60000 - - if (aActive && !bActive) return -1 - if (!aActive && bActive) return 1 - - return a.name.localeCompare(b.name) - }) - - container.innerHTML = users.map(user => ` -
- <div class="user-item ${user.isMe ? 'me' : ''}"> - <div class="avatar"> - <img src="${user.avatar}" alt="${user.name}"> - ${isUserActive(user) ? '<div class="status-dot"></div>' : ''} - </div> - <div class="user-info"> - <div class="name">${user.name}</div> - <div class="email">${user.email || ''}</div> - </div> - </div>
    - `).join('') -} - -function isUserActive(user: { lastSeen?: number }): boolean { - if (!user.lastSeen) return false - return Date.now() - user.lastSeen < 60000 // Active within 1 minute -} - -awareness.on('change', renderUserList) -``` - -**CSS:** -```css -.user-item { - display: flex; - align-items: center; - padding: 8px; - border-radius: 4px; -} - -.user-item.me { - background: #f0f0f0; -} - -.avatar { - position: relative; - width: 40px; - height: 40px; - border-radius: 50%; - border: 2px solid transparent; -} - -.avatar img { - width: 100%; - height: 100%; - border-radius: 50%; - object-fit: cover; -} - -.status-dot { - position: absolute; - bottom: 0; - right: 0; - width: 12px; - height: 12px; - background: #00cc00; - border: 2px solid white; - border-radius: 50%; -} - -.user-info { - margin-left: 12px; -} - -.name { - font-weight: 500; -} - -.email { - font-size: 12px; - color: #666; -} -``` - -## Focus Indicator - -Show which element/field users are focused on. - -```typescript -document.addEventListener('focusin', (event) => { - const target = event.target as HTMLElement - - awareness.setLocalState({ - ...awareness.getLocalState(), - focus: { - elementId: target.id, - elementType: target.tagName, - label: target.getAttribute('aria-label') || target.getAttribute('name') - } - }) -}) - -document.addEventListener('focusout', () => { - awareness.setLocalState({ - ...awareness.getLocalState(), - focus: null - }) -}) - -// Show who's editing what field -function renderFieldOccupancy() { - document.querySelectorAll('input, textarea').forEach(element => { - const id = element.id - - const occupants = Array.from(awareness.getStates().entries()) - .filter(([clientId, state]) => - clientId !== awareness.clientID && - state.focus?.elementId === id - ) - .map(([, state]) => state.user) - - if (occupants.length > 0) { - // Show indicator - const indicator = document.createElement('div') - indicator.className = 'field-occupant' - indicator.textContent = 
occupants.map(u => u.name).join(', ') - indicator.style.color = occupants[0].color - element.parentElement?.appendChild(indicator) - } - }) -} - -awareness.on('change', renderFieldOccupancy) -``` - -## Performance Optimization - -### Throttling Updates - -Don't send awareness updates on every pixel movement. - -```typescript -import throttle from 'lodash.throttle' - -// 60 FPS = ~16ms -const updateCursor = throttle((x: number, y: number) => { - awareness.setLocalState({ - ...awareness.getLocalState(), - cursor: { x, y } - }) -}, 16) - -// Or 30 FPS = ~33ms for less network traffic -const updateCursorSlow = throttle(updateCursor, 33) -``` - -### Debouncing Typing Indicator - -```typescript -import debounce from 'lodash.debounce' - -const stopTyping = debounce(() => { - awareness.setLocalState({ - ...awareness.getLocalState(), - typing: false - }) -}, 1000) - -textarea.addEventListener('input', () => { - awareness.setLocalState({ - ...awareness.getLocalState(), - typing: true - }) - - stopTyping() -}) -``` - -### Cleanup Stale State - -Remove old state to prevent memory leaks. - -```typescript -// Clean up users inactive for >5 minutes -setInterval(() => { - const now = Date.now() - const threshold = 5 * 60 * 1000 // 5 minutes - - awareness.getStates().forEach((state, clientId) => { - if (state.lastSeen && now - state.lastSeen > threshold) { - // Manually remove stale state - awareness.states.delete(clientId) - awareness.emit('change', { - added: [], - updated: [], - removed: [clientId] - }) - } - }) -}, 60000) // Check every minute -``` - -## Best Practices - -1. **Throttle frequent updates** - Cursor movements, scrolling (16-33ms) -2. **Debounce typing indicators** - Stop typing after 1 second of inactivity -3. **Include last seen timestamp** - Track user activity -4. **Clear state on page unload** - Notify others when leaving -5. **Show inactive users differently** - Visual cue after 1+ minute -6. **Limit awareness data size** - Don't include large objects -7. 
**Use color coding** - Assign each user a distinct color -8. **Render cursors efficiently** - Use CSS transforms, not DOM manipulation -9. **Test with many users** - Ensure performance with 50+ users -10. **Provide privacy controls** - Let users hide their presence diff --git a/.claude/skills/implementing-realtime-sync/references/sse.md b/.claude/skills/implementing-realtime-sync/references/sse.md deleted file mode 100644 index c75206867..000000000 --- a/.claude/skills/implementing-realtime-sync/references/sse.md +++ /dev/null @@ -1,642 +0,0 @@ -# Server-Sent Events (SSE) Reference Guide - -SSE provides one-way server-to-client event streaming over HTTP. - - -## Table of Contents - -- [Protocol Overview](#protocol-overview) -- [SSE Message Format](#sse-message-format) - - [Basic Message](#basic-message) - - [Multi-Line Message](#multi-line-message) - - [Event Type](#event-type) - - [Event ID (for resumption)](#event-id-for-resumption) - - [Combined](#combined) -- [Automatic Reconnection](#automatic-reconnection) -- [Custom Retry Interval](#custom-retry-interval) -- [LLM Streaming Pattern](#llm-streaming-pattern) - - [OpenAI/Anthropic Relay](#openaianthropic-relay) - - [Frontend Integration](#frontend-integration) -- [Edge Runtime Support](#edge-runtime-support) - - [Hono (Cloudflare Workers, Deno)](#hono-cloudflare-workers-deno) - - [Next.js Edge Runtime](#nextjs-edge-runtime) -- [Live Metrics Dashboard](#live-metrics-dashboard) -- [Notification Feed](#notification-feed) -- [Authentication](#authentication) - - [Cookie-Based](#cookie-based) - - [Bearer Token](#bearer-token) -- [Compression](#compression) -- [Browser Compatibility](#browser-compatibility) -- [Limitations](#limitations) -- [Best Practices](#best-practices) - -## Protocol Overview - -SSE is a simple text-based protocol for pushing events from server to client. 
- -**HTTP Request:** -``` -GET /stream HTTP/1.1 -Host: example.com -Accept: text/event-stream -Cache-Control: no-cache -``` - -**HTTP Response:** -``` -HTTP/1.1 200 OK -Content-Type: text/event-stream -Cache-Control: no-cache -Connection: keep-alive - -data: Hello world - -data: Multi-line -data: message - -event: custom -data: Custom event - -id: 123 -data: Event with ID -``` - -## SSE Message Format - -### Basic Message -``` -data: This is a message - -``` -Note: Two newlines (`\n\n`) terminate each message. - -### Multi-Line Message -``` -data: Line 1 -data: Line 2 -data: Line 3 - -``` - -### Event Type -``` -event: notification -data: You have a new message - -``` - -### Event ID (for resumption) -``` -id: 42 -data: This event can be resumed - -``` - -### Combined -``` -id: 100 -event: update -data: {"user": "alice", "action": "joined"} -retry: 10000 - -``` - -## Automatic Reconnection - -Browser's EventSource automatically reconnects with exponential backoff. - -**Default Behavior:** -- Initial reconnect: ~1 second -- Subsequent: exponential backoff (2s, 4s, 8s, 16s, 32s) -- Max delay: ~64 seconds -- Browser sends `Last-Event-ID` header on reconnect - -**Server-Side Resumption:** -```python -from sse_starlette.sse import EventSourceResponse -from fastapi import Request - -@app.get("/stream") -async def stream(request: Request): - # Get last event ID from header - last_event_id = request.headers.get("Last-Event-ID") - - async def generate(): - # Resume from last received event - start_from = int(last_event_id) if last_event_id else 0 - - for i in range(start_from, 1000): - yield { - "id": str(i), # Include ID for resumption - "event": "message", - "data": f"Event {i}" - } - - return EventSourceResponse(generate()) -``` - -## Custom Retry Interval - -Override browser's reconnection delay: - -```python -async def generate(): - # First event sets retry interval - yield { - "retry": 5000, # 5 seconds - "data": "Connected" - } - - for i in range(100): - yield { - 
"data": f"Message {i}" - } -``` - -**Protocol:** -``` -retry: 5000 - -data: Connected - -data: Message 0 - -data: Message 1 - -``` - -## LLM Streaming Pattern - -Stream LLM tokens progressively to frontend. - -### OpenAI/Anthropic Relay - -**Python (FastAPI → OpenAI):** -```python -from fastapi import FastAPI -from fastapi.responses import StreamingResponse -from openai import AsyncOpenAI -import os - -app = FastAPI() -client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY")) - -@app.post("/chat/stream") -async def stream_chat(prompt: str): - async def generate(): - stream = await client.chat.completions.create( - model="gpt-4", - messages=[{"role": "user", "content": prompt}], - stream=True - ) - - async for chunk in stream: - if chunk.choices[0].delta.content is not None: - content = chunk.choices[0].delta.content - - # SSE format - yield f"event: token\n" - yield f"data: {content}\n\n" - - # Done signal - yield f"event: done\n" - yield f"data: [DONE]\n\n" - - return StreamingResponse( - generate(), - media_type="text/event-stream", - headers={ - "Cache-Control": "no-cache", - "Connection": "keep-alive", - "X-Accel-Buffering": "no", # Disable Nginx buffering - } - ) -``` - -**Python (FastAPI → Anthropic):** -```python -from anthropic import AsyncAnthropic - -client = AsyncAnthropic(api_key=os.getenv("ANTHROPIC_API_KEY")) - -@app.post("/chat/stream") -async def stream_chat(prompt: str): - async def generate(): - async with client.messages.stream( - model="claude-3-5-sonnet-20241022", - messages=[{"role": "user", "content": prompt}], - max_tokens=1024 - ) as stream: - async for text in stream.text_stream: - yield f"event: token\n" - yield f"data: {text}\n\n" - - yield f"event: done\n" - yield f"data: [DONE]\n\n" - - return StreamingResponse( - generate(), - media_type="text/event-stream", - headers={ - "Cache-Control": "no-cache", - "Connection": "keep-alive", - } - ) -``` - -### Frontend Integration - -**EventSource API (Browser):** -```typescript -function 
streamLLMResponse(prompt: string) { - const eventSource = new EventSource( - `/chat/stream?prompt=${encodeURIComponent(prompt)}` - ) - - eventSource.addEventListener('token', (event) => { - const token = event.data - appendToMessage(token) // Progressive rendering - }) - - eventSource.addEventListener('done', () => { - eventSource.close() - markComplete() - }) - - eventSource.onerror = (error) => { - console.error('SSE error:', error) - eventSource.close() - handleError() - } -} -``` - -**React Hook:** -```typescript -import { useEffect, useState } from 'react' - -function useSSEStream(url: string) { - const [data, setData] = useState('') - const [isDone, setIsDone] = useState(false) - const [error, setError] = useState(null) - - useEffect(() => { - const eventSource = new EventSource(url) - - eventSource.addEventListener('token', (event) => { - setData(prev => prev + event.data) - }) - - eventSource.addEventListener('done', () => { - setIsDone(true) - eventSource.close() - }) - - eventSource.onerror = (err) => { - setError(new Error('Stream error')) - eventSource.close() - } - - return () => { - eventSource.close() - } - }, [url]) - - return { data, isDone, error } -} - -// Usage -function ChatMessage({ prompt }: { prompt: string }) { - const { data, isDone, error } = useSSEStream(`/chat/stream?prompt=${prompt}`) - - if (error) return
<div>Error: {error.message}</div>
    - - return ( -
-      {data}
-      {!isDone && <span className="cursor" />}
-
    - ) -} -``` - -## Edge Runtime Support - -### Hono (Cloudflare Workers, Deno) - -```typescript -import { Hono } from 'hono' -import { streamSSE } from 'hono/streaming' - -const app = new Hono() - -app.get('/stream', (c) => { - return streamSSE(c, async (stream) => { - const tokens = ['Hello', ' ', 'from', ' ', 'the', ' ', 'edge!'] - - for (const token of tokens) { - await stream.writeSSE({ - event: 'token', - data: token, - }) - await stream.sleep(100) - } - - await stream.writeSSE({ - event: 'done', - data: '[DONE]', - }) - }) -}) - -export default app -``` - -### Next.js Edge Runtime - -```typescript -// app/api/stream/route.ts -export const runtime = 'edge' - -export async function GET(request: Request) { - const encoder = new TextEncoder() - - const stream = new ReadableStream({ - async start(controller) { - const tokens = ['Hello', ' ', 'Next.js', ' ', 'Edge!'] - - for (const token of tokens) { - controller.enqueue( - encoder.encode(`event: token\ndata: ${token}\n\n`) - ) - await new Promise(resolve => setTimeout(resolve, 100)) - } - - controller.enqueue( - encoder.encode(`event: done\ndata: [DONE]\n\n`) - ) - controller.close() - } - }) - - return new Response(stream, { - headers: { - 'Content-Type': 'text/event-stream', - 'Cache-Control': 'no-cache', - 'Connection': 'keep-alive', - } - }) -} -``` - -## Live Metrics Dashboard - -Push real-time metrics to dashboard. 
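A dashboard that only emits an event every few seconds can sit idle long enough for intermediaries to drop the connection. Lines starting with `:` are SSE comments that EventSource silently ignores, which makes them cheap heartbeats. A sketch of the two payload shapes (helper names are illustrative):

```python
import json

HEARTBEAT = ": ping\n\n"  # comment line; clients ignore it, but proxies see traffic

def metrics_event(metrics: dict) -> str:
    """Wrap a metrics snapshot as a named SSE event."""
    payload = json.dumps(metrics, separators=(",", ":"))
    return f"event: metrics\ndata: {payload}\n\n"
```

Interleave `HEARTBEAT` into the stream whenever no metrics have been sent for ~30 seconds.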
- -**Backend (Python):** -```python -import asyncio -from datetime import datetime - -@app.get("/metrics") -async def stream_metrics(): - async def generate(): - while True: - # Fetch current metrics - metrics = { - "active_users": get_active_users(), - "revenue": get_current_revenue(), - "timestamp": datetime.now().isoformat() - } - - yield { - "event": "metrics", - "data": json.dumps(metrics) - } - - await asyncio.sleep(5) # Update every 5 seconds - - return EventSourceResponse(generate()) -``` - -**Frontend (React):** -```typescript -import { useEffect, useState } from 'react' - -function LiveDashboard() { - const [metrics, setMetrics] = useState({ - active_users: 0, - revenue: 0 - }) - - useEffect(() => { - const eventSource = new EventSource('/metrics') - - eventSource.addEventListener('metrics', (event) => { - const data = JSON.parse(event.data) - setMetrics(data) - }) - - return () => eventSource.close() - }, []) - - return ( -
-      <div>
-        <div>Active users: {metrics.active_users}</div>
-        <div>Revenue: {metrics.revenue}</div>
-      </div>
    - ) -} -``` - -## Notification Feed - -Push notifications to users. - -**Backend (Python with Redis Pub/Sub):** -```python -import redis.asyncio as redis - -redis_client = redis.from_url("redis://localhost") - -@app.get("/notifications/{user_id}") -async def stream_notifications(user_id: str): - async def generate(): - pubsub = redis_client.pubsub() - await pubsub.subscribe(f"notifications:{user_id}") - - async for message in pubsub.listen(): - if message['type'] == 'message': - yield { - "event": "notification", - "data": message['data'] - } - - return EventSourceResponse(generate()) - -# Publish notification -@app.post("/notify/{user_id}") -async def notify_user(user_id: str, message: str): - await redis_client.publish( - f"notifications:{user_id}", - json.dumps({"message": message, "timestamp": datetime.now().isoformat()}) - ) - return {"status": "sent"} -``` - -**Frontend:** -```typescript -function NotificationCenter({ userId }: { userId: string }) { - const [notifications, setNotifications] = useState([]) - - useEffect(() => { - const eventSource = new EventSource(`/notifications/${userId}`) - - eventSource.addEventListener('notification', (event) => { - const notification = JSON.parse(event.data) - setNotifications(prev => [notification, ...prev]) - }) - - return () => eventSource.close() - }, [userId]) - - return ( -
-      {notifications.map((notif, i) => (
-        <div key={i}>{notif.message}</div>
-      ))}
-
    - ) -} -``` - -## Authentication - -SSE uses standard HTTP, so authentication follows HTTP patterns. - -### Cookie-Based - -```python -from fastapi import Request, HTTPException - -@app.get("/stream") -async def stream(request: Request): - # Validate session cookie - session_token = request.cookies.get("session_token") - if not verify_session(session_token): - raise HTTPException(status_code=401, detail="Unauthorized") - - async def generate(): - yield {"data": "Authenticated stream"} - - return EventSourceResponse(generate()) -``` - -### Bearer Token - -```typescript -// Frontend - pass token in URL (NOT recommended - use POST) -const eventSource = new EventSource(`/stream?token=${token}`) - -// Better: Use POST with Authorization header (requires EventSource polyfill) -import { EventSourcePolyfill } from 'event-source-polyfill' - -const eventSource = new EventSourcePolyfill('/stream', { - headers: { - 'Authorization': `Bearer ${token}` - } -}) -``` - -**Backend:** -```python -from fastapi import Header, HTTPException - -@app.get("/stream") -async def stream(authorization: str = Header(None)): - if not authorization or not authorization.startswith("Bearer "): - raise HTTPException(status_code=401) - - token = authorization.replace("Bearer ", "") - if not verify_jwt(token): - raise HTTPException(status_code=401) - - async def generate(): - yield {"data": "Authenticated stream"} - - return EventSourceResponse(generate()) -``` - -## Compression - -Enable gzip compression for SSE responses. 
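One caveat: a streaming compressor buffers output, so each event must be flushed explicitly or the client receives nothing until the buffer fills. A hedged sketch using `zlib` with a sync flush per event (the wrapper name is ours):

```python
import zlib

def gzip_stream(events):
    """Compress SSE events one at a time, flushing so each reaches the client intact."""
    comp = zlib.compressobj(wbits=31)  # wbits=31 selects the gzip container
    for event in events:
        # Z_SYNC_FLUSH forces out everything buffered for this event
        yield comp.compress(event.encode()) + comp.flush(zlib.Z_SYNC_FLUSH)
    yield comp.flush(zlib.Z_FINISH)  # emits the gzip trailer
```

Generic gzip middleware often buffers whole responses, so after enabling it, verify events still arrive one at a time.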
- -**Nginx:** -```nginx -http { - gzip on; - gzip_types text/event-stream; - - server { - location /stream { - proxy_pass http://backend; - proxy_set_header Connection ''; - proxy_http_version 1.1; - chunked_transfer_encoding off; - proxy_buffering off; - proxy_cache off; - } - } -} -``` - -**Python (FastAPI with gzip):** -```python -from fastapi.middleware.gzip import GZipMiddleware - -app.add_middleware(GZipMiddleware, minimum_size=1000) -``` - -## Browser Compatibility - -**Supported:** -- Chrome/Edge: ✅ Full support -- Firefox: ✅ Full support -- Safari: ✅ Full support -- Opera: ✅ Full support - -**Not Supported:** -- Internet Explorer: ❌ No support (use polyfill or WebSocket fallback) - -**Polyfill:** -```typescript -import { EventSourcePolyfill } from 'event-source-polyfill' - -// Use polyfill with custom headers -const eventSource = new EventSourcePolyfill('/stream', { - headers: { - 'Authorization': `Bearer ${token}` - } -}) -``` - -## Limitations - -1. **One-way only** - Server → Client (no client → server messages) -2. **Text-based** - Binary data must be base64 encoded -3. **No request headers** - Can't add headers after initial connection (without polyfill) -4. **HTTP/1.1 connection limit** - 6-8 connections per domain (use HTTP/2) -5. **Browser limit** - ~255 EventSource connections per domain - -## Best Practices - -1. **Include event IDs** for automatic resumption -2. **Set retry interval** explicitly for predictable behavior -3. **Use HTTP/2** to avoid connection limits -4. **Disable buffering** in reverse proxies (Nginx, Apache) -5. **Send heartbeat** every 30-60 seconds to keep connection alive -6. **Close streams** when no longer needed to free resources -7. **Handle errors** gracefully with automatic reconnection -8. **Use compression** for large payloads -9. **Monitor connection count** and set limits -10. 
**Test with slow networks** to ensure proper buffering diff --git a/.claude/skills/implementing-realtime-sync/references/websockets.md b/.claude/skills/implementing-realtime-sync/references/websockets.md deleted file mode 100644 index 4bf8c2a3f..000000000 --- a/.claude/skills/implementing-realtime-sync/references/websockets.md +++ /dev/null @@ -1,540 +0,0 @@ -# WebSocket Reference Guide - -WebSocket protocol for bidirectional real-time communication. - - -## Table of Contents - -- [Protocol Overview](#protocol-overview) -- [Authentication Patterns](#authentication-patterns) - - [Pattern 1: Cookie-Based (Recommended for Same-Origin)](#pattern-1-cookie-based-recommended-for-same-origin) - - [Pattern 2: Token in Sec-WebSocket-Protocol](#pattern-2-token-in-sec-websocket-protocol) - - [Pattern 3: First Message Authentication](#pattern-3-first-message-authentication) -- [Heartbeat (Ping/Pong)](#heartbeat-pingpong) - - [Server-Side Heartbeat (Python)](#server-side-heartbeat-python) - - [Client-Side Heartbeat (TypeScript)](#client-side-heartbeat-typescript) -- [Message Framing](#message-framing) - - [Text Frames (JSON)](#text-frames-json) - - [Binary Frames (Protocol Buffers, MessagePack)](#binary-frames-protocol-buffers-messagepack) -- [Horizontal Scaling](#horizontal-scaling) - - [Challenge](#challenge) - - [Solution 1: Redis Pub/Sub](#solution-1-redis-pubsub) - - [Solution 2: Sticky Sessions (Load Balancer)](#solution-2-sticky-sessions-load-balancer) -- [Connection Limits](#connection-limits) - - [Browser Limits](#browser-limits) - - [Server Limits](#server-limits) -- [Error Codes](#error-codes) -- [CORS Configuration](#cors-configuration) -- [Best Practices](#best-practices) - -## Protocol Overview - -WebSocket provides full-duplex communication over a single TCP connection, upgrading from HTTP. 
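During the upgrade, the only computed field is `Sec-WebSocket-Accept`: the server appends a fixed GUID defined in RFC 6455 to the client's `Sec-WebSocket-Key`, SHA-1 hashes the result, and base64-encodes the digest. A sketch:

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed value from RFC 6455

def accept_key(client_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header value from the client's key."""
    digest = hashlib.sha1((client_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()
```

With the sample key `dGhlIHNhbXBsZSBub25jZQ==` this returns `s3pPLMBiTxaQ9kYGzzhZRbK+xOo=`, the example pair given in RFC 6455.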
- -**Handshake (HTTP → WebSocket):** -``` -Client Request: -GET /ws HTTP/1.1 -Host: example.com -Upgrade: websocket -Connection: Upgrade -Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ== -Sec-WebSocket-Version: 13 - -Server Response: -HTTP/1.1 101 Switching Protocols -Upgrade: websocket -Connection: Upgrade -Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo= -``` - -## Authentication Patterns - -### Pattern 1: Cookie-Based (Recommended for Same-Origin) - -**Flow:** -1. User logs in via HTTP POST -2. Server sets HTTP-only cookie -3. WebSocket connection automatically sends cookie -4. Server validates cookie on connection - -**Implementation:** -```python -from fastapi import WebSocket, Cookie, HTTPException - -@app.websocket("/ws") -async def websocket_endpoint( - websocket: WebSocket, - session_token: str = Cookie(None) -): - if not verify_session(session_token): - await websocket.close(code=1008) # Policy violation - return - - await websocket.accept() - # Connection authenticated -``` - -**Frontend:** -```typescript -// Cookie set via HTTP login -await fetch('/api/login', { - method: 'POST', - credentials: 'include', - body: JSON.stringify({ username, password }) -}) - -// WebSocket automatically sends cookie -const ws = new WebSocket('ws://example.com/ws') -``` - -### Pattern 2: Token in Sec-WebSocket-Protocol - -**Flow:** -1. Client obtains JWT token -2. Passes token in `Sec-WebSocket-Protocol` header -3. Server validates token during handshake -4. 
Server responds with same subprotocol - -**Implementation:** -```python -from fastapi import WebSocket, HTTPException - -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - # Get token from subprotocol - protocols = websocket.headers.get("sec-websocket-protocol", "").split(", ") - - token = None - for proto in protocols: - if proto.startswith("access_token_"): - token = proto.replace("access_token_", "") - - if not token or not verify_jwt(token): - await websocket.close(code=1008) - return - - await websocket.accept(subprotocol="access_token") -``` - -**Frontend:** -```typescript -const token = await getAuthToken() -const ws = new WebSocket('ws://example.com/ws', [`access_token_${token}`]) -``` - -### Pattern 3: First Message Authentication - -**Flow:** -1. Accept WebSocket connection -2. Wait for authentication message -3. Validate credentials -4. Send confirmation or close - -**Implementation:** -```python -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - await websocket.accept() - - # Wait for auth message (5 second timeout) - try: - auth_msg = await asyncio.wait_for( - websocket.receive_json(), - timeout=5.0 - ) - except asyncio.TimeoutError: - await websocket.close(code=1008) - return - - if auth_msg.get("type") != "auth" or not verify_token(auth_msg.get("token")): - await websocket.close(code=1008) - return - - await websocket.send_json({"type": "auth_ok"}) - # Authenticated -``` - -**Frontend:** -```typescript -const ws = new WebSocket('ws://example.com/ws') - -ws.onopen = () => { - ws.send(JSON.stringify({ - type: 'auth', - token: getAuthToken() - })) -} - -ws.onmessage = (event) => { - const msg = JSON.parse(event.data) - if (msg.type === 'auth_ok') { - // Now authenticated - } -} -``` - -## Heartbeat (Ping/Pong) - -Detect dead connections and prevent timeout. 
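When a heartbeat does detect a dead connection, the client should reconnect with capped exponential backoff rather than retrying immediately. A sketch of the delay schedule (deterministic here; add random jitter in production so clients don't reconnect in lockstep):

```python
def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0) -> list[float]:
    """Reconnect delays in seconds: doubled per attempt, capped at `cap`."""
    return [min(cap, base * 2 ** i) for i in range(attempts)]
```

`backoff_delays(6)` gives `[1.0, 2.0, 4.0, 8.0, 16.0, 30.0]`.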
- -### Server-Side Heartbeat (Python) - -```python -import asyncio -from fastapi import WebSocket - -HEARTBEAT_INTERVAL = 30 # seconds - -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - await websocket.accept() - - async def heartbeat(): - while True: - try: - await asyncio.sleep(HEARTBEAT_INTERVAL) - await websocket.send_json({"type": "ping"}) - - # Wait for pong response - response = await asyncio.wait_for( - websocket.receive_json(), - timeout=10.0 - ) - - if response.get("type") != "pong": - raise ValueError("Expected pong") - except: - await websocket.close() - break - - # Start heartbeat task - heartbeat_task = asyncio.create_task(heartbeat()) - - try: - while True: - data = await websocket.receive_text() - # Process messages - finally: - heartbeat_task.cancel() -``` - -### Client-Side Heartbeat (TypeScript) - -```typescript -class HeartbeatWebSocket { - private ws: WebSocket - private heartbeatInterval: NodeJS.Timeout | null = null - - connect(url: string) { - this.ws = new WebSocket(url) - - this.ws.onopen = () => { - this.startHeartbeat() - } - - this.ws.onmessage = (event) => { - const msg = JSON.parse(event.data) - - if (msg.type === 'ping') { - this.ws.send(JSON.stringify({ type: 'pong' })) - } else { - this.handleMessage(msg) - } - } - - this.ws.onclose = () => { - this.stopHeartbeat() - } - } - - private startHeartbeat() { - this.heartbeatInterval = setInterval(() => { - if (this.ws.readyState === WebSocket.OPEN) { - this.ws.send(JSON.stringify({ type: 'ping' })) - } - }, 30000) - } - - private stopHeartbeat() { - if (this.heartbeatInterval) { - clearInterval(this.heartbeatInterval) - this.heartbeatInterval = null - } - } -} -``` - -## Message Framing - -WebSocket supports text and binary frames. 
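On the wire, each frame carries a compact binary header: a FIN bit, a 4-bit opcode, a mask bit, and a payload length. A sketch that decodes the header of a short frame (extended lengths and masking keys are omitted for brevity):

```python
def parse_frame_header(frame: bytes) -> tuple[bool, int, bool, int]:
    """Decode FIN, opcode, mask flag, and payload length of a short frame."""
    fin = bool(frame[0] & 0x80)
    opcode = frame[0] & 0x0F  # 0x1 text, 0x2 binary, 0x8 close, 0x9 ping, 0xA pong
    masked = bool(frame[1] & 0x80)
    length = frame[1] & 0x7F  # 126 and 127 signal extended length fields
    return fin, opcode, masked, length
```

`parse_frame_header(b"\x81\x05Hello")` returns `(True, 1, False, 5)`: a final text frame with a 5-byte payload.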
- -### Text Frames (JSON) - -```python -# Send -await websocket.send_json({ - "type": "message", - "content": "Hello", - "timestamp": datetime.now().isoformat() -}) - -# Receive -data = await websocket.receive_json() -``` - -### Binary Frames (Protocol Buffers, MessagePack) - -```python -import msgpack - -# Send -binary_data = msgpack.packb({"user": "alice", "msg": "hello"}) -await websocket.send_bytes(binary_data) - -# Receive -binary_data = await websocket.receive_bytes() -data = msgpack.unpackb(binary_data) -``` - -## Horizontal Scaling - -### Challenge - -WebSocket connections are stateful - user connections on Server A cannot directly reach users on Server B. - -``` -User A → Server 1 -User B → Server 2 - -User A sends message → Server 1 → ??? → Server 2 → User B -``` - -### Solution 1: Redis Pub/Sub - -Broadcast messages across all servers using Redis. - -**Architecture:** -``` -Server 1 → Redis Pub/Sub ← Server 2 - ↓ ↓ -User A User B -``` - -**Implementation:** -```python -import redis.asyncio as redis - -redis_client = redis.from_url("redis://localhost:6379") - -class ConnectionManager: - def __init__(self): - self.active_connections: set[WebSocket] = set() - self.pubsub = None - - async def connect(self, websocket: WebSocket): - await websocket.accept() - self.active_connections.add(websocket) - - # Start listening to Redis - if not self.pubsub: - self.pubsub = redis_client.pubsub() - await self.pubsub.subscribe("broadcast") - asyncio.create_task(self._listen_redis()) - - async def disconnect(self, websocket: WebSocket): - self.active_connections.remove(websocket) - - async def broadcast(self, message: str): - # Publish to Redis (reaches ALL servers) - await redis_client.publish("broadcast", message) - - async def _listen_redis(self): - """Listen to Redis and broadcast to local connections""" - async for message in self.pubsub.listen(): - if message['type'] == 'message': - await self._send_to_local(message['data']) - - async def _send_to_local(self, 
message: str): - """Send to local connections only""" - for connection in self.active_connections: - try: - await connection.send_text(message) - except: - await self.disconnect(connection) - -manager = ConnectionManager() - -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - await manager.connect(websocket) - - try: - while True: - data = await websocket.receive_text() - # Broadcast via Redis to ALL servers - await manager.broadcast(data) - except WebSocketDisconnect: - await manager.disconnect(websocket) -``` - -### Solution 2: Sticky Sessions (Load Balancer) - -Route same user to same server using load balancer. - -**Nginx Configuration:** -```nginx -upstream websocket_backend { - # Sticky sessions based on client IP - ip_hash; - - server backend1:8080; - server backend2:8080; - server backend3:8080; -} - -server { - listen 80; - - location /ws { - proxy_pass http://websocket_backend; - proxy_http_version 1.1; - proxy_set_header Upgrade $http_upgrade; - proxy_set_header Connection "upgrade"; - proxy_set_header Host $host; - proxy_set_header X-Real-IP $remote_addr; - proxy_read_timeout 86400; # 24 hours - } -} -``` - -**HAProxy Configuration:** -``` -backend websocket_backend - balance source # Hash based on source IP - hash-type consistent - - server backend1 10.0.1.1:8080 check - server backend2 10.0.1.2:8080 check - server backend3 10.0.1.3:8080 check -``` - -**Pros:** -- Simple - no Redis dependency -- No broadcast needed (users on same server) - -**Cons:** -- Users can't communicate across servers -- Uneven load distribution -- Reconnection may hit different server - -## Connection Limits - -### Browser Limits - -Browsers limit concurrent connections per domain: -- HTTP/1.1: 6-8 connections -- WebSocket: Typically 200-255 per domain - -**Workaround: Use subdomains** -```typescript -const ws1 = new WebSocket('ws://ws1.example.com/ws') -const ws2 = new WebSocket('ws://ws2.example.com/ws') -const ws3 = new 
WebSocket('ws://ws3.example.com/ws') -``` - -### Server Limits - -**File Descriptor Limits:** -```bash -# Check current limit -ulimit -n - -# Set higher limit -ulimit -n 65536 - -# Or in /etc/security/limits.conf: -* soft nofile 65536 -* hard nofile 65536 -``` - -**Per-Process Connection Limits:** -- Python (asyncio): ~10,000+ connections per process -- Rust (tokio): ~100,000+ connections per process -- Go: ~1,000,000+ connections (goroutines) - -## Error Codes - -Common WebSocket close codes: - -| Code | Meaning | When to Use | -|------|---------|-------------| -| 1000 | Normal closure | Clean shutdown | -| 1001 | Going away | Server restart, browser navigation | -| 1002 | Protocol error | Invalid message format | -| 1003 | Unsupported data | Wrong data type | -| 1008 | Policy violation | Authentication failed | -| 1011 | Unexpected condition | Server error | - -**Implementation:** -```python -# Normal close -await websocket.close(code=1000, reason="Goodbye") - -# Authentication failed -await websocket.close(code=1008, reason="Unauthorized") - -# Server error -await websocket.close(code=1011, reason="Internal error") -``` - -## CORS Configuration - -WebSocket upgrade requests include Origin header. 
- -**Python (FastAPI):** -```python -from fastapi.middleware.cors import CORSMiddleware - -app.add_middleware( - CORSMiddleware, - allow_origins=["https://example.com"], - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - -@app.websocket("/ws") -async def websocket_endpoint(websocket: WebSocket): - # Check origin manually if needed - origin = websocket.headers.get("origin") - if origin not in ["https://example.com", "https://app.example.com"]: - await websocket.close(code=1008) - return - - await websocket.accept() -``` - -**Go (gorilla/websocket):** -```go -var upgrader = websocket.Upgrader{ - CheckOrigin: func(r *http.Request) bool { - origin := r.Header.Get("Origin") - return origin == "https://example.com" || - origin == "https://app.example.com" - }, -} -``` - -## Best Practices - -1. **Always validate authentication** during handshake -2. **Implement heartbeat** to detect dead connections -3. **Use exponential backoff** for reconnection (client-side) -4. **Rate limit** messages per connection -5. **Close connections gracefully** with appropriate codes -6. **Monitor connection count** and set alerts -7. **Use Redis pub/sub** for horizontal scaling -8. **Configure CORS** explicitly for security -9. **Set timeouts** for read/write operations -10. **Log errors** with connection metadata for debugging diff --git a/.claude/skills/implementing-realtime-sync/scripts/test_websocket_connection.py b/.claude/skills/implementing-realtime-sync/scripts/test_websocket_connection.py deleted file mode 100644 index e7a0fc518..000000000 --- a/.claude/skills/implementing-realtime-sync/scripts/test_websocket_connection.py +++ /dev/null @@ -1,164 +0,0 @@ -#!/usr/bin/env python3 -""" -WebSocket Connection Testing Tool - -Tests WebSocket server connectivity, authentication, and message handling. -Useful for validating WebSocket implementations before frontend integration. 
- -Usage: - python test_websocket_connection.py --url ws://localhost:8000/ws - python test_websocket_connection.py --url wss://api.example.com/ws --auth-token SECRET -""" - -import asyncio -import argparse -import json -import sys -from datetime import datetime -from typing import Optional - -try: - import websockets -except ImportError: - print("❌ Error: websockets library not installed") - print("Install with: pip install websockets") - sys.exit(1) - - -async def test_connection( - url: str, - auth_token: Optional[str] = None, - test_messages: int = 5, - timeout: int = 10 -): - """Test WebSocket connection with optional authentication.""" - - headers = {} - if auth_token: - headers["Authorization"] = f"Bearer {auth_token}" - - print(f"🔌 Connecting to {url}...") - print(f" Auth: {'Yes' if auth_token else 'No'}") - print(f" Timeout: {timeout}s") - print("") - - try: - async with websockets.connect(url, extra_headers=headers) as ws: - print(f"✅ Connected successfully!") - print(f" Server: {ws.response_headers.get('Server', 'Unknown')}") - print(f" Protocol: {ws.subprotocol or 'None'}") - print("") - - # Test ping/pong - print("🏓 Testing ping/pong...") - pong_waiter = await ws.ping() - latency = await asyncio.wait_for(pong_waiter, timeout=timeout) - print(f"✅ Pong received (latency: {latency:.2f}s)") - print("") - - # Send test messages - print(f"📤 Sending {test_messages} test messages...") - for i in range(test_messages): - message = { - "type": "test", - "sequence": i + 1, - "timestamp": datetime.utcnow().isoformat(), - "payload": f"Test message {i + 1}" - } - - await ws.send(json.dumps(message)) - print(f" Sent: {message['payload']}") - - # Wait for response - try: - response = await asyncio.wait_for( - ws.recv(), - timeout=timeout - ) - print(f" Received: {response[:100]}...") - except asyncio.TimeoutError: - print(f" ⚠️ No response (timeout: {timeout}s)") - - await asyncio.sleep(0.5) - - print("") - print("✅ All tests passed!") - print("") - print("📊 Summary:") 
- print(f" Connection: Success") - print(f" Ping/Pong: Success") - print(f" Messages sent: {test_messages}") - - return True - - except websockets.exceptions.InvalidStatusCode as e: - print(f"❌ Connection failed: HTTP {e.status_code}") - if e.status_code == 401: - print(" Hint: Check auth token") - elif e.status_code == 403: - print(" Hint: Authorization denied") - return False - - except websockets.exceptions.InvalidURI: - print(f"❌ Invalid WebSocket URI: {url}") - print(" Hint: Use ws:// or wss:// scheme") - return False - - except asyncio.TimeoutError: - print(f"❌ Connection timeout after {timeout}s") - print(" Hint: Server may be down or unreachable") - return False - - except ConnectionRefusedError: - print(f"❌ Connection refused") - print(" Hint: Is the server running?") - return False - - except Exception as e: - print(f"❌ Unexpected error: {e}") - return False - - -def main(): - parser = argparse.ArgumentParser( - description="Test WebSocket server connectivity" - ) - parser.add_argument( - "--url", - required=True, - help="WebSocket URL (ws:// or wss://)" - ) - parser.add_argument( - "--auth-token", - help="Bearer token for authentication" - ) - parser.add_argument( - "--messages", - type=int, - default=5, - help="Number of test messages to send (default: 5)" - ) - parser.add_argument( - "--timeout", - type=int, - default=10, - help="Timeout in seconds (default: 10)" - ) - - args = parser.parse_args() - - # Run async test - success = asyncio.run( - test_connection( - url=args.url, - auth_token=args.auth_token, - test_messages=args.messages, - timeout=args.timeout - ) - ) - - sys.exit(0 if success else 1) - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/implementing-search-filter/SKILL.md b/.claude/skills/implementing-search-filter/SKILL.md deleted file mode 100644 index 3af0787f8..000000000 --- a/.claude/skills/implementing-search-filter/SKILL.md +++ /dev/null @@ -1,204 +0,0 @@ ---- -name: implementing-search-filter -description: 
Implements search and filter interfaces for both frontend (React/TypeScript) and backend (Python) with debouncing, query management, and database integration. Use when adding search functionality, building filter UIs, implementing faceted search, or optimizing search performance. ---- - -# Search & Filter Implementation - -Implement search and filter interfaces with comprehensive frontend components and backend query optimization. - -## Purpose - -This skill provides production-ready patterns for implementing search and filtering functionality across the full stack. It covers React/TypeScript components for the frontend (search inputs, filter UIs, autocomplete) and Python patterns for the backend (SQLAlchemy queries, Elasticsearch integration, API design). The skill emphasizes performance optimization, accessibility, and user experience. - -## When to Use - -- Building product search with category and price filters -- Implementing autocomplete/typeahead search -- Creating faceted search interfaces with dynamic counts -- Adding search to data tables or lists -- Building advanced boolean search for power users -- Implementing backend search with SQLAlchemy or Django ORM -- Integrating Elasticsearch for full-text search -- Optimizing search performance with debouncing and caching -- Creating accessible search experiences - -## Core Components - -### Frontend Search Patterns - -**Search Input with Debouncing** -- Implement 300ms debounce for performance -- Show loading states during search -- Clear button (X) for resetting -- Keyboard shortcuts (Cmd/Ctrl+K) -- See `references/search-input-patterns.md` - -**Autocomplete/Typeahead** -- Suggestion dropdown with keyboard navigation -- Highlight matched text in suggestions -- Recent searches and popular items -- Prevent request flooding with debouncing -- See `references/autocomplete-patterns.md` - -**Filter UI Components** -- Checkbox filters for multi-select -- Range sliders for numerical values -- Dropdown filters for 
single selection -- Filter chips showing active selections -- See `references/filter-ui-patterns.md` - -### Backend Query Patterns - -**Database Query Building** -- Dynamic query construction with SQLAlchemy -- Django ORM filter chaining -- Index optimization for search columns -- Full-text search in PostgreSQL -- See `references/database-querying.md` - -**Elasticsearch Integration** -- Document indexing strategies -- Query DSL for complex searches -- Faceted aggregations -- Relevance scoring and boosting -- See `references/elasticsearch-integration.md` - -**API Design** -- RESTful search endpoints -- Query parameter validation -- Pagination with cursor/offset -- Response caching strategies -- See `references/api-design.md` - -## Implementation Workflows - -### Client-Side Search (<1000 items) - -1. Load data into memory -2. Implement filter functions in JavaScript -3. Apply debounced search on text input -4. Update results instantly -5. Maintain filter state in React - -### Server-Side Search (>1000 items) - -1. Design search API endpoint -2. Validate and sanitize query parameters -3. Build database query dynamically -4. Apply pagination -5. Return results with metadata -6. Cache frequent queries - -### Hybrid Approach - -1. Use client-side filtering for immediate feedback -2. Fetch server results in background -3. Merge and deduplicate results -4. Update UI progressively -5. 
Cache recent searches locally - -## Performance Optimization - -### Frontend Optimization - -**Debouncing Implementation** -- Use `debounce` from lodash or custom implementation -- Cancel pending requests on new input -- Show skeleton loaders during fetch -- Script: `scripts/debounce_calculator.js` - -**Query Parameter Management** -- Sync filters with URL for shareable searches -- Use React Router or Next.js for URL state -- Compress complex queries -- See `references/query-parameter-management.md` - -### Backend Optimization - -**Query Optimization** -- Create appropriate database indexes -- Use query analyzers to identify bottlenecks -- Implement query result caching -- Script: `scripts/generate_filter_query.py` - -**Validation & Security** -- Sanitize all search inputs -- Prevent SQL injection -- Rate limit search endpoints -- Script: `scripts/validate_search_params.py` - -## Accessibility Requirements - -### ARIA Patterns - -- Use `role="search"` for search regions -- Implement `aria-live` for result updates -- Provide clear labels for filters -- Support keyboard-only navigation - -### Keyboard Support - -- Tab through all interactive elements -- Arrow keys for autocomplete navigation -- Escape to close dropdowns -- Enter to select/submit - -## Technology Stack - -### Frontend Libraries - -**Primary: Downshift (Autocomplete)** -- Accessible autocomplete primitives -- Headless/unstyled for flexibility -- WAI-ARIA compliant -- Install: `npm install downshift` - -**Alternative: React Select** -- Full-featured select/filter component -- Built-in async search -- Multi-select support - -### Backend Technologies - -**Python/SQLAlchemy** -- Dynamic query building -- Relationship loading optimization -- Query result pagination - -**Python/Django** -- Django Filter backend -- Django REST Framework filters -- Full-text search with PostgreSQL - -**Elasticsearch (Python)** -- elasticsearch-py client -- elasticsearch-dsl for query building - -## Bundled Resources - -### 
References -- `references/search-input-patterns.md` - Input implementations -- `references/autocomplete-patterns.md` - Typeahead patterns -- `references/filter-ui-patterns.md` - Filter components -- `references/database-querying.md` - SQL query patterns -- `references/elasticsearch-integration.md` - Elasticsearch setup -- `references/api-design.md` - API endpoint patterns -- `references/performance-optimization.md` - Performance tips -- `references/library-comparison.md` - Library evaluation - -### Scripts -- `scripts/generate_filter_query.py` - Build SQL/ES queries -- `scripts/validate_search_params.py` - Validate inputs -- `scripts/debounce_calculator.js` - Calculate debounce timing - -### Examples -- `examples/product-search.tsx` - E-commerce search -- `examples/autocomplete-search.tsx` - Autocomplete implementation -- `examples/sqlalchemy_search.py` - SQLAlchemy patterns -- `examples/fastapi_search.py` - FastAPI search endpoint -- `examples/django_filter_backend.py` - Django filters - -### Assets -- `assets/filter-config-schema.json` - Filter configuration -- `assets/search-api-spec.json` - OpenAPI specification \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/assets/filter-config-schema.json b/.claude/skills/implementing-search-filter/assets/filter-config-schema.json deleted file mode 100644 index f918f887c..000000000 --- a/.claude/skills/implementing-search-filter/assets/filter-config-schema.json +++ /dev/null @@ -1,435 +0,0 @@ -{ - "$schema": "http://json-schema.org/draft-07/schema#", - "title": "Search Filter Configuration", - "description": "Configuration schema for search and filter interfaces", - "type": "object", - "properties": { - "searchConfig": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "default": true, - "description": "Enable search functionality" - }, - "debounceMs": { - "type": "integer", - "minimum": 0, - "maximum": 2000, - "default": 300, - "description": "Debounce delay in 
milliseconds" - }, - "minChars": { - "type": "integer", - "minimum": 0, - "maximum": 10, - "default": 2, - "description": "Minimum characters before triggering search" - }, - "maxLength": { - "type": "integer", - "minimum": 10, - "maximum": 500, - "default": 200, - "description": "Maximum search query length" - }, - "placeholder": { - "type": "string", - "default": "Search...", - "description": "Search input placeholder text" - }, - "searchFields": { - "type": "array", - "items": { - "type": "string" - }, - "default": ["title", "description", "tags"], - "description": "Fields to search in" - }, - "enableAutocomplete": { - "type": "boolean", - "default": true, - "description": "Enable autocomplete suggestions" - }, - "autocompleteLimit": { - "type": "integer", - "minimum": 1, - "maximum": 50, - "default": 10, - "description": "Maximum autocomplete suggestions" - } - } - }, - "filters": { - "type": "array", - "items": { - "$ref": "#/definitions/filterDefinition" - }, - "description": "List of available filters" - }, - "facets": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "default": true, - "description": "Enable faceted search" - }, - "showCounts": { - "type": "boolean", - "default": true, - "description": "Show result counts for each facet" - }, - "dynamicCounts": { - "type": "boolean", - "default": true, - "description": "Update counts dynamically as filters change" - }, - "collapsible": { - "type": "boolean", - "default": true, - "description": "Allow facet sections to be collapsed" - } - } - }, - "sorting": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "default": true, - "description": "Enable sorting options" - }, - "defaultSort": { - "type": "string", - "default": "relevance", - "description": "Default sort order" - }, - "options": { - "type": "array", - "items": { - "$ref": "#/definitions/sortOption" - }, - "description": "Available sort options" - } - } - }, - "pagination": { - "type": 
"object", - "properties": { - "enabled": { - "type": "boolean", - "default": true, - "description": "Enable pagination" - }, - "defaultPerPage": { - "type": "integer", - "minimum": 1, - "maximum": 100, - "default": 20, - "description": "Default results per page" - }, - "perPageOptions": { - "type": "array", - "items": { - "type": "integer" - }, - "default": [10, 20, 50, 100], - "description": "Available per-page options" - }, - "maxPages": { - "type": "integer", - "minimum": 1, - "maximum": 1000, - "default": 100, - "description": "Maximum number of pages" - }, - "showInfo": { - "type": "boolean", - "default": true, - "description": "Show pagination info (e.g., 'Showing 1-20 of 100')" - } - } - }, - "ui": { - "type": "object", - "properties": { - "layout": { - "type": "string", - "enum": ["sidebar", "top", "modal", "drawer"], - "default": "sidebar", - "description": "Filter panel layout" - }, - "mobileLayout": { - "type": "string", - "enum": ["drawer", "modal", "accordion"], - "default": "drawer", - "description": "Mobile filter layout" - }, - "showActiveFilters": { - "type": "boolean", - "default": true, - "description": "Display active filter chips" - }, - "showClearAll": { - "type": "boolean", - "default": true, - "description": "Show 'Clear all' button" - }, - "animations": { - "type": "boolean", - "default": true, - "description": "Enable UI animations" - }, - "theme": { - "type": "string", - "enum": ["light", "dark", "auto"], - "default": "auto", - "description": "UI theme" - } - } - }, - "performance": { - "type": "object", - "properties": { - "caching": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "default": true, - "description": "Enable result caching" - }, - "ttl": { - "type": "integer", - "minimum": 0, - "maximum": 3600, - "default": 300, - "description": "Cache TTL in seconds" - }, - "maxSize": { - "type": "integer", - "minimum": 0, - "maximum": 1000, - "default": 100, - "description": "Maximum cache entries" - } - } - 
}, - "virtualization": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "default": false, - "description": "Enable virtual scrolling for large result sets" - }, - "threshold": { - "type": "integer", - "minimum": 100, - "maximum": 10000, - "default": 1000, - "description": "Item count threshold for virtualization" - } - } - }, - "lazyLoading": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "default": true, - "description": "Enable lazy loading of results" - }, - "imageLoading": { - "type": "string", - "enum": ["eager", "lazy", "auto"], - "default": "lazy", - "description": "Image loading strategy" - } - } - } - } - }, - "api": { - "type": "object", - "properties": { - "searchEndpoint": { - "type": "string", - "format": "uri", - "description": "Search API endpoint" - }, - "autocompleteEndpoint": { - "type": "string", - "format": "uri", - "description": "Autocomplete API endpoint" - }, - "method": { - "type": "string", - "enum": ["GET", "POST"], - "default": "POST", - "description": "HTTP method for search requests" - }, - "headers": { - "type": "object", - "additionalProperties": { - "type": "string" - }, - "description": "Additional headers for API requests" - }, - "timeout": { - "type": "integer", - "minimum": 1000, - "maximum": 60000, - "default": 10000, - "description": "API request timeout in milliseconds" - }, - "retries": { - "type": "integer", - "minimum": 0, - "maximum": 5, - "default": 3, - "description": "Number of retry attempts on failure" - } - }, - "required": ["searchEndpoint"] - } - }, - "definitions": { - "filterDefinition": { - "type": "object", - "properties": { - "id": { - "type": "string", - "description": "Unique filter identifier" - }, - "label": { - "type": "string", - "description": "Display label for the filter" - }, - "type": { - "type": "string", - "enum": ["checkbox", "radio", "range", "dropdown", "date", "boolean"], - "description": "Filter UI type" - }, - "field": { - "type": 
"string", - "description": "Field name in data/API" - }, - "dataType": { - "type": "string", - "enum": ["string", "number", "boolean", "date"], - "description": "Data type of the field" - }, - "options": { - "type": "array", - "items": { - "type": "object", - "properties": { - "value": { - "type": ["string", "number", "boolean"] - }, - "label": { - "type": "string" - }, - "count": { - "type": "integer", - "minimum": 0 - } - }, - "required": ["value", "label"] - }, - "description": "Predefined options for select-type filters" - }, - "range": { - "type": "object", - "properties": { - "min": { - "type": "number" - }, - "max": { - "type": "number" - }, - "step": { - "type": "number" - }, - "prefix": { - "type": "string" - }, - "suffix": { - "type": "string" - } - }, - "description": "Configuration for range filters" - }, - "multiple": { - "type": "boolean", - "default": true, - "description": "Allow multiple selections" - }, - "required": { - "type": "boolean", - "default": false, - "description": "Is this filter required" - }, - "defaultValue": { - "description": "Default filter value" - }, - "placeholder": { - "type": "string", - "description": "Placeholder text for input filters" - }, - "validation": { - "type": "object", - "properties": { - "pattern": { - "type": "string", - "description": "Regex pattern for validation" - }, - "min": { - "type": "number", - "description": "Minimum value" - }, - "max": { - "type": "number", - "description": "Maximum value" - }, - "minLength": { - "type": "integer", - "description": "Minimum string length" - }, - "maxLength": { - "type": "integer", - "description": "Maximum string length" - } - } - } - }, - "required": ["id", "label", "type", "field"] - }, - "sortOption": { - "type": "object", - "properties": { - "value": { - "type": "string", - "description": "Sort value sent to API" - }, - "label": { - "type": "string", - "description": "Display label" - }, - "field": { - "type": "string", - "description": "Field to sort by" - }, - 
"order": { - "type": "string", - "enum": ["asc", "desc"], - "description": "Sort order" - } - }, - "required": ["value", "label", "field", "order"] - } - }, - "required": ["api"] -} \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/assets/search-api-spec.json b/.claude/skills/implementing-search-filter/assets/search-api-spec.json deleted file mode 100644 index 3a78a6967..000000000 --- a/.claude/skills/implementing-search-filter/assets/search-api-spec.json +++ /dev/null @@ -1,615 +0,0 @@ -{ - "openapi": "3.0.3", - "info": { - "title": "Search & Filter API", - "description": "RESTful API specification for search and filter functionality", - "version": "1.0.0" - }, - "servers": [ - { - "url": "https://api.example.com/v1", - "description": "Production server" - }, - { - "url": "http://localhost:8000/v1", - "description": "Development server" - } - ], - "paths": { - "/search": { - "get": { - "summary": "Search with query parameters", - "description": "Perform search using URL query parameters. 
Suitable for simple searches.", - "operationId": "searchGet", - "tags": ["Search"], - "parameters": [ - { - "name": "q", - "in": "query", - "description": "Search query text", - "required": false, - "schema": { - "type": "string", - "minLength": 1, - "maxLength": 200 - }, - "example": "laptop" - }, - { - "name": "category", - "in": "query", - "description": "Filter by categories", - "required": false, - "style": "form", - "explode": true, - "schema": { - "type": "array", - "items": { - "type": "string" - } - } - }, - { - "name": "brand", - "in": "query", - "description": "Filter by brands", - "required": false, - "style": "form", - "explode": true, - "schema": { - "type": "array", - "items": { - "type": "string" - } - } - }, - { - "name": "min_price", - "in": "query", - "description": "Minimum price filter", - "required": false, - "schema": { - "type": "number", - "minimum": 0 - } - }, - { - "name": "max_price", - "in": "query", - "description": "Maximum price filter", - "required": false, - "schema": { - "type": "number", - "minimum": 0 - } - }, - { - "name": "in_stock", - "in": "query", - "description": "Filter for in-stock items only", - "required": false, - "schema": { - "type": "boolean" - } - }, - { - "name": "sort", - "in": "query", - "description": "Sort order", - "required": false, - "schema": { - "type": "string", - "enum": ["relevance", "price_asc", "price_desc", "newest", "rating"], - "default": "relevance" - } - }, - { - "name": "page", - "in": "query", - "description": "Page number", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "maximum": 100, - "default": 1 - } - }, - { - "name": "per_page", - "in": "query", - "description": "Results per page", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "maximum": 100, - "default": 20 - } - } - ], - "responses": { - "200": { - "description": "Successful search", - "content": { - "application/json": { - "schema": { - "$ref": 
"#/components/schemas/SearchResponse" - } - } - } - }, - "400": { - "description": "Invalid parameters", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ErrorResponse" - } - } - } - }, - "429": { - "description": "Rate limit exceeded", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ErrorResponse" - } - } - } - }, - "500": { - "description": "Internal server error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ErrorResponse" - } - } - } - } - } - }, - "post": { - "summary": "Search with request body", - "description": "Perform search using JSON request body. Suitable for complex searches.", - "operationId": "searchPost", - "tags": ["Search"], - "requestBody": { - "required": true, - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/SearchRequest" - } - } - } - }, - "responses": { - "200": { - "description": "Successful search", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/SearchResponse" - } - } - } - }, - "400": { - "description": "Invalid request", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ErrorResponse" - } - } - } - }, - "429": { - "description": "Rate limit exceeded", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ErrorResponse" - } - } - } - }, - "500": { - "description": "Internal server error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/ErrorResponse" - } - } - } - } - } - } - }, - "/autocomplete": { - "get": { - "summary": "Get search suggestions", - "description": "Returns autocomplete suggestions based on query prefix", - "operationId": "autocomplete", - "tags": ["Search"], - "parameters": [ - { - "name": "q", - "in": "query", - "description": "Query prefix", - "required": true, - "schema": { - "type": "string", - "minLength": 2, - "maxLength": 50 - 
} - }, - { - "name": "limit", - "in": "query", - "description": "Maximum suggestions", - "required": false, - "schema": { - "type": "integer", - "minimum": 1, - "maximum": 20, - "default": 10 - } - }, - { - "name": "type", - "in": "query", - "description": "Suggestion type filter", - "required": false, - "schema": { - "type": "string", - "enum": ["all", "products", "categories", "brands"] - } - } - ], - "responses": { - "200": { - "description": "Suggestions found", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/AutocompleteResponse" - } - } - } - } - } - } - } - }, - "components": { - "schemas": { - "SearchRequest": { - "type": "object", - "properties": { - "query": { - "type": "string", - "minLength": 1, - "maxLength": 200, - "description": "Search query text" - }, - "filters": { - "$ref": "#/components/schemas/SearchFilters" - }, - "sort_by": { - "type": "string", - "enum": ["relevance", "price_asc", "price_desc", "newest", "rating"], - "default": "relevance", - "description": "Sort order" - }, - "page": { - "type": "integer", - "minimum": 1, - "maximum": 100, - "default": 1, - "description": "Page number" - }, - "per_page": { - "type": "integer", - "minimum": 1, - "maximum": 100, - "default": 20, - "description": "Results per page" - }, - "include_facets": { - "type": "boolean", - "default": true, - "description": "Include facet counts in response" - } - } - }, - "SearchFilters": { - "type": "object", - "properties": { - "categories": { - "type": "array", - "items": { - "type": "string" - }, - "maxItems": 20, - "description": "Category filters" - }, - "brands": { - "type": "array", - "items": { - "type": "string" - }, - "maxItems": 20, - "description": "Brand filters" - }, - "min_price": { - "type": "number", - "minimum": 0, - "description": "Minimum price" - }, - "max_price": { - "type": "number", - "minimum": 0, - "description": "Maximum price" - }, - "in_stock": { - "type": "boolean", - "description": "Only in-stock 
items" - }, - "min_rating": { - "type": "number", - "minimum": 1, - "maximum": 5, - "description": "Minimum rating" - }, - "tags": { - "type": "array", - "items": { - "type": "string" - }, - "description": "Tag filters" - } - } - }, - "SearchResponse": { - "type": "object", - "required": ["products", "total", "page", "per_page", "total_pages"], - "properties": { - "products": { - "type": "array", - "items": { - "$ref": "#/components/schemas/Product" - }, - "description": "Search results" - }, - "total": { - "type": "integer", - "minimum": 0, - "description": "Total number of results" - }, - "page": { - "type": "integer", - "minimum": 1, - "description": "Current page" - }, - "per_page": { - "type": "integer", - "minimum": 1, - "description": "Results per page" - }, - "total_pages": { - "type": "integer", - "minimum": 0, - "description": "Total number of pages" - }, - "facets": { - "$ref": "#/components/schemas/Facets" - }, - "query_time_ms": { - "type": "number", - "description": "Query execution time in milliseconds" - }, - "cached": { - "type": "boolean", - "description": "Whether result was served from cache" - } - } - }, - "Product": { - "type": "object", - "required": ["id", "title", "price", "category", "brand"], - "properties": { - "id": { - "type": "string", - "description": "Unique product identifier" - }, - "title": { - "type": "string", - "description": "Product title" - }, - "description": { - "type": "string", - "description": "Product description" - }, - "price": { - "type": "number", - "minimum": 0, - "description": "Product price" - }, - "category": { - "type": "string", - "description": "Product category" - }, - "brand": { - "type": "string", - "description": "Product brand" - }, - "rating": { - "type": "number", - "minimum": 0, - "maximum": 5, - "description": "Average rating" - }, - "review_count": { - "type": "integer", - "minimum": 0, - "description": "Number of reviews" - }, - "in_stock": { - "type": "boolean", - "description": "Stock 
availability" - }, - "image_url": { - "type": "string", - "format": "uri", - "description": "Product image URL" - }, - "tags": { - "type": "array", - "items": { - "type": "string" - }, - "description": "Product tags" - } - } - }, - "Facets": { - "type": "object", - "properties": { - "categories": { - "type": "array", - "items": { - "$ref": "#/components/schemas/Facet" - } - }, - "brands": { - "type": "array", - "items": { - "$ref": "#/components/schemas/Facet" - } - }, - "price_ranges": { - "type": "array", - "items": { - "$ref": "#/components/schemas/PriceRangeFacet" - } - }, - "ratings": { - "type": "array", - "items": { - "$ref": "#/components/schemas/Facet" - } - } - } - }, - "Facet": { - "type": "object", - "required": ["value", "count"], - "properties": { - "value": { - "type": "string", - "description": "Facet value" - }, - "count": { - "type": "integer", - "minimum": 0, - "description": "Number of items" - } - } - }, - "PriceRangeFacet": { - "type": "object", - "required": ["min", "max", "count", "label"], - "properties": { - "min": { - "type": "number", - "description": "Minimum price in range" - }, - "max": { - "type": "number", - "description": "Maximum price in range (null for open-ended)" - }, - "label": { - "type": "string", - "description": "Display label" - }, - "count": { - "type": "integer", - "minimum": 0, - "description": "Number of items in range" - } - } - }, - "AutocompleteResponse": { - "type": "object", - "required": ["query", "suggestions"], - "properties": { - "query": { - "type": "string", - "description": "Original query" - }, - "suggestions": { - "type": "array", - "items": { - "$ref": "#/components/schemas/Suggestion" - } - } - } - }, - "Suggestion": { - "type": "object", - "required": ["text", "type"], - "properties": { - "text": { - "type": "string", - "description": "Suggestion text" - }, - "type": { - "type": "string", - "enum": ["product", "category", "brand", "query"], - "description": "Suggestion type" - }, - "category": { - 
"type": "string", - "description": "Associated category" - }, - "product_id": { - "type": "string", - "description": "Product ID (for product suggestions)" - }, - "count": { - "type": "integer", - "description": "Result count for this suggestion" - } - } - }, - "ErrorResponse": { - "type": "object", - "required": ["error", "message", "timestamp"], - "properties": { - "error": { - "type": "string", - "description": "Error code" - }, - "message": { - "type": "string", - "description": "Error message" - }, - "details": { - "type": "object", - "description": "Additional error details" - }, - "timestamp": { - "type": "string", - "format": "date-time", - "description": "Error timestamp" - } - } - } - } - } -} \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/examples/autocomplete-search.tsx b/.claude/skills/implementing-search-filter/examples/autocomplete-search.tsx deleted file mode 100644 index d81908d05..000000000 --- a/.claude/skills/implementing-search-filter/examples/autocomplete-search.tsx +++ /dev/null @@ -1,545 +0,0 @@ -/** - * Advanced autocomplete search implementation with Downshift - * - * Features: - * - Accessible autocomplete with keyboard navigation - * - Debounced API calls - * - Recent searches and suggestions - * - Highlighting of matched text - * - Loading states and error handling - */ - -import React, { useState, useEffect, useCallback, useRef } from 'react'; -import { useCombobox } from 'downshift'; -import { Search, Clock, TrendingUp, X, Loader2 } from 'lucide-react'; -import { useDebounce } from '../hooks/useDebounce'; - -// Types -interface Suggestion { - id: string; - text: string; - type: 'product' | 'category' | 'brand' | 'recent' | 'trending'; - category?: string; - count?: number; - metadata?: any; -} - -interface AutocompleteProps { - onSearch: (query: string) => void; - onSelect: (item: Suggestion) => void; - placeholder?: string; - minChars?: number; - debounceMs?: number; - maxSuggestions?: number; -} - 
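// The `useDebounce` hook imported above lives in `../hooks/useDebounce` and is
// not part of this diff. The underlying pattern is a plain trailing-edge
// debounce; below is a minimal framework-free sketch — the names and the
// signature are illustrative assumptions, not the project's actual hook.

```typescript
// Trailing-edge debounce: `fn` runs only after `waitMs` ms of silence.
// Each new call cancels the previous pending timer, so rapid keystrokes
// collapse into a single trailing invocation.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Example: collapse rapid keystrokes into one search call.
const calls: string[] = [];
const search = debounce((q: string) => calls.push(q), 50);
search("lap");
search("lapt");
search("laptop");
```

// In the hook form, the same timer logic runs inside `useEffect`, with
// `clearTimeout` in the cleanup so each new keystroke cancels the pending
// state update.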
-export function AutocompleteSearch({
-  onSearch,
-  onSelect,
-  placeholder = 'Search products, categories, brands...',
-  minChars = 2,
-  debounceMs = 300,
-  maxSuggestions = 10
-}: AutocompleteProps) {
-  // State
-  const [inputValue, setInputValue] = useState('');
-  const [suggestions, setSuggestions] = useState<Suggestion[]>([]);
-  const [recentSearches, setRecentSearches] = useState<Suggestion[]>([]);
-  const [isLoading, setIsLoading] = useState(false);
-  const [error, setError] = useState<string | null>(null);
-
-  // Refs
-  const abortControllerRef = useRef<AbortController | null>(null);
-
-  // Debounced search value
-  const debouncedSearchTerm = useDebounce(inputValue, debounceMs);
-
-  // Load recent searches from localStorage
-  useEffect(() => {
-    const stored = localStorage.getItem('recentSearches');
-    if (stored) {
-      try {
-        const parsed: Suggestion[] = JSON.parse(stored);
-        setRecentSearches(parsed.slice(0, 5));
-      } catch (e) {
-        console.error('Failed to parse recent searches:', e);
-      }
-    }
-  }, []);
-
-  // Fetch suggestions
-  const fetchSuggestions = useCallback(async (query: string) => {
-    // Cancel previous request
-    if (abortControllerRef.current) {
-      abortControllerRef.current.abort();
-    }
-
-    // Don't search if query too short
-    if (query.length < minChars) {
-      setSuggestions([]);
-      return;
-    }
-
-    // Create new abort controller
-    abortControllerRef.current = new AbortController();
-
-    setIsLoading(true);
-    setError(null);
-
-    try {
-      const response = await fetch(`/api/autocomplete?q=${encodeURIComponent(query)}&limit=${maxSuggestions}`, {
-        signal: abortControllerRef.current.signal
-      });
-
-      if (!response.ok) {
-        throw new Error('Failed to fetch suggestions');
-      }
-
-      const data = await response.json();
-      setSuggestions(data.suggestions);
-    } catch (err) {
-      if ((err as Error).name === 'AbortError') {
-        // Request was cancelled, ignore
-        return;
-      }
-      setError('Failed to load suggestions');
-      setSuggestions([]);
-    } finally {
-      setIsLoading(false);
-    }
-  }, [minChars, maxSuggestions]);
-
-  // Fetch suggestions when debounced value changes
- useEffect(() => { - if (debouncedSearchTerm) { - fetchSuggestions(debouncedSearchTerm); - } else { - setSuggestions([]); - } - }, [debouncedSearchTerm, fetchSuggestions]); - - // Save to recent searches - const saveToRecent = useCallback((text: string) => { - const newRecent: Suggestion = { - id: `recent-${Date.now()}`, - text, - type: 'recent' - }; - - const updated = [ - newRecent, - ...recentSearches.filter(r => r.text !== text) - ].slice(0, 5); - - setRecentSearches(updated); - localStorage.setItem('recentSearches', JSON.stringify(updated)); - }, [recentSearches]); - - // Get display items (suggestions or recent/trending) - const displayItems = inputValue.length >= minChars - ? suggestions - : recentSearches; - - // Setup Downshift - const { - isOpen, - getMenuProps, - getInputProps, - highlightedIndex, - getItemProps, - selectedItem, - reset - } = useCombobox({ - items: displayItems, - inputValue, - onInputValueChange: ({ inputValue: newValue }) => { - setInputValue(newValue || ''); - }, - onSelectedItemChange: ({ selectedItem }) => { - if (selectedItem) { - setInputValue(selectedItem.text); - onSelect(selectedItem); - saveToRecent(selectedItem.text); - reset(); - } - }, - itemToString: (item) => item?.text || '' - }); - - // Handle form submission - const handleSubmit = (e: React.FormEvent) => { - e.preventDefault(); - if (inputValue.trim()) { - onSearch(inputValue); - saveToRecent(inputValue); - reset(); - } - }; - - // Clear input - const handleClear = () => { - setInputValue(''); - setSuggestions([]); - reset(); - }; - - return ( -
-    <div className="autocomplete-search">
-      <form className="search-form" onSubmit={handleSubmit} role="search">
-        <div className="search-input-wrapper">
-          <Search className="search-icon" size={18} aria-hidden="true" />
-          <input
-            {...getInputProps()}
-            className="search-input"
-            placeholder={placeholder}
-            aria-describedby="search-instructions"
-          />
-          <span id="search-instructions" className="sr-only">
-            Type to search. Use arrow keys to navigate suggestions.
-          </span>
-
-          {/* Loading indicator */}
-          {isLoading && (
-            <span className="loading-indicator" aria-hidden="true">
-              <Loader className="animate-spin" size={16} />
-            </span>
-          )}
-
-          {/* Clear button */}
-          {inputValue && (
-            <button
-              type="button"
-              className="clear-button"
-              onClick={handleClear}
-              aria-label="Clear search"
-            >
-              <X size={16} />
-            </button>
-          )}
-        </div>
-      </form>
-
-      {/* Suggestions dropdown */}
-      <ul {...getMenuProps()} className="suggestions-dropdown">
-        {isOpen && displayItems.length > 0 && (
-          <>
-            {/* Show section header for recent searches */}
-            {inputValue.length < minChars && recentSearches.length > 0 && (
-              <li className="suggestions-section">
-                <div className="section-header">
-                  <Clock size={14} aria-hidden="true" />
-                  <span>Recent Searches</span>
-                  <button
-                    type="button"
-                    className="clear-recent"
-                    onClick={() => {
-                      setRecentSearches([]);
-                      localStorage.removeItem('recentSearches');
-                    }}
-                  >
-                    Clear
-                  </button>
-                </div>
-              </li>
-            )}
-
-            {/* Render suggestions */}
-            {displayItems.map((item, index) => (
-              <SuggestionItem
-                key={item.id}
-                item={item}
-                isHighlighted={highlightedIndex === index}
-                query={inputValue}
-                {...getItemProps({ item, index })}
-              />
-            ))}
-          </>
-        )}
-
-        {/* No results message */}
-        {isOpen && inputValue.length >= minChars && !isLoading && suggestions.length === 0 && (
-          <li className="no-suggestions">
-            No suggestions found for "{inputValue}"
-          </li>
-        )}
-
-        {/* Error message */}
-        {error && (
-          <li className="suggestions-error">
-            {error}
-          </li>
-        )}
-      </ul>
-    </div>
-  );
-}
-
-// Suggestion Item Component
-interface SuggestionItemProps {
-  item: Suggestion;
-  isHighlighted: boolean;
-  query: string;
-}
-
-function SuggestionItem({
-  item,
-  isHighlighted,
-  query,
-  ...props
-}: SuggestionItemProps & React.HTMLAttributes<HTMLLIElement>) {
-  return (
-    <li
-      className={`suggestion-item ${isHighlighted ? 'highlighted' : ''}`}
-      {...props}
-    >
-      <div className="suggestion-content">
-        {/* Icon based on type */}
-        <span className="suggestion-icon" aria-hidden="true">
-          {item.type === 'recent' && <Clock size={16} />}
-          {item.type === 'trending' && <TrendingUp size={16} />}
-          {item.type === 'product' && <Search size={16} />}
-        </span>
-
-        {/* Main text with highlighting */}
-        <span className="suggestion-text">
-          <HighlightText text={item.text} highlight={query} />
-
-          {/* Additional metadata */}
-          {item.category && (
-            <span className="suggestion-category">in {item.category}</span>
-          )}
-        </span>
-
-        {/* Result count */}
-        {item.count !== undefined && (
-          <span className="suggestion-count">{item.count}</span>
-        )}
-      </div>
-    </li>
-  );
-}
-
-// Highlight matching text
-interface HighlightTextProps {
-  text: string;
-  highlight: string;
-}
-
-function HighlightText({ text, highlight }: HighlightTextProps) {
-  if (!highlight.trim()) {
-    return <span>{text}</span>;
-  }
-
-  // Escape regex metacharacters so user input can't break the pattern
-  const escaped = highlight.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
-  const regex = new RegExp(`(${escaped})`, 'gi');
-  const parts = text.split(regex);
-
-  return (
-    <>
-      {parts.map((part, index) =>
-        // split() with a capturing group puts matches at odd indices;
-        // testing with a stateful /g regex would skip alternate matches
-        index % 2 === 1 ? (
-          <mark key={index} className="highlight">
-            {part}
-          </mark>
-        ) : (
-          <span key={index}>{part}</span>
-        )
-      )}
-    </>
-  );
-}
-
-// Custom debounce hook
-function useDebounce<T>(value: T, delay: number): T {
-  const [debouncedValue, setDebouncedValue] = useState(value);
-
-  useEffect(() => {
-    const handler = setTimeout(() => {
-      setDebouncedValue(value);
-    }, delay);
-
-    return () => {
-      clearTimeout(handler);
-    };
-  }, [value, delay]);
-
-  return debouncedValue;
-}
-
-// Styles (CSS-in-JS or separate stylesheet)
-const styles = `
-.autocomplete-search {
-  position: relative;
-  width: 100%;
-  max-width: 600px;
-}
-
-.search-form {
-  width: 100%;
-}
-
-.search-input-wrapper {
-  position: relative;
-  display: flex;
-  align-items: center;
-  background: var(--search-input-bg);
-  border: 1px solid var(--search-input-border);
-  border-radius: var(--search-border-radius);
-  padding: 0 12px;
-  transition: all 0.2s ease;
-}
-
-.search-input-wrapper:focus-within {
-  border-color: var(--search-input-focus-border);
-  box-shadow: 0 0 0 3px var(--search-input-focus-ring);
-}
-
-.search-icon {
-  color: var(--search-icon-color);
-  flex-shrink: 0;
-}
-
-.search-input {
-  flex: 1;
-  border: none;
-  background: none;
-  padding: 12px;
-  font-size: 16px;
-  outline: none;
-}
-
-.loading-indicator {
-  margin-left: 8px;
-}
-
-.clear-button {
-  margin-left: 8px;
-  padding: 4px;
-  background: none;
-  border: none;
-  cursor: pointer;
-  color: var(--color-text-secondary);
-  transition: color 0.2s;
-}
-
-.clear-button:hover {
-  color: var(--color-text-primary);
-}
-
-.suggestions-dropdown {
-  position: absolute;
-  top: 100%;
-  left: 0;
-  right: 0;
-  margin-top: 4px;
-  background:
var(--color-white); - border: 1px solid var(--color-border); - border-radius: var(--radius-md); - box-shadow: var(--shadow-lg); - max-height: 400px; - overflow-y: auto; - z-index: 1000; -} - -.suggestions-section { - padding: 8px 12px; - border-bottom: 1px solid var(--color-border); -} - -.section-header { - display: flex; - align-items: center; - gap: 8px; - font-size: 12px; - color: var(--color-text-secondary); -} - -.clear-recent { - margin-left: auto; - background: none; - border: none; - color: var(--color-primary); - cursor: pointer; - font-size: 12px; -} - -.suggestion-item { - padding: 12px; - cursor: pointer; - transition: background 0.1s; -} - -.suggestion-item:hover, -.suggestion-item.highlighted { - background: var(--color-gray-50); -} - -.suggestion-content { - display: flex; - align-items: center; - gap: 12px; -} - -.suggestion-icon { - color: var(--color-text-secondary); - flex-shrink: 0; -} - -.suggestion-text { - flex: 1; -} - -.suggestion-category { - margin-left: 8px; - font-size: 12px; - color: var(--color-text-secondary); -} - -.suggestion-count { - font-size: 12px; - color: var(--color-text-secondary); - background: var(--color-gray-100); - padding: 2px 8px; - border-radius: var(--radius-sm); -} - -.highlight { - background: var(--result-highlight-bg); - color: var(--result-highlight-text); - font-weight: 500; -} - -.no-suggestions, -.suggestions-error { - padding: 16px; - text-align: center; - color: var(--color-text-secondary); -} - -.suggestions-error { - color: var(--color-error); -} - -.animate-spin { - animation: spin 1s linear infinite; -} - -@keyframes spin { - from { transform: rotate(0deg); } - to { transform: rotate(360deg); } -} - -.sr-only { - position: absolute; - width: 1px; - height: 1px; - padding: 0; - margin: -1px; - overflow: hidden; - clip: rect(0, 0, 0, 0); - white-space: nowrap; - border: 0; -} -`; \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/examples/django_filter_backend.py 
b/.claude/skills/implementing-search-filter/examples/django_filter_backend.py
deleted file mode 100644
index a75dc9833..000000000
--- a/.claude/skills/implementing-search-filter/examples/django_filter_backend.py
+++ /dev/null
@@ -1,513 +0,0 @@
-"""
-Django REST Framework filter backend implementation.
-
-This example demonstrates advanced filtering with Django REST Framework,
-including custom filter backends, faceted search, and query optimization.
-"""
-
-from django.db import models
-from django.db.models import Q, Count, Avg, F, Value, CharField
-from django.contrib.postgres.search import (
-    SearchVector, SearchQuery, SearchRank, SearchVectorField
-)
-from django.contrib.postgres.aggregates import ArrayAgg
-from rest_framework import viewsets, filters, status
-from rest_framework.decorators import action
-from rest_framework.response import Response
-from rest_framework.pagination import PageNumberPagination
-from django_filters import rest_framework as df
-from django.core.cache import cache
-from typing import Dict, List, Any
-import json
-import hashlib
-
-
-# Models
-class Product(models.Model):
-    """Product model with search-optimized fields."""
-
-    title = models.CharField(max_length=200, db_index=True)
-    description = models.TextField()
-    category = models.ForeignKey('Category', on_delete=models.CASCADE, related_name='products')
-    brand = models.ForeignKey('Brand', on_delete=models.CASCADE, related_name='products')
-    price = models.DecimalField(max_digits=10, decimal_places=2, db_index=True)
-    rating = models.DecimalField(max_digits=3, decimal_places=2, null=True, blank=True)
-    in_stock = models.BooleanField(default=True, db_index=True)
-    tags = models.ManyToManyField('Tag', related_name='products')
-    created_at = models.DateTimeField(auto_now_add=True, db_index=True)
-    updated_at = models.DateTimeField(auto_now=True)
-
-    # PostgreSQL specific: stored full-text search vector. A SearchVector
-    # expression cannot be a model field, so persist a SearchVectorField and
-    # populate it (e.g. via trigger or post_save) with
-    # SearchVector('title', weight='A') + SearchVector('description', weight='B')
-    search_vector = SearchVectorField(null=True, blank=True)
-
-    class Meta:
indexes = [ - models.Index(fields=['category', 'brand']), - models.Index(fields=['price', '-created_at']), - models.Index(fields=['-rating', 'in_stock']), - ] - - def __str__(self): - return self.title - - -class Category(models.Model): - name = models.CharField(max_length=50, unique=True) - parent = models.ForeignKey('self', null=True, blank=True, on_delete=models.CASCADE) - slug = models.SlugField(unique=True) - - class Meta: - verbose_name_plural = 'Categories' - - -class Brand(models.Model): - name = models.CharField(max_length=50, unique=True) - slug = models.SlugField(unique=True) - - -class Tag(models.Model): - name = models.CharField(max_length=30, unique=True) - - -# Custom Filter Backend -class FacetedSearchBackend(filters.BaseFilterBackend): - """Custom filter backend for faceted search with dynamic counts.""" - - def filter_queryset(self, request, queryset, view): - """Apply filters while maintaining facet counts.""" - - # Get filter parameters - params = request.query_params - - # Text search - query = params.get('q', '').strip() - if query: - queryset = self._apply_text_search(queryset, query) - - # Category filter - categories = params.getlist('category') - if categories: - queryset = queryset.filter(category__slug__in=categories) - - # Brand filter - brands = params.getlist('brand') - if brands: - queryset = queryset.filter(brand__slug__in=brands) - - # Price range - min_price = params.get('min_price') - max_price = params.get('max_price') - if min_price: - queryset = queryset.filter(price__gte=min_price) - if max_price: - queryset = queryset.filter(price__lte=max_price) - - # Stock filter - in_stock = params.get('in_stock') - if in_stock: - queryset = queryset.filter(in_stock=in_stock.lower() == 'true') - - # Rating filter - min_rating = params.get('min_rating') - if min_rating: - queryset = queryset.filter(rating__gte=min_rating) - - return queryset - - def _apply_text_search(self, queryset, query): - """Apply PostgreSQL full-text search.""" - 
from django.db import connection - - if connection.vendor == 'postgresql': - # Use PostgreSQL full-text search - search_query = SearchQuery(query, config='english') - search_vector = SearchVector('title', weight='A') + \ - SearchVector('description', weight='B') - - queryset = queryset.annotate( - search=search_vector, - rank=SearchRank(search_vector, search_query) - ).filter(search=search_query).order_by('-rank') - else: - # Fallback to LIKE queries - queryset = queryset.filter( - Q(title__icontains=query) | - Q(description__icontains=query) - ) - - return queryset - - -# Django Filter -class ProductFilter(df.FilterSet): - """Product filter using django-filter.""" - - q = df.CharFilter(method='search') - category = df.ModelMultipleChoiceFilter( - field_name='category__slug', - to_field_name='slug', - queryset=Category.objects.all() - ) - brand = df.ModelMultipleChoiceFilter( - field_name='brand__slug', - to_field_name='slug', - queryset=Brand.objects.all() - ) - min_price = df.NumberFilter(field_name='price', lookup_expr='gte') - max_price = df.NumberFilter(field_name='price', lookup_expr='lte') - in_stock = df.BooleanFilter() - min_rating = df.NumberFilter(field_name='rating', lookup_expr='gte') - tags = df.ModelMultipleChoiceFilter( - field_name='tags__name', - to_field_name='name', - queryset=Tag.objects.all() - ) - - # Date filters - created_after = df.DateFilter(field_name='created_at', lookup_expr='gte') - created_before = df.DateFilter(field_name='created_at', lookup_expr='lte') - - # Ordering - o = df.OrderingFilter( - fields=( - ('price', 'price'), - ('rating', 'rating'), - ('created_at', 'newest'), - ), - field_labels={ - 'price': 'Price', - '-price': 'Price (high to low)', - 'rating': 'Rating (low to high)', - '-rating': 'Rating (high to low)', - '-created_at': 'Newest first', - } - ) - - def search(self, queryset, name, value): - """Custom search method with relevance scoring.""" - if not value: - return queryset - - # PostgreSQL full-text search - 
from django.db import connection - - if connection.vendor == 'postgresql': - search_query = SearchQuery(value, config='english') - search_vector = SearchVector('title', weight='A') + \ - SearchVector('description', weight='B') - - return queryset.annotate( - rank=SearchRank(search_vector, search_query) - ).filter( - Q(title__icontains=value) | - Q(description__icontains=value) - ).order_by('-rank') - - # Fallback for other databases - return queryset.filter( - Q(title__icontains=value) | - Q(description__icontains=value) - ) - - class Meta: - model = Product - fields = ['q', 'category', 'brand', 'min_price', 'max_price', - 'in_stock', 'min_rating', 'tags'] - - -# Serializers -from rest_framework import serializers - - -class ProductSerializer(serializers.ModelSerializer): - """Product serializer with nested relationships.""" - - category = serializers.StringRelatedField() - brand = serializers.StringRelatedField() - tags = serializers.StringRelatedField(many=True) - - class Meta: - model = Product - fields = ['id', 'title', 'description', 'category', 'brand', - 'price', 'rating', 'in_stock', 'tags', 'created_at'] - - -class FacetSerializer(serializers.Serializer): - """Facet serializer for filter options.""" - - value = serializers.CharField() - label = serializers.CharField() - count = serializers.IntegerField() - - -class SearchResultSerializer(serializers.Serializer): - """Search result with facets.""" - - products = ProductSerializer(many=True) - facets = serializers.DictField(child=FacetSerializer(many=True)) - total = serializers.IntegerField() - page = serializers.IntegerField() - page_size = serializers.IntegerField() - - -# Custom Pagination -class SearchPagination(PageNumberPagination): - """Custom pagination for search results.""" - - page_size = 20 - page_size_query_param = 'page_size' - max_page_size = 100 - - def get_paginated_response(self, data): - """Include additional metadata in response.""" - return Response({ - 'products': data, - 'pagination': 
{ - 'total': self.page.paginator.count, - 'page': self.page.number, - 'page_size': self.get_page_size(self.request), - 'total_pages': self.page.paginator.num_pages, - 'next': self.get_next_link(), - 'previous': self.get_previous_link() - } - }) - - -# ViewSet -class ProductViewSet(viewsets.ReadOnlyModelViewSet): - """Product search and filter viewset.""" - - queryset = Product.objects.all() - serializer_class = ProductSerializer - filter_backends = [ - FacetedSearchBackend, - df.DjangoFilterBackend, - filters.OrderingFilter - ] - filterset_class = ProductFilter - pagination_class = SearchPagination - ordering_fields = ['price', 'rating', 'created_at'] - ordering = ['-created_at'] - - def get_queryset(self): - """Optimize queryset with select/prefetch related.""" - queryset = super().get_queryset() - - # Optimize database queries - queryset = queryset.select_related('category', 'brand') - queryset = queryset.prefetch_related('tags') - - # Add annotations for computed fields - queryset = queryset.annotate( - review_count=Count('reviews', distinct=True), - avg_rating=Avg('reviews__rating') - ) - - return queryset - - def list(self, request, *args, **kwargs): - """Override list to include facets.""" - - # Check cache - cache_key = self._get_cache_key(request) - cached_result = cache.get(cache_key) - - if cached_result: - return Response(cached_result) - - # Get filtered queryset - queryset = self.filter_queryset(self.get_queryset()) - - # Get facets before pagination - facets = self._get_facets(queryset, request) - - # Paginate - page = self.paginate_queryset(queryset) - if page is not None: - serializer = self.get_serializer(page, many=True) - response = self.get_paginated_response(serializer.data) - response.data['facets'] = facets - - # Cache result - cache.set(cache_key, response.data, 300) # 5 minutes - - return response - - serializer = self.get_serializer(queryset, many=True) - return Response({ - 'products': serializer.data, - 'facets': facets - }) - - def 
_get_facets(self, queryset, request): - """Generate facet counts for filters.""" - - facets = {} - - # Category facets - category_facets = queryset.values('category__name', 'category__slug')\ - .annotate(count=Count('id'))\ - .order_by('-count')[:20] - - facets['categories'] = [ - { - 'value': item['category__slug'], - 'label': item['category__name'], - 'count': item['count'] - } - for item in category_facets - ] - - # Brand facets - brand_facets = queryset.values('brand__name', 'brand__slug')\ - .annotate(count=Count('id'))\ - .order_by('-count')[:20] - - facets['brands'] = [ - { - 'value': item['brand__slug'], - 'label': item['brand__name'], - 'count': item['count'] - } - for item in brand_facets - ] - - # Price range facets - price_ranges = [ - (0, 50, 'Under $50'), - (50, 100, '$50-$100'), - (100, 200, '$100-$200'), - (200, 500, '$200-$500'), - (500, None, 'Over $500') - ] - - price_facets = [] - for min_price, max_price, label in price_ranges: - count_query = queryset.filter(price__gte=min_price) - if max_price: - count_query = count_query.filter(price__lt=max_price) - - count = count_query.count() - if count > 0: - price_facets.append({ - 'value': f'{min_price}-{max_price or "inf"}', - 'label': label, - 'count': count - }) - - facets['price_ranges'] = price_facets - - # In stock count - in_stock_count = queryset.filter(in_stock=True).count() - out_of_stock_count = queryset.filter(in_stock=False).count() - - facets['availability'] = [ - {'value': 'true', 'label': 'In Stock', 'count': in_stock_count}, - {'value': 'false', 'label': 'Out of Stock', 'count': out_of_stock_count} - ] - - return facets - - def _get_cache_key(self, request): - """Generate cache key from request parameters.""" - params = dict(request.query_params) - # Sort for consistent hashing - params_str = json.dumps(params, sort_keys=True) - return f'search:{hashlib.md5(params_str.encode()).hexdigest()}' - - @action(detail=False, methods=['get']) - def autocomplete(self, request): - 
"""Autocomplete endpoint for search suggestions.""" - - query = request.query_params.get('q', '').strip() - if len(query) < 2: - return Response({'suggestions': []}) - - # Get suggestions from products - suggestions = Product.objects.filter( - Q(title__icontains=query) | - Q(brand__name__icontains=query) | - Q(category__name__icontains=query) - ).values('title').distinct()[:10] - - # Format response - return Response({ - 'query': query, - 'suggestions': [ - { - 'text': item['title'], - 'type': 'product' - } - for item in suggestions - ] - }) - - @action(detail=False, methods=['get']) - def export(self, request): - """Export search results to CSV.""" - - queryset = self.filter_queryset(self.get_queryset()) - - # Limit export size - queryset = queryset[:1000] - - import csv - from django.http import HttpResponse - - response = HttpResponse(content_type='text/csv') - response['Content-Disposition'] = 'attachment; filename="products.csv"' - - writer = csv.writer(response) - writer.writerow(['ID', 'Title', 'Category', 'Brand', 'Price', 'In Stock']) - - for product in queryset: - writer.writerow([ - product.id, - product.title, - product.category.name, - product.brand.name, - product.price, - product.in_stock - ]) - - return response - - -# Management Command for Search Index -from django.core.management.base import BaseCommand - - -class Command(BaseCommand): - """Management command to rebuild search index.""" - - help = 'Rebuild PostgreSQL search index' - - def handle(self, *args, **options): - from django.db import connection - - with connection.cursor() as cursor: - # Create GIN index for full-text search - cursor.execute(""" - CREATE INDEX IF NOT EXISTS products_search_vector_idx - ON products_product - USING GIN( - to_tsvector('english', - COALESCE(title, '') || ' ' || - COALESCE(description, '') - ) - ) - """) - - self.stdout.write( - self.style.SUCCESS('Successfully created search index') - ) - - -# URLs -from django.urls import path, include -from 
rest_framework.routers import DefaultRouter - -router = DefaultRouter() -router.register('products', ProductViewSet) - -urlpatterns = [ - path('api/', include(router.urls)), -] \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/examples/fastapi_search.py b/.claude/skills/implementing-search-filter/examples/fastapi_search.py deleted file mode 100644 index ccafeb81b..000000000 --- a/.claude/skills/implementing-search-filter/examples/fastapi_search.py +++ /dev/null @@ -1,446 +0,0 @@ -""" -FastAPI search endpoint implementation with validation and caching. - -This example shows how to build a production-ready search API with FastAPI, -including request validation, response caching, and error handling. -""" - -from fastapi import FastAPI, Query, HTTPException, Depends, Request -from fastapi.responses import JSONResponse -from pydantic import BaseModel, Field, validator -from typing import Optional, List, Dict, Any -from datetime import datetime, timedelta -from enum import Enum -import asyncio -import hashlib -import json -import time -from functools import lru_cache - -app = FastAPI(title="Product Search API", version="1.0.0") - - -# Enums and Models -class SortOrder(str, Enum): - """Available sort orders.""" - relevance = "relevance" - price_asc = "price_asc" - price_desc = "price_desc" - newest = "newest" - oldest = "oldest" - rating = "rating" - - -class SearchFilters(BaseModel): - """Search filter model with validation.""" - - categories: Optional[List[str]] = Field(None, max_items=20, description="Product categories") - brands: Optional[List[str]] = Field(None, max_items=20, description="Product brands") - min_price: Optional[float] = Field(None, ge=0, le=1000000, description="Minimum price") - max_price: Optional[float] = Field(None, ge=0, le=1000000, description="Maximum price") - in_stock: Optional[bool] = Field(None, description="Only show in-stock items") - min_rating: Optional[float] = Field(None, ge=1, le=5, 
description="Minimum rating") - - @validator('max_price') - def validate_price_range(cls, v, values): - """Ensure max_price >= min_price.""" - if v is not None and 'min_price' in values and values['min_price'] is not None: - if v < values['min_price']: - raise ValueError('max_price must be greater than or equal to min_price') - return v - - -class SearchRequest(BaseModel): - """Search request model for POST endpoint.""" - - query: Optional[str] = Field(None, min_length=1, max_length=200, description="Search query") - filters: Optional[SearchFilters] = Field(None, description="Search filters") - sort_by: SortOrder = Field(SortOrder.relevance, description="Sort order") - page: int = Field(1, ge=1, le=100, description="Page number") - per_page: int = Field(20, ge=1, le=100, description="Results per page") - include_facets: bool = Field(True, description="Include facet counts") - - -class Product(BaseModel): - """Product model.""" - - id: str - title: str - description: Optional[str] - category: str - brand: str - price: float - rating: Optional[float] - in_stock: bool - image_url: Optional[str] - created_at: datetime - - -class Facet(BaseModel): - """Facet item model.""" - - value: str - count: int - label: Optional[str] = None - - -class SearchResponse(BaseModel): - """Search response model.""" - - products: List[Product] - total: int - page: int - per_page: int - total_pages: int - facets: Optional[Dict[str, List[Facet]]] = None - query_time_ms: float - cached: bool = False - - -# Cache Implementation -class SearchCache: - """Simple in-memory cache for search results.""" - - def __init__(self, ttl: int = 300, max_size: int = 100): - self.cache: Dict[str, tuple] = {} - self.ttl = ttl - self.max_size = max_size - - def get_key(self, params: dict) -> str: - """Generate cache key from search parameters.""" - # Sort keys for consistent hashing - sorted_params = json.dumps(params, sort_keys=True, default=str) - return hashlib.md5(sorted_params.encode()).hexdigest() - - 
def get(self, params: dict) -> Optional[dict]: - """Get cached result if available and not expired.""" - key = self.get_key(params) - - if key in self.cache: - result, timestamp = self.cache[key] - if time.time() - timestamp < self.ttl: - return result - - # Expired, remove from cache - del self.cache[key] - - return None - - def set(self, params: dict, result: dict): - """Cache search result.""" - # Check cache size limit - if len(self.cache) >= self.max_size: - # Remove oldest entry (simple FIFO) - oldest_key = next(iter(self.cache)) - del self.cache[oldest_key] - - key = self.get_key(params) - self.cache[key] = (result, time.time()) - - def clear(self): - """Clear all cached results.""" - self.cache.clear() - - -# Initialize cache -search_cache = SearchCache(ttl=300, max_size=100) - - -# Dependency Injection -async def get_search_service(): - """Dependency to get search service.""" - # In production, this would return your actual search service - # For example, database session, Elasticsearch client, etc. 
- return MockSearchService() - - -# Mock Search Service (replace with actual implementation) -class MockSearchService: - """Mock search service for demonstration.""" - - async def search(self, request: SearchRequest) -> Dict[str, Any]: - """Perform mock search.""" - # Simulate some processing time - await asyncio.sleep(0.1) - - # Mock products - products = [ - { - "id": f"prod_{i}", - "title": f"Product {i}", - "description": f"Description for product {i}", - "category": "Electronics", - "brand": "BrandX", - "price": 100.0 + i * 10, - "rating": 4.5, - "in_stock": True, - "image_url": f"https://example.com/product_{i}.jpg", - "created_at": datetime.utcnow() - } - for i in range(1, 21) - ] - - # Mock facets - facets = { - "categories": [ - {"value": "Electronics", "count": 150}, - {"value": "Computers", "count": 75}, - {"value": "Accessories", "count": 50} - ], - "brands": [ - {"value": "BrandX", "count": 100}, - {"value": "BrandY", "count": 80}, - {"value": "BrandZ", "count": 45} - ], - "price_ranges": [ - {"value": "0-50", "label": "Under $50", "count": 30}, - {"value": "50-100", "label": "$50-$100", "count": 45}, - {"value": "100-200", "label": "$100-$200", "count": 60}, - {"value": "200-inf", "label": "Over $200", "count": 40} - ] - } - - return { - "products": products, - "total": 175, - "facets": facets if request.include_facets else None - } - - -# API Endpoints -@app.get("/api/v1/search", response_model=SearchResponse, summary="Search products (GET)") -async def search_get( - q: Optional[str] = Query(None, min_length=1, max_length=200, description="Search query"), - category: Optional[List[str]] = Query(None, description="Filter by categories"), - brand: Optional[List[str]] = Query(None, description="Filter by brands"), - min_price: Optional[float] = Query(None, ge=0, description="Minimum price"), - max_price: Optional[float] = Query(None, ge=0, description="Maximum price"), - in_stock: Optional[bool] = Query(None, description="Only in-stock items"), - sort: 
SortOrder = Query(SortOrder.relevance, description="Sort order"), - page: int = Query(1, ge=1, le=100, description="Page number"), - per_page: int = Query(20, ge=1, le=100, description="Results per page"), - include_facets: bool = Query(True, description="Include facet counts"), - service: MockSearchService = Depends(get_search_service) -): - """ - Search products using query parameters. - - This endpoint is suitable for simple searches that can be expressed in URL parameters. - """ - start_time = time.time() - - # Build search request - search_request = SearchRequest( - query=q, - filters=SearchFilters( - categories=category, - brands=brand, - min_price=min_price, - max_price=max_price, - in_stock=in_stock - ), - sort_by=sort, - page=page, - per_page=per_page, - include_facets=include_facets - ) - - # Check cache - cache_params = search_request.dict() - cached_result = search_cache.get(cache_params) - - if cached_result: - query_time = (time.time() - start_time) * 1000 - return SearchResponse( - **cached_result, - query_time_ms=query_time, - cached=True - ) - - # Perform search - try: - results = await service.search(search_request) - except Exception as e: - raise HTTPException(status_code=500, detail=f"Search failed: {str(e)}") - - # Calculate pagination - total_pages = (results['total'] + per_page - 1) // per_page - - # Build response - response_data = { - "products": results['products'], - "total": results['total'], - "page": page, - "per_page": per_page, - "total_pages": total_pages, - "facets": results.get('facets') - } - - # Cache result - search_cache.set(cache_params, response_data) - - query_time = (time.time() - start_time) * 1000 - return SearchResponse( - **response_data, - query_time_ms=query_time, - cached=False - ) - - -@app.post("/api/v1/search", response_model=SearchResponse, summary="Search products (POST)") -async def search_post( - request: SearchRequest, - service: MockSearchService = Depends(get_search_service) -): - """ - Search products 
using request body. - - This endpoint is suitable for complex searches with many filters or when the query - might exceed URL length limits. - """ - start_time = time.time() - - # Check cache - cache_params = request.dict() - cached_result = search_cache.get(cache_params) - - if cached_result: - query_time = (time.time() - start_time) * 1000 - return SearchResponse( - **cached_result, - query_time_ms=query_time, - cached=True - ) - - # Perform search - try: - results = await service.search(request) - except Exception as e: - raise HTTPException(status_code=500, detail=f"Search failed: {str(e)}") - - # Calculate pagination - total_pages = (results['total'] + request.per_page - 1) // request.per_page - - # Build response - response_data = { - "products": results['products'], - "total": results['total'], - "page": request.page, - "per_page": request.per_page, - "total_pages": total_pages, - "facets": results.get('facets') - } - - # Cache result - search_cache.set(cache_params, response_data) - - query_time = (time.time() - start_time) * 1000 - return SearchResponse( - **response_data, - query_time_ms=query_time, - cached=False - ) - - -@app.get("/api/v1/autocomplete", summary="Get search suggestions") -async def autocomplete( - q: str = Query(..., min_length=2, max_length=50, description="Query prefix"), - limit: int = Query(10, ge=1, le=20, description="Number of suggestions"), - service: MockSearchService = Depends(get_search_service) -): - """ - Get autocomplete suggestions for search input. - - Returns suggestions based on the provided query prefix. 
- """ - # Mock autocomplete suggestions - suggestions = [ - { - "text": f"{q} suggestion {i}", - "category": "Electronics" if i % 2 == 0 else "Computers", - "type": "product" if i < 5 else "category" - } - for i in range(1, min(limit + 1, 11)) - ] - - return { - "query": q, - "suggestions": suggestions - } - - -@app.delete("/api/v1/cache", summary="Clear search cache") -async def clear_cache(): - """ - Clear the search result cache. - - This endpoint should be protected in production. - """ - search_cache.clear() - return {"message": "Cache cleared successfully"} - - -@app.get("/api/v1/health", summary="Health check") -async def health_check(): - """ - Check if the search service is healthy. - """ - return { - "status": "healthy", - "timestamp": datetime.utcnow(), - "cache_size": len(search_cache.cache), - "version": "1.0.0" - } - - -# Error Handlers -@app.exception_handler(ValueError) -async def value_error_handler(request: Request, exc: ValueError): - """Handle validation errors.""" - return JSONResponse( - status_code=400, - content={ - "error": "Invalid input", - "message": str(exc), - "timestamp": datetime.utcnow().isoformat() - } - ) - - -@app.exception_handler(HTTPException) -async def http_exception_handler(request: Request, exc: HTTPException): - """Handle HTTP exceptions.""" - return JSONResponse( - status_code=exc.status_code, - content={ - "error": "Request failed", - "message": exc.detail, - "timestamp": datetime.utcnow().isoformat() - } - ) - - -# Middleware for logging -@app.middleware("http") -async def log_requests(request: Request, call_next): - """Log all search requests.""" - start_time = time.time() - - # Process request - response = await call_next(request) - - # Log request details - process_time = time.time() - start_time - - # In production, use proper logging - if request.url.path.startswith("/api/v1/search"): - print(f"Search request: {request.url.path}") - print(f"Query params: {request.url.query}") - print(f"Process time: 
{process_time:.3f}s") - - return response - - -if __name__ == "__main__": - import uvicorn - uvicorn.run(app, host="0.0.0.0", port=8000) \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/examples/product-search.tsx b/.claude/skills/implementing-search-filter/examples/product-search.tsx deleted file mode 100644 index 833d0a059..000000000 --- a/.claude/skills/implementing-search-filter/examples/product-search.tsx +++ /dev/null @@ -1,445 +0,0 @@ -/** - * Complete e-commerce product search implementation with filters - * - * Features: - * - Search input with debouncing - * - Multiple filter types (category, price, brand) - * - URL state management - * - Faceted search with counts - * - Responsive design - */ - -import React, { useState, useEffect, useCallback, useMemo } from 'react'; -import { useSearchParams } from 'react-router-dom'; -import { useDebounce } from '../hooks/useDebounce'; -import { Search, X, Filter, ChevronDown } from 'lucide-react'; - -// Types -interface Product { - id: string; - title: string; - description: string; - price: number; - category: string; - brand: string; - rating: number; - imageUrl: string; - inStock: boolean; -} - -interface SearchFilters { - query?: string; - categories?: string[]; - brands?: string[]; - minPrice?: number; - maxPrice?: number; - inStock?: boolean; - sortBy?: string; -} - -interface Facet { - value: string; - count: number; -} - -interface SearchResults { - products: Product[]; - facets: { - categories: Facet[]; - brands: Facet[]; - priceRanges: Facet[]; - }; - total: number; - page: number; - totalPages: number; -} - -// Main Component -export function ProductSearch() { - const [searchParams, setSearchParams] = useSearchParams(); - const [results, setResults] = useState(null); - const [isLoading, setIsLoading] = useState(false); - const [isMobileFilterOpen, setIsMobileFilterOpen] = useState(false); - - // Parse filters from URL - const filters = useMemo(() => { - return { - query: 
searchParams.get('q') || undefined,
-      categories: searchParams.getAll('category'),
-      brands: searchParams.getAll('brand'),
-      minPrice: searchParams.get('min_price')
-        ? parseFloat(searchParams.get('min_price')!)
-        : undefined,
-      maxPrice: searchParams.get('max_price')
-        ? parseFloat(searchParams.get('max_price')!)
-        : undefined,
-      inStock: searchParams.get('in_stock') === 'true',
-      sortBy: searchParams.get('sort') || 'relevance'
-    };
-  }, [searchParams]);
-
-  // Update URL with new filters
-  const updateFilters = useCallback((newFilters: Partial<SearchFilters>) => {
-    const params = new URLSearchParams();
-
-    // Merge with existing filters
-    const merged = { ...filters, ...newFilters };
-
-    // Build URL params
-    if (merged.query) params.set('q', merged.query);
-    merged.categories?.forEach(cat => params.append('category', cat));
-    merged.brands?.forEach(brand => params.append('brand', brand));
-    if (merged.minPrice) params.set('min_price', merged.minPrice.toString());
-    if (merged.maxPrice) params.set('max_price', merged.maxPrice.toString());
-    if (merged.inStock) params.set('in_stock', 'true');
-    if (merged.sortBy && merged.sortBy !== 'relevance') {
-      params.set('sort', merged.sortBy);
-    }
-
-    setSearchParams(params);
-  }, [filters, setSearchParams]);
-
-  // Perform search
-  const performSearch = useCallback(async (searchFilters: SearchFilters) => {
-    setIsLoading(true);
-
-    try {
-      const response = await fetch('/api/search', {
-        method: 'POST',
-        headers: { 'Content-Type': 'application/json' },
-        body: JSON.stringify(searchFilters)
-      });
-
-      if (!response.ok) throw new Error('Search failed');
-
-      const data = await response.json();
-      setResults(data);
-    } catch (error) {
-      console.error('Search error:', error);
-      // Handle error - show toast, etc.
-    } finally {
-      setIsLoading(false);
-    }
-  }, []);
-
-  // Search when filters change
-  useEffect(() => {
-    performSearch(filters);
-  }, [filters, performSearch]);
-
-  return (
-
-      {/* Search Header */}
-      <SearchHeader
-        query={filters.query}
-        onSearch={(query) => updateFilters({ query })}
-        resultCount={results?.total}
-        isLoading={isLoading}
-      />
-
    - {/* Mobile Filter Toggle */} - - - {/* Desktop Filters Sidebar */} - - - {/* Results Area */} -
-          {/* Active Filters */}
-          <ActiveFilters
-            filters={filters}
-            onRemove={(key, value) => {
-              const newFilters = { ...filters };
-              if (key === 'query') {
-                delete newFilters.query;
-              } else if (Array.isArray(newFilters[key])) {
-                newFilters[key] = newFilters[key].filter(v => v !== value);
-              } else {
-                delete newFilters[key];
-              }
-              updateFilters(newFilters);
-            }}
-            onClearAll={() => setSearchParams(new URLSearchParams())}
-          />
-
-          {/* Sort Bar */}
-          <SortBar
-            sortBy={filters.sortBy}
-            onSortChange={(sortBy) => updateFilters({ sortBy })}
-            resultCount={results?.total}
-          />
-
-          {/* Product Grid */}
-          {isLoading ? (
-            <LoadingGrid />
-          ) : results && results.products.length > 0 ? (
-            <ProductGrid products={results.products} />
-          ) : (
-            <NoResults />
-          )}
-
-          {/* Pagination */}
-          {results && results.totalPages > 1 && (
-            <Pagination
-              currentPage={results.page}
-              totalPages={results.totalPages}
-              onPageChange={(page) => updateFilters({ page })}
-            />
-          )}
    -
-
-      {/* Mobile Filter Drawer */}
-      {isMobileFilterOpen && (
-        <MobileFilterDrawer
-          onClose={() => setIsMobileFilterOpen(false)}
-        />
-      )}
-
    - ); -} - -// Search Header Component -function SearchHeader({ query, onSearch, resultCount, isLoading }) { - const [localQuery, setLocalQuery] = useState(query || ''); - const debouncedQuery = useDebounce(localQuery, 300); - - useEffect(() => { - if (debouncedQuery !== query) { - onSearch(debouncedQuery); - } - }, [debouncedQuery, query, onSearch]); - - return ( -
    -
-
-        <input
-          value={localQuery}
-          onChange={(e) => setLocalQuery(e.target.value)}
-          placeholder="Search products..."
-          className="search-input"
-          aria-label="Search products"
-        />
-        {localQuery && (
-
-        )}
-
    - - {resultCount !== undefined && ( -
    - {isLoading ? ( - Searching... - ) : ( - {resultCount.toLocaleString()} results - )} -
    - )} -
    - ); -} - -// Filter Panel Component -function FilterPanel({ filters, facets, onFilterChange, onClearAll }) { - const hasActiveFilters = getActiveFilterCount(filters) > 0; - - return ( -
    -
    -

    Filters

    - {hasActiveFilters && ( - - )} -
    - - {/* Category Filter */} - - {facets?.categories.map(facet => ( - { - const categories = filters.categories || []; - onFilterChange({ - categories: checked - ? [...categories, facet.value] - : categories.filter(c => c !== facet.value) - }); - }} - /> - ))} - - - {/* Price Range */} - - { - onFilterChange({ minPrice: min, maxPrice: max }); - }} - /> - - - {/* Brand Filter */} - - {facets?.brands.map(facet => ( - { - const brands = filters.brands || []; - onFilterChange({ - brands: checked - ? [...brands, facet.value] - : brands.filter(b => b !== facet.value) - }); - }} - /> - ))} - - - {/* Stock Filter */} - - onFilterChange({ inStock: checked })} - /> - -
    - ); -} - -// Helper Components -function FilterSection({ title, children }) { - const [isOpen, setIsOpen] = useState(true); - - return ( -
    - - {isOpen && ( -
    - {children} -
    - )} -
    - ); -} - -function CheckboxFilter({ label, count, checked, onChange }) { - return ( - - ); -} - -function PriceRangeFilter({ min, max, onChange }) { - const [localMin, setLocalMin] = useState(min || ''); - const [localMax, setLocalMax] = useState(max || ''); - - const handleApply = () => { - onChange( - localMin ? parseFloat(localMin) : undefined, - localMax ? parseFloat(localMax) : undefined - ); - }; - - return ( -
    -
    - setLocalMin(e.target.value)} - onBlur={handleApply} - /> - to - setLocalMax(e.target.value)} - onBlur={handleApply} - /> -
    -
    - ); -} - -// Utility Functions -function getActiveFilterCount(filters: SearchFilters): number { - let count = 0; - if (filters.query) count++; - if (filters.categories?.length) count += filters.categories.length; - if (filters.brands?.length) count += filters.brands.length; - if (filters.minPrice || filters.maxPrice) count++; - if (filters.inStock) count++; - return count; -} - -// Additional components would include: -// - ActiveFilters -// - SortBar -// - ProductGrid -// - LoadingGrid -// - NoResults -// - Pagination -// - MobileFilterDrawer - -// These follow similar patterns and would be implemented based on specific UI requirements \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/examples/sqlalchemy_search.py b/.claude/skills/implementing-search-filter/examples/sqlalchemy_search.py deleted file mode 100644 index f2f2d2e46..000000000 --- a/.claude/skills/implementing-search-filter/examples/sqlalchemy_search.py +++ /dev/null @@ -1,392 +0,0 @@ -""" -SQLAlchemy search implementation with dynamic filtering and pagination. - -This example demonstrates building complex search queries with SQLAlchemy, -including full-text search, faceted filtering, and performance optimization. 
-"""
-
-from sqlalchemy import create_engine, Column, Integer, String, Float, Boolean, DateTime, Text, Index, func, and_, or_, text
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker, Query
-from sqlalchemy.dialects.postgresql import TSVECTOR
-from typing import Dict, List, Any, Optional, Tuple
-from datetime import datetime
-
-Base = declarative_base()
-
-
-class Product(Base):
-    """Product model with search-optimized fields."""
-
-    __tablename__ = 'products'
-
-    id = Column(Integer, primary_key=True)
-    title = Column(String(200), nullable=False)
-    description = Column(Text)
-    category = Column(String(50), index=True)
-    brand = Column(String(50), index=True)
-    price = Column(Float, index=True)
-    rating = Column(Float)
-    in_stock = Column(Boolean, default=True, index=True)
-    created_at = Column(DateTime, default=datetime.utcnow, index=True)
-    tags = Column(Text)  # Comma-separated tags
-
-    # PostgreSQL full-text search vector (optional)
-    search_vector = Column(TSVECTOR)
-
-    # Composite indexes for common filter combinations
-    __table_args__ = (
-        Index('idx_category_brand', 'category', 'brand'),
-        Index('idx_price_category', 'price', 'category'),
-        Index('idx_created_desc', created_at.desc()),
-    )
-
-
-class ProductSearcher:
-    """Advanced product search with SQLAlchemy."""
-
-    def __init__(self, session):
-        self.session = session
-
-    def search(
-        self,
-        query: Optional[str] = None,
-        filters: Optional[Dict[str, Any]] = None,
-        sort_by: str = 'relevance',
-        page: int = 1,
-        per_page: int = 20,
-        include_facets: bool = True
-    ) -> Dict[str, Any]:
-        """
-        Perform product search with filters and facets.
- - Args: - query: Search query text - filters: Dictionary of filters to apply - sort_by: Sort order (relevance, price_asc, price_desc, newest, rating) - page: Page number (1-based) - per_page: Results per page - include_facets: Whether to include facet counts - - Returns: - Dictionary with results, facets, and metadata - """ - filters = filters or {} - - # Build base query - base_query = self.session.query(Product) - - # Apply text search - if query: - base_query = self._add_text_search(base_query, query) - - # Apply filters - base_query = self._apply_filters(base_query, filters) - - # Get total count before pagination - total_count = base_query.count() - - # Apply sorting - sorted_query = self._apply_sorting(base_query, sort_by, bool(query)) - - # Apply pagination - paginated_query = self._apply_pagination(sorted_query, page, per_page) - - # Execute query - results = paginated_query.all() - - # Get facets if requested - facets = {} - if include_facets: - facets = self._get_facets(base_query, filters) - - return { - 'results': [self._serialize_product(p) for p in results], - 'total': total_count, - 'page': page, - 'per_page': per_page, - 'total_pages': (total_count + per_page - 1) // per_page, - 'facets': facets - } - - def _add_text_search(self, query: Query, search_term: str) -> Query: - """Add full-text search to query.""" - - # Check if using PostgreSQL - if self.session.bind.dialect.name == 'postgresql': - # Use PostgreSQL full-text search - search_query = func.plainto_tsquery('english', search_term) - - # Create search vector from multiple fields - search_vector = func.to_tsvector( - 'english', - func.coalesce(Product.title, '') + ' ' + - func.coalesce(Product.description, '') + ' ' + - func.coalesce(Product.tags, '') - ) - - # Add search condition and ranking - query = query.filter(search_vector.match(search_query)) - - # Add relevance score for sorting - query = query.add_columns( - func.ts_rank(search_vector, search_query).label('relevance') - ) - else: - 
# Fallback to LIKE for other databases - search_pattern = f'%{search_term}%' - query = query.filter( - or_( - Product.title.ilike(search_pattern), - Product.description.ilike(search_pattern), - Product.tags.ilike(search_pattern) - ) - ) - - return query - - def _apply_filters(self, query: Query, filters: Dict[str, Any]) -> Query: - """Apply filters to query.""" - - # Category filter - if 'categories' in filters and filters['categories']: - query = query.filter(Product.category.in_(filters['categories'])) - - # Brand filter - if 'brands' in filters and filters['brands']: - query = query.filter(Product.brand.in_(filters['brands'])) - - # Price range filter - if 'min_price' in filters: - query = query.filter(Product.price >= filters['min_price']) - if 'max_price' in filters: - query = query.filter(Product.price <= filters['max_price']) - - # Stock filter - if filters.get('in_stock'): - query = query.filter(Product.in_stock == True) - - # Rating filter - if 'min_rating' in filters: - query = query.filter(Product.rating >= filters['min_rating']) - - # Date range filter - if 'date_from' in filters: - query = query.filter(Product.created_at >= filters['date_from']) - if 'date_to' in filters: - query = query.filter(Product.created_at <= filters['date_to']) - - return query - - def _apply_sorting(self, query: Query, sort_by: str, has_search: bool) -> Query: - """Apply sorting to query.""" - - sort_options = { - 'price_asc': Product.price.asc(), - 'price_desc': Product.price.desc(), - 'newest': Product.created_at.desc(), - 'oldest': Product.created_at.asc(), - 'rating': Product.rating.desc(), - } - - if sort_by == 'relevance' and has_search: - # Sort by relevance if text search was performed - if self.session.bind.dialect.name == 'postgresql': - query = query.order_by(text('relevance DESC')) - else: - # Fallback to newest for non-PostgreSQL - query = query.order_by(Product.created_at.desc()) - elif sort_by in sort_options: - query = query.order_by(sort_options[sort_by]) - 
else: - # Default sort - query = query.order_by(Product.created_at.desc()) - - return query - - def _apply_pagination(self, query: Query, page: int, per_page: int) -> Query: - """Apply pagination to query.""" - offset = (page - 1) * per_page - return query.offset(offset).limit(per_page) - - def _get_facets(self, base_query: Query, active_filters: Dict[str, Any]) -> Dict[str, List[Dict]]: - """Get facet counts for filters.""" - facets = {} - - # Category facets - category_query = self._get_base_facet_query(base_query, active_filters, 'categories') - category_facets = category_query.with_entities( - Product.category, - func.count(Product.id).label('count') - ).group_by(Product.category).all() - - facets['categories'] = [ - {'value': cat, 'count': count} - for cat, count in category_facets if cat - ] - - # Brand facets - brand_query = self._get_base_facet_query(base_query, active_filters, 'brands') - brand_facets = brand_query.with_entities( - Product.brand, - func.count(Product.id).label('count') - ).group_by(Product.brand).all() - - facets['brands'] = [ - {'value': brand, 'count': count} - for brand, count in brand_facets if brand - ] - - # Price range facets - facets['price_ranges'] = self._get_price_range_facets(base_query, active_filters) - - # In stock count - in_stock_query = self._get_base_facet_query(base_query, active_filters, 'in_stock') - in_stock_count = in_stock_query.filter(Product.in_stock == True).count() - - facets['availability'] = [ - {'value': 'in_stock', 'count': in_stock_count} - ] - - return facets - - def _get_base_facet_query( - self, - base_query: Query, - active_filters: Dict[str, Any], - exclude_filter: str - ) -> Query: - """ - Get base query for facet counting, excluding the current filter. - This ensures facet counts show what would be available if that filter was removed. 
-        """
-        # Build a fresh query and apply every filter except the one being
-        # counted. Caveat: the text-search condition from base_query is not
-        # re-applied here, so facet counts reflect filters only.
-        if exclude_filter == 'price':
-            exclude_keys = ('min_price', 'max_price')
-        else:
-            exclude_keys = (exclude_filter,)
-        filters_to_apply = {k: v for k, v in active_filters.items() if k not in exclude_keys}
-        return self._apply_filters(self.session.query(Product), filters_to_apply)
-
-    def _get_price_range_facets(self, base_query: Query, active_filters: Dict[str, Any]) -> List[Dict]:
-        """Calculate price range facets."""
-
-        # Define price ranges
-        ranges = [
-            (0, 50, 'Under $50'),
-            (50, 100, '$50 - $100'),
-            (100, 200, '$100 - $200'),
-            (200, 500, '$200 - $500'),
-            (500, None, 'Over $500')
-        ]
-
-        # Get base query without price filters
-        query_without_price = self._get_base_facet_query(
-            base_query,
-            active_filters,
-            'price'
-        )
-
-        facets = []
-        for min_price, max_price, label in ranges:
-            range_query = query_without_price
-            range_query = range_query.filter(Product.price >= min_price)
-            if max_price:
-                range_query = range_query.filter(Product.price < max_price)
-
-            count = range_query.count()
-            if count > 0:
-                facets.append({
-                    'value': f'{min_price}-{max_price or "inf"}',
-                    'label': label,
-                    'count': count
-                })
-
-        return facets
-
-    def _serialize_product(self, product: Product) -> Dict:
-        """Serialize product for API response."""
-        return {
-            'id': product.id,
-            'title': product.title,
-            'description': product.description,
-            'category': product.category,
-            'brand': product.brand,
-            'price': product.price,
-            'rating': product.rating,
-            'in_stock': product.in_stock,
-            'created_at': product.created_at.isoformat() if product.created_at else None,
-            'tags': product.tags.split(',') if product.tags else []
-        }
-
-
-class SearchOptimizer:
-    """Query optimization utilities."""
-
-    @staticmethod
-    def explain_query(session, query: Query) -> str:
-        """Get query execution plan (PostgreSQL)."""
-        if session.bind.dialect.name != 'postgresql':
-            return "EXPLAIN only available for PostgreSQL"
-
-        sql = str(query.statement.compile(compile_kwargs={"literal_binds": True}))
-        result = 
session.execute(f"EXPLAIN ANALYZE {sql}") - return '\n'.join([row[0] for row in result]) - - @staticmethod - def add_search_indexes(engine): - """Create optimized indexes for search.""" - - with engine.connect() as conn: - # Full-text search index (PostgreSQL) - if engine.dialect.name == 'postgresql': - conn.execute(""" - CREATE INDEX IF NOT EXISTS idx_product_search_vector - ON products - USING GIN(to_tsvector('english', - COALESCE(title, '') || ' ' || - COALESCE(description, '') || ' ' || - COALESCE(tags, '') - )) - """) - - # Standard indexes for filtering - conn.execute("CREATE INDEX IF NOT EXISTS idx_products_category ON products(category)") - conn.execute("CREATE INDEX IF NOT EXISTS idx_products_brand ON products(brand)") - conn.execute("CREATE INDEX IF NOT EXISTS idx_products_price ON products(price)") - conn.execute("CREATE INDEX IF NOT EXISTS idx_products_in_stock ON products(in_stock)") - - conn.commit() - - -# Usage Example -if __name__ == '__main__': - # Setup database - engine = create_engine('postgresql://user:pass@localhost/shop') - Session = sessionmaker(bind=engine) - session = Session() - - # Create tables and indexes - Base.metadata.create_all(engine) - SearchOptimizer.add_search_indexes(engine) - - # Initialize searcher - searcher = ProductSearcher(session) - - # Perform search - results = searcher.search( - query='laptop', - filters={ - 'categories': ['Electronics', 'Computers'], - 'min_price': 500, - 'max_price': 2000, - 'in_stock': True - }, - sort_by='price_asc', - page=1, - per_page=20, - include_facets=True - ) - - print(f"Found {results['total']} products") - print(f"Page {results['page']} of {results['total_pages']}") - print(f"Facets: {results['facets']}") \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/outputs.yaml b/.claude/skills/implementing-search-filter/outputs.yaml deleted file mode 100644 index 5140b729d..000000000 --- a/.claude/skills/implementing-search-filter/outputs.yaml +++ /dev/null @@ 
-1,368 +0,0 @@ -skill: "implementing-search-filter" -version: "1.0" -domain: "frontend" - -base_outputs: - - path: "src/components/Search*.{tsx,jsx,ts,js}" - must_contain: - - "search" - - "debounce" - - "onChange" - description: "Search input component with debouncing and clear functionality" - - - path: "src/components/*Filter*.{tsx,jsx,ts,js}" - must_contain: - - "filter" - - "onChange" - description: "Filter components (checkbox, range, dropdown, etc.)" - - - path: "src/hooks/useDebounce.{ts,tsx,js}" - must_contain: - - "useEffect" - - "setTimeout" - - "debounce" - description: "Custom debounce hook for search input optimization" - - - path: "src/hooks/useSearch*.{ts,tsx,js}" - must_contain: - - "search" - - "filter" - - "useState" - description: "Search and filter state management hook" - -conditional_outputs: - maturity: - starter: - - path: "src/components/SimpleSearch.{tsx,jsx}" - must_contain: - - "input" - - "search" - - "onChange" - description: "Basic search input with minimal features" - - - path: "src/components/BasicFilters.{tsx,jsx}" - must_contain: - - "checkbox" - - "filter" - description: "Simple checkbox or dropdown filters" - - intermediate: - - path: "src/components/SearchBar.{tsx,jsx}" - must_contain: - - "debounce" - - "loading" - - "clear" - description: "Search bar with debouncing, loading states, and clear button" - - - path: "src/components/FilterPanel.{tsx,jsx}" - must_contain: - - "filter" - - "facet" - - "count" - description: "Filter panel with facet counts and multiple filter types" - - - path: "src/components/ActiveFilters.{tsx,jsx}" - must_contain: - - "chip" - - "badge" - - "remove" - description: "Active filter chips/badges with remove functionality" - - advanced: - - path: "src/components/AdvancedSearch.{tsx,jsx}" - must_contain: - - "autocomplete" - - "suggestion" - - "downshift" - description: "Advanced search with autocomplete/typeahead suggestions" - - - path: "src/components/FacetedSearch.{tsx,jsx}" - must_contain: - - 
"facet" - - "aggregation" - - "count" - description: "Faceted search with dynamic counts and aggregations" - - - path: "src/components/SearchResults.{tsx,jsx}" - must_contain: - - "result" - - "highlight" - - "pagination" - description: "Search results with highlighting and pagination" - - - path: "src/utils/searchParams.{ts,js}" - must_contain: - - "URLSearchParams" - - "serialize" - - "deserialize" - description: "URL parameter management for shareable search state" - - frontend_framework: - react: - - path: "src/components/*Search*.{tsx,jsx}" - must_contain: - - "useState" - - "useEffect" - description: "React-based search components" - - - path: "src/hooks/useSearch.{ts,tsx}" - must_contain: - - "useState" - - "useCallback" - - "useMemo" - description: "React hooks for search state management" - - vue: - - path: "src/components/*Search*.vue" - must_contain: - - "ref" - - "computed" - - "watch" - description: "Vue-based search components" - - - path: "src/composables/useSearch.{ts,js}" - must_contain: - - "ref" - - "computed" - description: "Vue composables for search functionality" - - angular: - - path: "src/app/components/*-search/*.component.ts" - must_contain: - - "Component" - - "OnInit" - - "FormControl" - description: "Angular search components" - - - path: "src/app/services/search.service.ts" - must_contain: - - "Injectable" - - "Observable" - - "HttpClient" - description: "Angular search service" - - state_management: - redux: - - path: "src/store/search/searchSlice.{ts,js}" - must_contain: - - "createSlice" - - "reducer" - - "actions" - description: "Redux slice for search state" - - - path: "src/store/search/searchThunks.{ts,js}" - must_contain: - - "createAsyncThunk" - - "async" - description: "Redux thunks for async search operations" - - zustand: - - path: "src/stores/searchStore.{ts,js}" - must_contain: - - "create" - - "set" - - "get" - description: "Zustand store for search state" - - context: - - path: "src/contexts/SearchContext.{tsx,jsx}" - 
must_contain: - - "createContext" - - "Provider" - - "useContext" - description: "React Context for search state management" - - styling: - tailwind: - - path: "src/components/*Search*.{tsx,jsx}" - must_contain: - - "className" - - "flex" - description: "Components styled with Tailwind CSS utility classes" - - css_modules: - - path: "src/components/*Search*.module.css" - must_contain: - - ".search" - - ".filter" - description: "CSS modules for search component styling" - - styled_components: - - path: "src/components/*Search*.styled.{ts,tsx}" - must_contain: - - "styled" - - "css" - description: "Styled-components for search UI" - - backend_integration: - rest_api: - - path: "src/api/search.{ts,js}" - must_contain: - - "fetch" - - "api/search" - - "params" - description: "REST API client for search endpoints" - - - path: "backend/api/search.{py,js,ts}" - must_contain: - - "search" - - "filter" - - "query" - description: "Backend search API endpoint" - - graphql: - - path: "src/graphql/queries/search.{ts,js}" - must_contain: - - "query" - - "search" - - "filter" - description: "GraphQL queries for search" - - elasticsearch: - - path: "backend/search/elasticsearch.{py,js}" - must_contain: - - "elasticsearch" - - "query" - - "index" - description: "Elasticsearch integration for full-text search" - - database: - sqlalchemy: - - path: "backend/queries/search_queries.py" - must_contain: - - "select" - - "filter" - - "query" - description: "SQLAlchemy search query builders" - - django_orm: - - path: "backend/views/search_views.py" - must_contain: - - "filter" - - "Q" - - "queryset" - description: "Django ORM search views with filters" - - prisma: - - path: "backend/services/search.{ts,js}" - must_contain: - - "prisma" - - "findMany" - - "where" - description: "Prisma-based search service" - -scaffolding: - - path: "src/types/search.{ts,d.ts}" - reason: "TypeScript type definitions for search interfaces, filters, and API responses" - - - path: 
"src/config/searchConfig.{ts,js,json}" - reason: "Search configuration including debounce timing, pagination defaults, API endpoints" - - - path: "src/utils/queryBuilder.{ts,js}" - reason: "Query string builder for converting filters to URL parameters" - - - path: "src/utils/filterHelpers.{ts,js}" - reason: "Helper functions for filter validation, transformation, and sanitization" - - - path: "tests/search.test.{ts,tsx,js,jsx}" - reason: "Test suite for search functionality, debouncing, and filter logic" - - - path: "tests/filters.test.{ts,tsx,js,jsx}" - reason: "Test suite for filter components and state management" - -metadata: - primary_blueprints: - - "dashboard" - - "crud-api" - - "frontend" - - "data-pipeline" - - secondary_blueprints: - - "api-first" - - "observability" - - contributes_to: - - "Search input components with debouncing" - - "Autocomplete/typeahead interfaces" - - "Filter panels (checkbox, range, dropdown)" - - "Faceted search with dynamic counts" - - "Active filter chips/badges" - - "URL-based filter state management" - - "Backend search APIs (REST/GraphQL)" - - "Database query optimization" - - "Elasticsearch integration" - - "Search result highlighting" - - "Pagination and sorting" - - "Mobile-responsive filter drawers" - - "Accessible search experiences (ARIA, keyboard navigation)" - - common_patterns: - - "Debounced search input (300ms default)" - - "Search state in URL parameters" - - "Client-side filtering for <1000 items" - - "Server-side search for >1000 items" - - "Hybrid approach with optimistic updates" - - "Request cancellation for pending searches" - - "Loading states and skeleton loaders" - - "Empty state handling" - - "Error handling with retry logic" - - "Result caching (300s TTL default)" - - key_libraries: - frontend: - - name: "downshift" - purpose: "Accessible autocomplete primitives" - install: "npm install downshift" - - name: "react-select" - purpose: "Full-featured select/filter component" - install: "npm install 
react-select" - - name: "lodash.debounce" - purpose: "Debounce utility" - install: "npm install lodash.debounce" - - backend_python: - - name: "elasticsearch" - purpose: "Elasticsearch client" - install: "pip install elasticsearch" - - name: "django-filter" - purpose: "Django REST Framework filters" - install: "pip install django-filter" - - name: "sqlalchemy" - purpose: "SQL query builder" - install: "pip install sqlalchemy" - - backend_nodejs: - - name: "@elastic/elasticsearch" - purpose: "Elasticsearch client for Node.js" - install: "npm install @elastic/elasticsearch" - - name: "express-validator" - purpose: "Request validation middleware" - install: "npm install express-validator" - - accessibility_requirements: - - "role=\"search\" for search regions" - - "aria-live regions for result updates" - - "aria-label on filter controls" - - "Keyboard navigation (Tab, Arrow keys, Enter, Escape)" - - "Focus management in autocomplete" - - "Screen reader announcements for filter changes" - - performance_considerations: - - "Debounce search input (300ms recommended)" - - "Cancel pending requests on new input" - - "Index optimization for search columns" - - "Query result caching (5-minute TTL)" - - "Pagination for large result sets" - - "Virtual scrolling for >1000 items" - - "Lazy loading of images" - - "Compression for complex URL state" - - example_use_cases: - - "E-commerce product search with category/price filters" - - "Data table search and column filtering" - - "Document search with full-text indexing" - - "User directory with multi-criteria filters" - - "Job board with location/salary/skills filters" - - "Real estate search with map integration" - - "Content management system search" - - "Log viewer with advanced query syntax" diff --git a/.claude/skills/implementing-search-filter/references/api-design.md b/.claude/skills/implementing-search-filter/references/api-design.md deleted file mode 100644 index 4b9bf2c6c..000000000 --- 
a/.claude/skills/implementing-search-filter/references/api-design.md +++ /dev/null @@ -1,735 +0,0 @@ -# Search API Design Patterns - - -## Table of Contents - -- [RESTful Search Endpoints](#restful-search-endpoints) - - [Basic Search API Design](#basic-search-api-design) - - [Autocomplete API](#autocomplete-api) -- [Advanced Query Parameters](#advanced-query-parameters) - - [Query DSL Support](#query-dsl-support) - - [GraphQL Search Schema](#graphql-search-schema) -- [Pagination Strategies](#pagination-strategies) - - [Offset-Based Pagination](#offset-based-pagination) - - [Cursor-Based Pagination](#cursor-based-pagination) -- [Rate Limiting and Caching](#rate-limiting-and-caching) - - [API Rate Limiting](#api-rate-limiting) - - [Response Caching](#response-caching) -- [Error Handling](#error-handling) - - [Comprehensive Error Responses](#comprehensive-error-responses) -- [API Documentation](#api-documentation) - - [OpenAPI Specification](#openapi-specification) - -## RESTful Search Endpoints - -### Basic Search API Design -```python -from fastapi import FastAPI, Query, HTTPException -from typing import Optional, List, Dict, Any -from pydantic import BaseModel, Field, validator -from datetime import datetime - -app = FastAPI() - -class SearchFilters(BaseModel): - """Validated search filters.""" - category: Optional[List[str]] = Field(None, description="Filter by categories") - brand: Optional[List[str]] = Field(None, description="Filter by brands") - min_price: Optional[float] = Field(None, ge=0, description="Minimum price") - max_price: Optional[float] = Field(None, ge=0, description="Maximum price") - in_stock: Optional[bool] = Field(None, description="Only in-stock items") - tags: Optional[List[str]] = Field(None, description="Filter by tags") - - @validator('max_price') - def validate_price_range(cls, v, values): - if v and 'min_price' in values and values['min_price']: - if v < values['min_price']: - raise ValueError('max_price must be greater than min_price') 
- return v - -class SearchRequest(BaseModel): - """Search request body for POST requests.""" - query: Optional[str] = Field(None, min_length=1, max_length=200) - filters: Optional[SearchFilters] = None - sort_by: Optional[str] = Field('relevance', regex='^(relevance|price_asc|price_desc|newest|rating)$') - page: int = Field(1, ge=1, le=100) - size: int = Field(20, ge=1, le=100) - include_facets: bool = Field(True, description="Include facet counts") - -class SearchResponse(BaseModel): - """Search response structure.""" - total: int - page: int - size: int - items: List[Dict[str, Any]] - facets: Optional[Dict[str, List[Dict]]] = None - query_time_ms: float - - class Config: - json_schema_extra = { - "example": { - "total": 150, - "page": 1, - "size": 20, - "items": [...], - "facets": { - "category": [ - {"value": "Electronics", "count": 45}, - {"value": "Books", "count": 32} - ] - }, - "query_time_ms": 125.5 - } - } - -# GET endpoint for simple searches -@app.get("/api/v1/search", response_model=SearchResponse) -async def search_get( - q: Optional[str] = Query(None, min_length=1, max_length=200, description="Search query"), - category: Optional[List[str]] = Query(None, description="Categories filter"), - brand: Optional[List[str]] = Query(None, description="Brands filter"), - min_price: Optional[float] = Query(None, ge=0, description="Minimum price"), - max_price: Optional[float] = Query(None, ge=0, description="Maximum price"), - in_stock: Optional[bool] = Query(None, description="Stock filter"), - sort: str = Query('relevance', regex='^(relevance|price_asc|price_desc|newest|rating)$'), - page: int = Query(1, ge=1, le=100), - size: int = Query(20, ge=1, le=100) -): - """ - Search products with query and filters. 
- - - **q**: Search query text - - **category**: Filter by categories (multiple allowed) - - **brand**: Filter by brands (multiple allowed) - - **min_price**: Minimum price filter - - **max_price**: Maximum price filter - - **in_stock**: Show only in-stock items - - **sort**: Sort order - - **page**: Page number (1-based) - - **size**: Results per page - """ - - # Build filters - filters = SearchFilters( - category=category, - brand=brand, - min_price=min_price, - max_price=max_price, - in_stock=in_stock - ) - - # Execute search - results = await perform_search( - query=q, - filters=filters, - sort_by=sort, - page=page, - size=size - ) - - return results - -# POST endpoint for complex searches -@app.post("/api/v1/search", response_model=SearchResponse) -async def search_post(request: SearchRequest): - """ - Advanced search with complex filters. - - Accepts a JSON body with search parameters. - Useful for complex queries that exceed URL length limits. - """ - - results = await perform_search( - query=request.query, - filters=request.filters, - sort_by=request.sort_by, - page=request.page, - size=request.size, - include_facets=request.include_facets - ) - - return results -``` - -### Autocomplete API -```python -class AutocompleteResponse(BaseModel): - """Autocomplete suggestions response.""" - suggestions: List[Dict[str, Any]] - query_time_ms: float - -@app.get("/api/v1/autocomplete", response_model=AutocompleteResponse) -async def autocomplete( - q: str = Query(..., min_length=2, max_length=50, description="Query prefix"), - size: int = Query(10, ge=1, le=20, description="Number of suggestions"), - include_categories: bool = Query(False, description="Include category in suggestions") -): - """ - Get autocomplete suggestions for search input. - - Returns suggestions based on partial query match. - Optimized for real-time typeahead functionality. 
- """ - - import time - start_time = time.time() - - # Get suggestions from search backend - suggestions = await get_autocomplete_suggestions( - prefix=q, - size=size, - include_categories=include_categories - ) - - query_time_ms = (time.time() - start_time) * 1000 - - return AutocompleteResponse( - suggestions=suggestions, - query_time_ms=query_time_ms - ) -``` - -## Advanced Query Parameters - -### Query DSL Support -```python -from typing import Union -import json - -class QueryDSL(BaseModel): - """Domain Specific Language for complex queries.""" - - # Boolean operators - must: Optional[List[Union[str, Dict]]] = None - should: Optional[List[Union[str, Dict]]] = None - must_not: Optional[List[Union[str, Dict]]] = None - - # Field-specific queries - fields: Optional[Dict[str, Any]] = None - - # Advanced options - fuzzy: Optional[bool] = False - boost: Optional[Dict[str, float]] = None - minimum_should_match: Optional[int] = None - -@app.post("/api/v1/search/advanced") -async def advanced_search( - query_dsl: QueryDSL, - filters: Optional[SearchFilters] = None, - page: int = Query(1, ge=1), - size: int = Query(20, ge=1, le=100) -): - """ - Execute advanced search with query DSL. - - Supports boolean logic, field-specific queries, and boosting. 
- - Example query DSL: - ```json - { - "must": ["laptop"], - "should": ["gaming", "professional"], - "must_not": ["refurbished"], - "fields": { - "brand": "dell", - "category": "computers" - }, - "fuzzy": true, - "boost": { - "title": 2.0, - "description": 1.5 - } - } - ``` - """ - - # Build and execute complex query - results = await execute_dsl_query( - dsl=query_dsl, - filters=filters, - page=page, - size=size - ) - - return results -``` - -### GraphQL Search Schema -```python -import strawberry -from typing import Optional, List - -@strawberry.type -class Product: - id: str - title: str - description: str - price: float - category: str - brand: str - rating: Optional[float] - in_stock: bool - -@strawberry.type -class SearchResult: - total: int - items: List[Product] - facets: Optional[str] # JSON string of facets - -@strawberry.input -class SearchInput: - query: Optional[str] = None - category: Optional[List[str]] = None - min_price: Optional[float] = None - max_price: Optional[float] = None - sort_by: str = "relevance" - page: int = 1 - size: int = 20 - -@strawberry.type -class Query: - @strawberry.field - async def search(self, input: SearchInput) -> SearchResult: - """GraphQL search endpoint.""" - results = await perform_search( - query=input.query, - filters={ - 'category': input.category, - 'min_price': input.min_price, - 'max_price': input.max_price - }, - sort_by=input.sort_by, - page=input.page, - size=input.size - ) - - return SearchResult( - total=results['total'], - items=[Product(**item) for item in results['items']], - facets=json.dumps(results.get('facets', {})) - ) - -schema = strawberry.Schema(query=Query) -``` - -## Pagination Strategies - -### Offset-Based Pagination -```python -class OffsetPagination: - """Traditional offset-based pagination.""" - - @staticmethod - def paginate( - query, - page: int = 1, - per_page: int = 20, - max_per_page: int = 100 - ): - """Apply offset pagination to query.""" - # Validate inputs - page = max(1, page) - 
-        per_page = min(max_per_page, max(1, per_page))
-
-        # Calculate offset
-        offset = (page - 1) * per_page
-
-        # Get total count
-        total = query.count()
-
-        # Apply pagination
-        items = query.offset(offset).limit(per_page).all()
-
-        # Calculate metadata
-        total_pages = (total + per_page - 1) // per_page
-        has_next = page < total_pages
-        has_prev = page > 1
-
-        return {
-            'items': items,
-            'page': page,
-            'per_page': per_page,
-            'total': total,
-            'total_pages': total_pages,
-            'has_next': has_next,
-            'has_prev': has_prev,
-            'next_page': page + 1 if has_next else None,
-            'prev_page': page - 1 if has_prev else None
-        }
-```
-
-### Cursor-Based Pagination
-```python
-import base64
-import json
-from datetime import datetime
-
-class CursorPagination:
-    """Cursor-based pagination for real-time data."""
-
-    @staticmethod
-    def encode_cursor(position: Dict) -> str:
-        """Encode position as cursor."""
-        cursor_data = json.dumps(position, default=str)
-        return base64.b64encode(cursor_data.encode()).decode()
-
-    @staticmethod
-    def decode_cursor(cursor: str) -> Dict:
-        """Decode cursor to position."""
-        try:
-            cursor_data = base64.b64decode(cursor.encode()).decode()
-            return json.loads(cursor_data)
-        except ValueError:
-            raise ValueError("Invalid cursor")
-
-    @staticmethod
-    def paginate_with_cursor(
-        query,
-        cursor: Optional[str] = None,
-        limit: int = 20,
-        order_by: str = 'created_at'
-    ):
-        """Apply cursor pagination."""
-
-        # Decode cursor if provided
-        if cursor:
-            position = CursorPagination.decode_cursor(cursor)
-            query = query.filter(
-                getattr(Product, order_by) > position[order_by]
-            )
-
-        # Order and limit
-        query = query.order_by(getattr(Product, order_by))
-        items = query.limit(limit + 1).all()
-
-        # Check if there are more items
-        has_next = len(items) > limit
-        if has_next:
-            items = items[:-1]  # Remove extra item
-
-        # Create next cursor
-        next_cursor = None
-        if items and has_next:
-            last_item = items[-1]
-            next_cursor = CursorPagination.encode_cursor({
-                order_by: getattr(last_item,
order_by) - }) - - return { - 'items': items, - 'next_cursor': next_cursor, - 'has_next': has_next - } - -@app.get("/api/v1/search/cursor") -async def search_with_cursor( - q: Optional[str] = None, - cursor: Optional[str] = None, - limit: int = Query(20, ge=1, le=100) -): - """Search with cursor-based pagination.""" - - results = await search_with_cursor_pagination( - query=q, - cursor=cursor, - limit=limit - ) - - return results -``` - -## Rate Limiting and Caching - -### API Rate Limiting -```python -from slowapi import Limiter, _rate_limit_exceeded_handler -from slowapi.util import get_remote_address -from slowapi.errors import RateLimitExceeded - -# Create limiter -limiter = Limiter( - key_func=get_remote_address, - default_limits=["100/minute"] -) - -app.state.limiter = limiter -app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler) - -@app.get("/api/v1/search") -@limiter.limit("10/second") # More restrictive limit for search -async def search_rate_limited( - request: Request, - q: str = Query(...), - page: int = 1, - size: int = 20 -): - """Rate-limited search endpoint.""" - return await perform_search(q, page, size) -``` - -### Response Caching -```python -from fastapi_cache import FastAPICache -from fastapi_cache.decorator import cache -from fastapi_cache.backends.redis import RedisBackend -import redis - -# Initialize cache on startup -@app.on_event("startup") -async def startup(): - redis_client = redis.Redis(host="localhost", port=6379) - FastAPICache.init(RedisBackend(redis_client), prefix="search-cache:") - -@app.get("/api/v1/search") -@cache(expire=300) # Cache for 5 minutes -async def cached_search( - q: str, - category: Optional[List[str]] = None, - page: int = 1, - size: int = 20 -): - """Cached search endpoint.""" - - # Cache key includes all parameters - results = await perform_search( - query=q, - filters={'category': category}, - page=page, - size=size - ) - - return results - -# Custom cache key generation -def 
get_cache_key(func, *args, **kwargs): - """Generate cache key from search parameters.""" - params = { - 'query': kwargs.get('q'), - 'filters': kwargs.get('filters'), - 'page': kwargs.get('page', 1), - 'size': kwargs.get('size', 20) - } - - # Sort and hash parameters - import hashlib - param_str = json.dumps(params, sort_keys=True) - return f"search:{hashlib.md5(param_str.encode()).hexdigest()}" -``` - -## Error Handling - -### Comprehensive Error Responses -```python -from enum import Enum - -class ErrorCode(str, Enum): - INVALID_QUERY = "INVALID_QUERY" - INVALID_FILTERS = "INVALID_FILTERS" - SERVICE_UNAVAILABLE = "SERVICE_UNAVAILABLE" - RATE_LIMITED = "RATE_LIMITED" - INTERNAL_ERROR = "INTERNAL_ERROR" - -class ErrorResponse(BaseModel): - error: ErrorCode - message: str - details: Optional[Dict] = None - timestamp: datetime = Field(default_factory=datetime.utcnow) - -@app.exception_handler(ValueError) -async def value_error_handler(request: Request, exc: ValueError): - return JSONResponse( - status_code=400, - content=ErrorResponse( - error=ErrorCode.INVALID_QUERY, - message=str(exc), - details={"path": request.url.path} - ).dict() - ) - -@app.exception_handler(HTTPException) -async def http_exception_handler(request: Request, exc: HTTPException): - return JSONResponse( - status_code=exc.status_code, - content=ErrorResponse( - error=ErrorCode.INTERNAL_ERROR, - message=exc.detail, - details={"status_code": exc.status_code} - ).dict() - ) - -# Service health check -@app.get("/api/v1/health") -async def health_check(): - """Check search service health.""" - try: - # Verify backend connectivity - await check_elasticsearch_connection() - await check_database_connection() - - return { - "status": "healthy", - "timestamp": datetime.utcnow(), - "services": { - "elasticsearch": "up", - "database": "up", - "cache": "up" - } - } - except Exception as e: - raise HTTPException( - status_code=503, - detail="Search service unavailable" - ) -``` - -## API Documentation - -### 
OpenAPI Specification -```yaml -openapi: 3.0.0 -info: - title: Search API - version: 1.0.0 - description: Product search and filtering API - -paths: - /api/v1/search: - get: - summary: Search products - parameters: - - name: q - in: query - required: false - schema: - type: string - minLength: 1 - maxLength: 200 - description: Search query - - - name: category - in: query - required: false - schema: - type: array - items: - type: string - style: form - explode: true - description: Filter by categories - - - name: min_price - in: query - required: false - schema: - type: number - minimum: 0 - description: Minimum price filter - - - name: max_price - in: query - required: false - schema: - type: number - minimum: 0 - description: Maximum price filter - - - name: page - in: query - required: false - schema: - type: integer - minimum: 1 - default: 1 - description: Page number - - - name: size - in: query - required: false - schema: - type: integer - minimum: 1 - maximum: 100 - default: 20 - description: Results per page - - responses: - 200: - description: Search results - content: - application/json: - schema: - $ref: '#/components/schemas/SearchResponse' - - 400: - description: Invalid parameters - content: - application/json: - schema: - $ref: '#/components/schemas/ErrorResponse' - - 429: - description: Rate limited - - 500: - description: Internal server error - -components: - schemas: - SearchResponse: - type: object - properties: - total: - type: integer - description: Total number of results - page: - type: integer - description: Current page - size: - type: integer - description: Results per page - items: - type: array - items: - $ref: '#/components/schemas/Product' - facets: - type: object - additionalProperties: - type: array - items: - type: object - properties: - value: - type: string - count: - type: integer - - Product: - type: object - properties: - id: - type: string - title: - type: string - description: - type: string - price: - type: number - 
-      category:
-        type: string
-      brand:
-        type: string
-      in_stock:
-        type: boolean
-
-    ErrorResponse:
-      type: object
-      properties:
-        error:
-          type: string
-        message:
-          type: string
-        details:
-          type: object
-        timestamp:
-          type: string
-          format: date-time
-```
\ No newline at end of file
diff --git a/.claude/skills/implementing-search-filter/references/autocomplete-patterns.md b/.claude/skills/implementing-search-filter/references/autocomplete-patterns.md
deleted file mode 100644
index f729f6e77..000000000
--- a/.claude/skills/implementing-search-filter/references/autocomplete-patterns.md
+++ /dev/null
@@ -1,790 +0,0 @@
-# Autocomplete and Typeahead Patterns
-
-## Table of Contents
-
-- [Basic Autocomplete Implementation](#basic-autocomplete-implementation)
-  - [React Autocomplete with Downshift](#react-autocomplete-with-downshift)
-  - [Highlight Matching Text](#highlight-matching-text)
-- [Async Autocomplete with API](#async-autocomplete-with-api)
-  - [Debounced API Autocomplete](#debounced-api-autocomplete)
-- [Advanced Autocomplete Features](#advanced-autocomplete-features)
-  - [Multi-Section Autocomplete](#multi-section-autocomplete)
-  - [Recent Searches and Suggestions](#recent-searches-and-suggestions)
-- [Search-as-you-type Implementation](#search-as-you-type-implementation)
-  - [Real-time Search Results](#real-time-search-results)
-- [Performance Optimization](#performance-optimization)
-  - [Virtual Scrolling for Large Lists](#virtual-scrolling-for-large-lists)
-  - [Memoized Filtering](#memoized-filtering)
-- [Accessibility Features](#accessibility-features)
-  - [ARIA Live Regions](#aria-live-regions)
-
-## Basic Autocomplete Implementation
-
-### React Autocomplete with Downshift
-```tsx
-import { useCombobox } from 'downshift';
-import { useState, useMemo } from 'react';
-
-interface AutocompleteProps<T> {
-  items: T[];
-  onSelect: (item: T | null) => void;
-  itemToString: (item: T | null) => string;
-  placeholder?: string;
-  filterFunction?: (items: T[], inputValue: string) => T[];
-}
-
-export function Autocomplete<T>({
-  items,
-  onSelect,
-  itemToString,
-  placeholder = 'Type to search...',
-  filterFunction
-}: AutocompleteProps<T>) {
-  const [inputItems, setInputItems] = useState(items);
-
-  const defaultFilter = (items: T[], inputValue: string) => {
-    return items.filter(item =>
-      itemToString(item)
-        .toLowerCase()
-        .includes(inputValue.toLowerCase())
-    );
-  };
-
-  const {
-    isOpen,
-    getToggleButtonProps,
-    getLabelProps,
-    getMenuProps,
-    getInputProps,
-    highlightedIndex,
-    getItemProps,
-    selectedItem,
-    inputValue
-  } = useCombobox({
-    items: inputItems,
-    itemToString,
-    onInputValueChange: ({ inputValue }) => {
-      const filterFn = filterFunction || defaultFilter;
-      setInputItems(filterFn(items, inputValue || ''));
-    },
-    onSelectedItemChange: ({ selectedItem }) => {
-      onSelect(selectedItem || null);
-    }
-  });
-
-  return (
-    <div className="autocomplete">
-      <label {...getLabelProps()}>Search</label>
-      <div>
-        <input {...getInputProps()} placeholder={placeholder} />
-        <button {...getToggleButtonProps()} aria-label="toggle menu">
-          &#8595;
-        </button>
-      </div>
-      <ul {...getMenuProps()}>
-        {isOpen && inputItems.length > 0 && (
-          inputItems.map((item, index) => (
-            <li
-              key={`${itemToString(item)}-${index}`}
-              style={highlightedIndex === index ? { backgroundColor: '#eee' } : {}}
-              {...getItemProps({ item, index })}
-            >
-              <HighlightMatch text={itemToString(item)} query={inputValue} />
-            </li>
-          ))
-        )}
-        {isOpen && inputItems.length === 0 && (
-          <li className="no-results">
-            No results found for "{inputValue}"
-          </li>
-        )}
-      </ul>
-    </div>
-  );
-}
-```
-
-### Highlight Matching Text
-```tsx
-interface HighlightMatchProps {
-  text: string;
-  query: string;
-}
-
-export function HighlightMatch({ text, query }: HighlightMatchProps) {
-  if (!query) return <>{text}</>;
-
-  const parts = text.split(new RegExp(`(${query})`, 'gi'));
-
-  return (
-    <>
-      {parts.map((part, index) =>
-        part.toLowerCase() === query.toLowerCase() ? (
-          <mark key={index}>
-            {part}
-          </mark>
-        ) : (
-          <span key={index}>{part}</span>
-        )
-      )}
-    </>
-  );
-}
-```
-
-## Async Autocomplete with API
-
-### Debounced API Autocomplete
-```tsx
-import { useState, useEffect, useCallback } from 'react';
-import { debounce } from 'lodash';
-
-interface AsyncAutocompleteProps {
-  fetchSuggestions: (query: string) => Promise<string[]>;
-  onSelect: (value: string) => void;
-  debounceMs?: number;
-  minChars?: number;
-}
-
-export function AsyncAutocomplete({
-  fetchSuggestions,
-  onSelect,
-  debounceMs = 300,
-  minChars = 2
-}: AsyncAutocompleteProps) {
-  const [inputValue, setInputValue] = useState('');
-  const [suggestions, setSuggestions] = useState<string[]>([]);
-  const [isLoading, setIsLoading] = useState(false);
-  const [isOpen, setIsOpen] = useState(false);
-  const [selectedIndex, setSelectedIndex] = useState(-1);
-
-  // Debounced fetch function
-  const debouncedFetch = useCallback(
-    debounce(async (query: string) => {
-      if (query.length < minChars) {
-        setSuggestions([]);
-        setIsLoading(false);
-        return;
-      }
-
-      setIsLoading(true);
-      try {
-        const results = await fetchSuggestions(query);
-        setSuggestions(results);
-        setIsOpen(true);
-      } catch (error) {
-        console.error('Failed to fetch suggestions:', error);
-        setSuggestions([]);
-      } finally {
-        setIsLoading(false);
-      }
-    }, debounceMs),
-    [fetchSuggestions, minChars]
-  );
-
-  // Fetch suggestions when input changes
-  useEffect(() => {
-    debouncedFetch(inputValue);
-    return () => debouncedFetch.cancel();
-  }, [inputValue, debouncedFetch]);
-
-  // Keyboard navigation
-  const handleKeyDown = (e: React.KeyboardEvent) => {
-    if (!isOpen || suggestions.length === 0)
return; - - switch (e.key) { - case 'ArrowDown': - e.preventDefault(); - setSelectedIndex(prev => - prev < suggestions.length - 1 ? prev + 1 : 0 - ); - break; - - case 'ArrowUp': - e.preventDefault(); - setSelectedIndex(prev => - prev > 0 ? prev - 1 : suggestions.length - 1 - ); - break; - - case 'Enter': - e.preventDefault(); - if (selectedIndex >= 0) { - const selected = suggestions[selectedIndex]; - setInputValue(selected); - onSelect(selected); - setIsOpen(false); - } - break; - - case 'Escape': - setIsOpen(false); - setSelectedIndex(-1); - break; - } - }; - - return ( -
-    <div className="async-autocomplete">
-      <div className="input-wrapper">
-        <input
-          value={inputValue}
-          onChange={(e) => setInputValue(e.target.value)}
-          onKeyDown={handleKeyDown}
-          onFocus={() => suggestions.length > 0 && setIsOpen(true)}
-          onBlur={() => setTimeout(() => setIsOpen(false), 200)}
-          placeholder="Start typing to search..."
-          aria-autocomplete="list"
-          aria-expanded={isOpen}
-          aria-controls="suggestions-list"
-          aria-activedescendant={
-            selectedIndex >= 0 ? `suggestion-${selectedIndex}` : undefined
-          }
-        />
-
-        {isLoading && (
-          <div className="loading-indicator">
-            <span className="spinner" />
-          </div>
-        )}
-      </div>
-
-      {isOpen && suggestions.length > 0 && (
-        <ul id="suggestions-list" role="listbox">
-          {suggestions.map((suggestion, index) => (
-            <li
-              key={suggestion}
-              id={`suggestion-${index}`}
-              role="option"
-              aria-selected={index === selectedIndex}
-              onMouseEnter={() => setSelectedIndex(index)}
-              onClick={() => {
-                setInputValue(suggestion);
-                onSelect(suggestion);
-                setIsOpen(false);
-              }}
-            >
-              <HighlightMatch text={suggestion} query={inputValue} />
-            </li>
-          ))}
-        </ul>
-      )}
-    </div>
    - ); -} -``` - -## Advanced Autocomplete Features - -### Multi-Section Autocomplete -```tsx -interface Section { - title: string; - items: T[]; -} - -interface MultiSectionAutocompleteProps { - sections: Section[]; - onSelect: (item: T) => void; - itemToString: (item: T) => string; - renderItem?: (item: T, isHighlighted: boolean) => React.ReactNode; -} - -export function MultiSectionAutocomplete({ - sections, - onSelect, - itemToString, - renderItem -}: MultiSectionAutocompleteProps) { - const [inputValue, setInputValue] = useState(''); - const [highlightedSection, setHighlightedSection] = useState(0); - const [highlightedItem, setHighlightedItem] = useState(0); - - // Filter sections based on input - const filteredSections = useMemo(() => { - if (!inputValue) return sections; - - return sections - .map(section => ({ - ...section, - items: section.items.filter(item => - itemToString(item) - .toLowerCase() - .includes(inputValue.toLowerCase()) - ) - })) - .filter(section => section.items.length > 0); - }, [sections, inputValue, itemToString]); - - // Navigate through sections and items - const handleKeyDown = (e: React.KeyboardEvent) => { - // Implementation of keyboard navigation - // through sections and items - }; - - return ( -
-    <div className="multi-section-autocomplete">
-      <input
-        value={inputValue}
-        onChange={(e) => setInputValue(e.target.value)}
-        onKeyDown={handleKeyDown}
-        placeholder="Search..."
-      />
-
-      {inputValue && filteredSections.length > 0 && (
-        <div className="sections">
-          {filteredSections.map((section, sectionIndex) => (
-            <div key={section.title}>
-              <div className="section-title">{section.title}</div>
-
-              <ul>
-                {section.items.map((item, itemIndex) => {
-                  const isHighlighted =
-                    sectionIndex === highlightedSection &&
-                    itemIndex === highlightedItem;
-
-                  return (
-                    <li
-                      key={itemToString(item)}
-                      className={isHighlighted ? 'highlighted' : ''}
-                      onClick={() => onSelect(item)}
-                    >
-                      {renderItem ? (
-                        renderItem(item, isHighlighted)
-                      ) : (
-                        <HighlightMatch
-                          text={itemToString(item)}
-                          query={inputValue}
-                        />
-                      )}
-                    </li>
-                  );
-                })}
-              </ul>
-            </div>
-          ))}
-        </div>
-      )}
-    </div>
    - ); -} -``` - -### Recent Searches and Suggestions -```tsx -interface SmartAutocompleteProps { - fetchSuggestions: (query: string) => Promise; - recentSearches: string[]; - popularSearches: string[]; - onSelect: (value: string) => void; - onClearRecent: () => void; -} - -export function SmartAutocomplete({ - fetchSuggestions, - recentSearches, - popularSearches, - onSelect, - onClearRecent -}: SmartAutocompleteProps) { - const [inputValue, setInputValue] = useState(''); - const [suggestions, setSuggestions] = useState([]); - const [showInitial, setShowInitial] = useState(true); - - const sections = useMemo(() => { - const result = []; - - if (showInitial && !inputValue) { - if (recentSearches.length > 0) { - result.push({ - title: 'Recent Searches', - items: recentSearches, - icon: '🕐', - clearable: true - }); - } - - if (popularSearches.length > 0) { - result.push({ - title: 'Trending', - items: popularSearches, - icon: '🔥', - clearable: false - }); - } - } else if (suggestions.length > 0) { - result.push({ - title: 'Suggestions', - items: suggestions, - icon: '🔍', - clearable: false - }); - } - - return result; - }, [showInitial, inputValue, recentSearches, popularSearches, suggestions]); - - return ( -
-    <div className="smart-autocomplete">
-      <input
-        value={inputValue}
-        onChange={(e) => {
-          setInputValue(e.target.value);
-          setShowInitial(false);
-        }}
-        onFocus={() => setShowInitial(true)}
-        placeholder="Search or select from suggestions..."
-      />
-
-      {sections.length > 0 && (
-        <div className="suggestion-sections">
-          {sections.map((section) => (
-            <div key={section.title}>
-              <div className="section-header">
-                <span>{section.icon}</span>
-                <span>{section.title}</span>
-                {section.clearable && (
-                  <button onClick={onClearRecent}>Clear</button>
-                )}
-              </div>
-
-              <ul>
-                {section.items.map((item) => (
-                  <li
-                    key={item}
-                    onClick={() => onSelect(item)}
-                    className="smart-item"
-                  >
-                    {item}
-                  </li>
-                ))}
-              </ul>
-            </div>
-          ))}
-        </div>
-      )}
-    </div>
    - ); -} -``` - -## Search-as-you-type Implementation - -### Real-time Search Results -```tsx -interface SearchAsYouTypeProps { - searchFunction: (query: string) => Promise; - renderResult: (result: SearchResult) => React.ReactNode; - minChars?: number; - debounceMs?: number; -} - -interface SearchResult { - id: string; - title: string; - description: string; - category: string; - url: string; -} - -export function SearchAsYouType({ - searchFunction, - renderResult, - minChars = 2, - debounceMs = 200 -}: SearchAsYouTypeProps) { - const [query, setQuery] = useState(''); - const [results, setResults] = useState([]); - const [isSearching, setIsSearching] = useState(false); - const [showResults, setShowResults] = useState(false); - - const performSearch = useCallback( - debounce(async (searchQuery: string) => { - if (searchQuery.length < minChars) { - setResults([]); - setIsSearching(false); - return; - } - - setIsSearching(true); - try { - const searchResults = await searchFunction(searchQuery); - setResults(searchResults); - setShowResults(true); - } catch (error) { - console.error('Search failed:', error); - setResults([]); - } finally { - setIsSearching(false); - } - }, debounceMs), - [searchFunction, minChars] - ); - - useEffect(() => { - performSearch(query); - }, [query, performSearch]); - - return ( -
-    <div className="search-as-you-type">
-      <div className="search-input">
-        <input
-          value={query}
-          onChange={(e) => setQuery(e.target.value)}
-          placeholder="Start typing to search..."
-          aria-label="Search"
-          aria-describedby="search-status"
-        />
-
-        {isSearching && (
-          <span id="search-status" role="status">
-            Searching...
-          </span>
-        )}
-      </div>
-
-      {showResults && results.length > 0 && (
-        <div className="search-results">
-          <div className="results-summary">
-            Found {results.length} results for "{query}"
-          </div>
-
-          <div className="results-list">
-            {results.map(result => (
-              <div key={result.id}>
-                {renderResult(result)}
-              </div>
-            ))}
-          </div>
-        </div>
-      )}
-
-      {showResults && results.length === 0 && !isSearching && query.length >= minChars && (
-        <div className="no-results">
-          No results found for "{query}"
-        </div>
-      )}
-    </div>
    - ); -} -``` - -## Performance Optimization - -### Virtual Scrolling for Large Lists -```tsx -import { FixedSizeList as List } from 'react-window'; - -interface VirtualAutocompleteProps { - items: string[]; - itemHeight?: number; - maxHeight?: number; - onSelect: (item: string) => void; -} - -export function VirtualAutocomplete({ - items, - itemHeight = 35, - maxHeight = 300, - onSelect -}: VirtualAutocompleteProps) { - const [inputValue, setInputValue] = useState(''); - - const filteredItems = useMemo(() => { - if (!inputValue) return items; - return items.filter(item => - item.toLowerCase().includes(inputValue.toLowerCase()) - ); - }, [items, inputValue]); - - const Row = ({ index, style }: { index: number; style: React.CSSProperties }) => ( -
-    <div
-      style={style}
-      className="virtual-item"
-      onClick={() => onSelect(filteredItems[index])}
-    >
-      <HighlightMatch text={filteredItems[index]} query={inputValue} />
-    </div>
-  );
-
-  return (
-    <div className="virtual-autocomplete">
-      <input
-        value={inputValue}
-        onChange={(e) => setInputValue(e.target.value)}
-        placeholder="Search from thousands of items..."
-      />
-
-      {filteredItems.length > 0 && (
-        <List
-          height={Math.min(maxHeight, filteredItems.length * itemHeight)}
-          itemCount={filteredItems.length}
-          itemSize={itemHeight}
-          width="100%"
-        >
-          {Row}
-        </List>
-      )}
-    </div>
    - ); -} -``` - -### Memoized Filtering -```tsx -import { useMemo } from 'react'; - -function useFuzzySearch( - items: T[], - searchQuery: string, - options: { - keys: string[]; - threshold?: number; - includeScore?: boolean; - } -) { - return useMemo(() => { - if (!searchQuery) return items; - - // Simple fuzzy matching implementation - const fuzzyMatch = (str: string, pattern: string) => { - pattern = pattern.toLowerCase(); - str = str.toLowerCase(); - - let patternIdx = 0; - let strIdx = 0; - let score = 0; - - while (patternIdx < pattern.length && strIdx < str.length) { - if (pattern[patternIdx] === str[strIdx]) { - score++; - patternIdx++; - } - strIdx++; - } - - return { - matched: patternIdx === pattern.length, - score: score / pattern.length - }; - }; - - return items - .map(item => { - const scores = options.keys.map(key => { - const value = String(item[key]); - return fuzzyMatch(value, searchQuery); - }); - - const bestMatch = scores.reduce((best, current) => - current.score > best.score ? current : best - ); - - return { - item, - ...bestMatch - }; - }) - .filter(result => result.matched) - .sort((a, b) => b.score - a.score) - .map(result => options.includeScore ? result : result.item); - }, [items, searchQuery, options]); -} -``` - -## Accessibility Features - -### ARIA Live Regions -```tsx -function AccessibleAutocomplete() { - const [results, setResults] = useState([]); - const [announcement, setAnnouncement] = useState(''); - - useEffect(() => { - // Announce results to screen readers - if (results.length > 0) { - setAnnouncement(`${results.length} suggestions available`); - } else { - setAnnouncement('No suggestions available'); - } - }, [results]); - - return ( - <> -
-      <div aria-live="polite" aria-atomic="true" className="sr-only">
-        {announcement}
-      </div>
-
-      <div
-        role="combobox"
-        aria-expanded={results.length > 0}
-        aria-haspopup="listbox"
-        aria-owns="suggestions"
-      >
-        <input
-          aria-autocomplete="list"
-          aria-controls="suggestions"
-          aria-describedby="search-instructions"
-        />
-
-        <span id="search-instructions" className="sr-only">
-          Type to search. Use arrow keys to navigate suggestions.
-        </span>
-
-        <ul id="suggestions" role="listbox">
-          {results.map((result, index) => (
-            <li key={result} id={`result-${index}`} role="option">
-              {result}
-            </li>
-          ))}
-        </ul>
-      </div>
    - - ); -} -``` \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/references/database-querying.md b/.claude/skills/implementing-search-filter/references/database-querying.md deleted file mode 100644 index d8838210f..000000000 --- a/.claude/skills/implementing-search-filter/references/database-querying.md +++ /dev/null @@ -1,561 +0,0 @@ -# Database Querying Patterns - - -## Table of Contents - -- [SQLAlchemy Dynamic Queries](#sqlalchemy-dynamic-queries) - - [Basic Filter Building](#basic-filter-building) - - [Advanced Text Search with PostgreSQL](#advanced-text-search-with-postgresql) - - [Faceted Search with Aggregations](#faceted-search-with-aggregations) -- [Django ORM Patterns](#django-orm-patterns) - - [Django Filter Backend](#django-filter-backend) - - [Django Full-Text Search](#django-full-text-search) -- [Query Optimization](#query-optimization) - - [Index Strategies](#index-strategies) - - [Query Performance Monitoring](#query-performance-monitoring) - - [Query Result Caching](#query-result-caching) -- [Security Considerations](#security-considerations) - - [SQL Injection Prevention](#sql-injection-prevention) - -## SQLAlchemy Dynamic Queries - -### Basic Filter Building -```python -from sqlalchemy.orm import Session -from sqlalchemy import and_, or_, func -from typing import Dict, List, Any, Optional - -class SearchQueryBuilder: - """Build dynamic SQLAlchemy queries from search parameters.""" - - def __init__(self, model, session: Session): - self.model = model - self.session = session - self.query = session.query(model) - - def add_text_search(self, search_term: str, columns: List[str]): - """Add text search across multiple columns.""" - if not search_term: - return self - - search_term = f"%{search_term}%" - conditions = [] - - for column in columns: - if hasattr(self.model, column): - conditions.append( - getattr(self.model, column).ilike(search_term) - ) - - if conditions: - self.query = 
self.query.filter(or_(*conditions)) - - return self - - def add_filters(self, filters: Dict[str, Any]): - """Add exact match filters.""" - for key, value in filters.items(): - if value is not None and hasattr(self.model, key): - if isinstance(value, list): - # IN clause for multiple values - self.query = self.query.filter( - getattr(self.model, key).in_(value) - ) - else: - # Exact match - self.query = self.query.filter( - getattr(self.model, key) == value - ) - - return self - - def add_range_filter(self, column: str, min_val: Any, max_val: Any): - """Add range filter (e.g., price range).""" - if hasattr(self.model, column): - if min_val is not None: - self.query = self.query.filter( - getattr(self.model, column) >= min_val - ) - if max_val is not None: - self.query = self.query.filter( - getattr(self.model, column) <= max_val - ) - - return self - - def add_sorting(self, sort_by: str, order: str = 'asc'): - """Add sorting to query.""" - if hasattr(self.model, sort_by): - column = getattr(self.model, sort_by) - if order == 'desc': - self.query = self.query.order_by(column.desc()) - else: - self.query = self.query.order_by(column.asc()) - - return self - - def paginate(self, page: int = 1, per_page: int = 20): - """Add pagination.""" - offset = (page - 1) * per_page - self.query = self.query.offset(offset).limit(per_page) - return self - - def execute(self): - """Execute the query and return results.""" - return self.query.all() - - def count(self): - """Get total count without pagination.""" - return self.query.count() -``` - -### Advanced Text Search with PostgreSQL -```python -from sqlalchemy import text, func -from sqlalchemy.dialects.postgresql import TSVECTOR - -class PostgreSQLSearch: - """Full-text search using PostgreSQL.""" - - @staticmethod - def create_search_vector(model): - """Create tsvector index for full-text search.""" - # Add this to your model - search_vector = func.to_tsvector( - 'english', - func.coalesce(model.title, '') + ' ' + - 
func.coalesce(model.description, '') + ' ' + - func.coalesce(model.tags, '') - ) - return search_vector - - def search_products(self, session: Session, query: str, filters: Dict = None): - """Perform full-text search with ranking.""" - from models import Product - - # Create tsquery - search_query = func.plainto_tsquery('english', query) - - # Build base query with ranking - q = session.query( - Product, - func.ts_rank( - func.to_tsvector('english', Product.search_text), - search_query - ).label('rank') - ).filter( - func.to_tsvector('english', Product.search_text).match(search_query) - ) - - # Add additional filters - if filters: - for key, value in filters.items(): - if hasattr(Product, key): - q = q.filter(getattr(Product, key) == value) - - # Order by relevance - q = q.order_by(text('rank DESC')) - - return q.all() - - def create_search_index(self, session: Session): - """Create GIN index for better performance.""" - sql = """ - CREATE INDEX idx_product_search_vector - ON products - USING GIN (to_tsvector('english', - COALESCE(title, '') || ' ' || - COALESCE(description, '') || ' ' || - COALESCE(tags, '') - )); - """ - session.execute(text(sql)) - session.commit() -``` - -### Faceted Search with Aggregations -```python -from sqlalchemy import func, distinct - -class FacetedSearch: - """Generate facets with counts for filters.""" - - def get_facets(self, session: Session, base_filters: Dict = None): - """Get available facets with counts.""" - from models import Product - - facets = {} - - # Base query with existing filters - base_query = session.query(Product) - if base_filters: - for key, value in base_filters.items(): - if key != 'category': # Don't apply the facet we're counting - base_query = base_query.filter( - getattr(Product, key) == value - ) - - # Category facet - category_facets = base_query.with_entities( - Product.category, - func.count(Product.id).label('count') - ).group_by(Product.category).all() - - facets['category'] = [ - {'value': cat, 
'count': count}
-            for cat, count in category_facets
-        ]
-
-        # Brand facet
-        brand_facets = base_query.with_entities(
-            Product.brand,
-            func.count(Product.id).label('count')
-        ).group_by(Product.brand).all()
-
-        facets['brand'] = [
-            {'value': brand, 'count': count}
-            for brand, count in brand_facets
-        ]
-
-        # Price range facet
-        price_ranges = [
-            (0, 50, 'Under $50'),
-            (50, 100, '$50 - $100'),
-            (100, 200, '$100 - $200'),
-            (200, None, 'Over $200')
-        ]
-
-        facets['price_range'] = []
-        for min_price, max_price, label in price_ranges:
-            q = base_query
-            q = q.filter(Product.price >= min_price)
-            if max_price:
-                q = q.filter(Product.price < max_price)
-
-            count = q.count()
-            if count > 0:
-                facets['price_range'].append({
-                    'value': f"{min_price}-{max_price or 'inf'}",
-                    'label': label,
-                    'count': count
-                })
-
-        return facets
-```
-
-## Django ORM Patterns
-
-### Django Filter Backend
-```python
-from django.db.models import Q, Count, Avg
-from django_filters import FilterSet, CharFilter, RangeFilter
-from django_filters.rest_framework import DjangoFilterBackend
-from rest_framework import filters, viewsets
-
-from products.models import Product
-
-class ProductFilter(FilterSet):
-    """Django filter for product search."""
-
-    search = CharFilter(method='search_filter')
-    price = RangeFilter()
-    category = CharFilter(field_name='category__name', lookup_expr='iexact')
-    brand = CharFilter(field_name='brand', lookup_expr='icontains')
-
-    class Meta:
-        model = Product
-        fields = ['search', 'price', 'category', 'brand', 'in_stock']
-
-    def search_filter(self, queryset, name, value):
-        """Custom search across multiple fields."""
-        return queryset.filter(
-            Q(title__icontains=value) |
-            Q(description__icontains=value) |
-            Q(tags__icontains=value)
-        )
-
-class ProductViewSet(viewsets.ModelViewSet):
-    """ViewSet with search and filtering."""
-
-    queryset = Product.objects.all()
-    serializer_class = ProductSerializer
-    filter_backends = [
-        filters.SearchFilter,
-        filters.OrderingFilter,
-        DjangoFilterBackend
-    ]
-    filterset_class = ProductFilter
-    search_fields = ['title', 'description', 'tags']
-    ordering_fields = ['price', 'created_at', 'rating']
-    ordering = ['-created_at']  # Default ordering
-
-    def get_queryset(self):
-        """Optimize query with select_related and prefetch_related."""
-        queryset = super().get_queryset()
-        queryset = queryset.select_related('category', 'brand')
-        queryset = queryset.prefetch_related('reviews', 'images')
-
-        # Add annotations for computed fields
-        queryset = queryset.annotate(
-            avg_rating=Avg('reviews__rating'),
-            review_count=Count('reviews')
-        )
-
-        return queryset
-```
-
-### Django Full-Text Search
-```python
-from django.contrib.postgres.search import (
-    SearchVector, SearchQuery, SearchRank, TrigramSimilarity
-)
-from django.db.models import F, Q
-
-class PostgreSQLFullTextSearch:
-    """PostgreSQL full-text search in Django."""
-
-    def search_products(self, query: str):
-        """Perform full-text search with ranking."""
-        from products.models import Product
-
-        # Create search vector
-        search_vector = SearchVector(
-            'title', weight='A'
-        ) + SearchVector(
-            'description', weight='B'
-        ) + SearchVector(
-            'tags', weight='C'
-        )
-
-        # Create search query
-        search_query = SearchQuery(query, config='english')
-
-        # Perform search with ranking
-        results = Product.objects.annotate(
-            search=search_vector,
-            rank=SearchRank(search_vector, search_query)
-        ).filter(
-            search=search_query
-        ).order_by('-rank')
-
-        return results
-
-    def trigram_search(self, query: str):
-        """Use trigram similarity for fuzzy matching."""
-        from products.models import Product
-
-        return Product.objects.annotate(
-            similarity=TrigramSimilarity('title', query)
-        ).filter(
-            similarity__gt=0.1
-        ).order_by('-similarity')
-
-    def combined_search(self, query: str):
-        """Combine full-text and trigram search."""
-        from products.models import Product
-
-        # Full-text search
-        search_vector = SearchVector('title', 'description')
-        search_query = SearchQuery(query)
-
-        # Combine with trigram similarity; the `search` annotation is needed
-        # so the Q(search=...) filter below has a field to match against
-        results = Product.objects.annotate(
-            search=search_vector,
-            search_rank=SearchRank(search_vector, search_query),
-            title_similarity=TrigramSimilarity('title', query),
-            combined_score=F('search_rank') + F('title_similarity')
-        ).filter(
-            Q(search=search_query) | Q(title_similarity__gt=0.1)
-        ).order_by('-combined_score')
-
-        return results
-```
-
-## Query Optimization
-
-### Index Strategies
-```python
-"""
-Database indexes for search optimization.
-Add these to your models or migrations.
-"""
-
-# SQLAlchemy indexes
-from sqlalchemy import Index
-
-class Product(Base):
-    __tablename__ = 'products'
-
-    id = Column(Integer, primary_key=True)
-    title = Column(String, nullable=False)
-    description = Column(Text)
-    category = Column(String, index=True)  # Single column index
-    brand = Column(String, index=True)
-    price = Column(Numeric(10, 2), index=True)
-    created_at = Column(DateTime, index=True)
-
-    # Composite indexes
-    __table_args__ = (
-        Index('idx_category_brand', 'category', 'brand'),
-        Index('idx_price_category', 'price', 'category'),
-        Index('idx_search_fields', 'title', 'description'),  # For text search
-    )
-
-# Django indexes
-class Product(models.Model):
-    title = models.CharField(max_length=200, db_index=True)
-    description = models.TextField()
-    category = models.CharField(max_length=50, db_index=True)
-    price = models.DecimalField(max_digits=10, decimal_places=2, db_index=True)
-
-    class Meta:
-        indexes = [
-            models.Index(fields=['category', 'brand']),
-            models.Index(fields=['price', '-created_at']),  # Compound index
-            models.Index(fields=['title'], name='title_idx'),
-        ]
-```
-
-### Query Performance Monitoring
-```python
-import time
-from contextlib import contextmanager
-import logging
-
-logger = logging.getLogger(__name__)
-
-@contextmanager
-def query_performance_monitor(operation_name: str):
-    """Monitor query execution time."""
-    start_time = time.time()
-
-    try:
-        yield
-    finally:
-        execution_time = time.time() - start_time
-
-        if execution_time > 1.0:  # Log slow queries
-            logger.warning(
-                f"Slow 
query detected: {operation_name} "
-                f"took {execution_time:.2f} seconds"
-            )
-        else:
-            logger.info(
-                f"Query {operation_name} "
-                f"executed in {execution_time:.3f} seconds"
-            )
-
-# Usage
-def search_products(query: str, filters: Dict):
-    with query_performance_monitor("product_search"):
-        results = SearchQueryBuilder(Product, session)\
-            .add_text_search(query, ['title', 'description'])\
-            .add_filters(filters)\
-            .execute()
-
-    return results
-```
-
-### Query Result Caching
-```python
-import hashlib
-import json
-import time
-
-class QueryCache:
-    """Simple query result caching."""
-
-    def __init__(self, ttl_seconds: int = 300):
-        self.cache = {}
-        self.ttl = ttl_seconds
-
-    def _generate_key(self, query: str, filters: Dict):
-        """Generate cache key from query parameters."""
-        cache_data = {
-            'query': query,
-            'filters': filters
-        }
-        cache_string = json.dumps(cache_data, sort_keys=True)
-        return hashlib.md5(cache_string.encode()).hexdigest()
-
-    def get(self, query: str, filters: Dict):
-        """Get cached results if available."""
-        key = self._generate_key(query, filters)
-
-        if key in self.cache:
-            result, timestamp = self.cache[key]
-            if time.time() - timestamp < self.ttl:
-                return result
-            else:
-                del self.cache[key]
-
-        return None
-
-    def set(self, query: str, filters: Dict, results):
-        """Cache query results."""
-        key = self._generate_key(query, filters)
-        self.cache[key] = (results, time.time())
-
-    def clear(self):
-        """Clear all cached results."""
-        self.cache.clear()
-
-# Usage
-cache = QueryCache(ttl_seconds=300)
-
-def cached_search(query: str, filters: Dict):
-    # Check cache
-    cached = cache.get(query, filters)
-    if cached:
-        return cached
-
-    # Perform search
-    results = perform_search(query, filters)
-
-    # Cache results
-    cache.set(query, filters, results)
-
-    return results
-```
-
-## Security Considerations
-
-### SQL Injection Prevention
-```python
-class SecureQueryBuilder:
-    """Secure query building with input 
validation.""" - - @staticmethod - def sanitize_search_term(term: str) -> str: - """Sanitize search input.""" - # Remove SQL special characters - dangerous_chars = [';', '--', '/*', '*/', 'xp_', 'sp_', '@@', '@'] - for char in dangerous_chars: - term = term.replace(char, '') - - # Limit length - return term[:100] - - @staticmethod - def validate_column_name(column: str, allowed_columns: List[str]) -> bool: - """Validate column name against whitelist.""" - return column in allowed_columns - - @staticmethod - def validate_sort_order(order: str) -> str: - """Validate sort order.""" - return 'desc' if order.lower() == 'desc' else 'asc' - - def build_safe_query(self, params: Dict): - """Build query with validation.""" - allowed_columns = ['title', 'description', 'category', 'brand', 'price'] - - # Validate and sanitize inputs - if 'search' in params: - params['search'] = self.sanitize_search_term(params['search']) - - if 'sort_by' in params: - if not self.validate_column_name(params['sort_by'], allowed_columns): - params['sort_by'] = 'created_at' # Default - - if 'order' in params: - params['order'] = self.validate_sort_order(params['order']) - - # Build query safely using parameterized queries - return self._build_query(params) -``` \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/references/elasticsearch-integration.md b/.claude/skills/implementing-search-filter/references/elasticsearch-integration.md deleted file mode 100644 index 1ff6e22b9..000000000 --- a/.claude/skills/implementing-search-filter/references/elasticsearch-integration.md +++ /dev/null @@ -1,736 +0,0 @@ -# Elasticsearch Integration Patterns - - -## Table of Contents - -- [Python Elasticsearch Client Setup](#python-elasticsearch-client-setup) - - [Basic Connection](#basic-connection) -- [Index Design and Mappings](#index-design-and-mappings) - - [Product Search Index](#product-search-index) -- [Search Query Patterns](#search-query-patterns) - - [Full-Text Search with 
Filters](#full-text-search-with-filters) - - [Autocomplete/Suggestions](#autocompletesuggestions) -- [Advanced Search Features](#advanced-search-features) - - [Boolean Query Builder](#boolean-query-builder) - - [Relevance Tuning](#relevance-tuning) -- [Performance Optimization](#performance-optimization) - - [Query Caching](#query-caching) - - [Scroll API for Large Results](#scroll-api-for-large-results) -- [Index Management](#index-management) - - [Reindexing Strategy](#reindexing-strategy) -- [Error Handling](#error-handling) - - [Robust Search with Retries](#robust-search-with-retries) - -## Python Elasticsearch Client Setup - -### Basic Connection -```python -from elasticsearch import Elasticsearch -from elasticsearch.helpers import bulk -import logging - -logger = logging.getLogger(__name__) - -class ElasticsearchClient: - """Elasticsearch client wrapper with connection management.""" - - def __init__(self, hosts=['localhost:9200'], **kwargs): - """Initialize Elasticsearch connection.""" - self.es = Elasticsearch( - hosts=hosts, - # Authentication if needed - http_auth=kwargs.get('http_auth'), - # Connection parameters - timeout=kwargs.get('timeout', 30), - max_retries=kwargs.get('max_retries', 3), - retry_on_timeout=kwargs.get('retry_on_timeout', True) - ) - - # Verify connection - if not self.es.ping(): - raise ValueError("Connection to Elasticsearch failed") - - logger.info(f"Connected to Elasticsearch: {self.es.info()['version']['number']}") - - def create_index(self, index_name: str, mappings: dict, settings: dict = None): - """Create an index with mappings.""" - body = {} - - if settings: - body['settings'] = settings - - if mappings: - body['mappings'] = mappings - - if not self.es.indices.exists(index=index_name): - self.es.indices.create(index=index_name, body=body) - logger.info(f"Created index: {index_name}") - else: - logger.info(f"Index {index_name} already exists") - - def delete_index(self, index_name: str): - """Delete an index.""" - if 
self.es.indices.exists(index=index_name):
-            self.es.indices.delete(index=index_name)
-            logger.info(f"Deleted index: {index_name}")
-```
-
-## Index Design and Mappings
-
-### Product Search Index
-```python
-class ProductIndexManager:
-    """Manage product search index."""
-
-    PRODUCT_INDEX = 'products'
-
-    PRODUCT_MAPPING = {
-        'properties': {
-            'id': {'type': 'keyword'},
-            'title': {
-                'type': 'text',
-                'analyzer': 'standard',
-                'fields': {
-                    'keyword': {'type': 'keyword'},
-                    'suggest': {
-                        'type': 'search_as_you_type'
-                    }
-                }
-            },
-            'description': {
-                'type': 'text',
-                'analyzer': 'english'
-            },
-            'category': {
-                'type': 'keyword',
-                'fields': {
-                    'text': {'type': 'text'}
-                }
-            },
-            'brand': {'type': 'keyword'},
-            'price': {'type': 'float'},
-            'tags': {'type': 'keyword'},
-            'in_stock': {'type': 'boolean'},
-            'created_at': {'type': 'date'},
-            'rating': {'type': 'float'},
-            'review_count': {'type': 'integer'},
-            'image_url': {'type': 'keyword'},
-            'attributes': {
-                'type': 'nested',
-                'properties': {
-                    'name': {'type': 'keyword'},
-                    'value': {'type': 'keyword'}
-                }
-            }
-        }
-    }
-
-    INDEX_SETTINGS = {
-        'number_of_shards': 1,
-        'number_of_replicas': 1,
-        'analysis': {
-            'analyzer': {
-                'autocomplete': {
-                    'tokenizer': 'autocomplete',
-                    'filter': ['lowercase']
-                },
-                'autocomplete_search': {
-                    'tokenizer': 'lowercase'
-                }
-            },
-            'tokenizer': {
-                'autocomplete': {
-                    'type': 'edge_ngram',
-                    'min_gram': 2,
-                    'max_gram': 10,
-                    'token_chars': ['letter', 'digit']
-                }
-            }
-        }
-    }
-
-    def __init__(self, es_client: ElasticsearchClient):
-        # Keep both the wrapper (for create_index) and the raw client (for bulk)
-        self.es_client = es_client
-        self.es = es_client.es
-
-    def create_product_index(self):
-        """Create product index with optimized mappings."""
-        self.es_client.create_index(
-            self.PRODUCT_INDEX,
-            self.PRODUCT_MAPPING,
-            self.INDEX_SETTINGS
-        )
-
-    def index_products(self, products: list):
-        """Bulk index products."""
-        actions = []
-
-        for product in products:
-            action = {
-                '_index': self.PRODUCT_INDEX,
-                '_id': product['id'],
-                '_source': product
-            }
- 
actions.append(action) - - success, failed = bulk(self.es, actions, raise_on_error=False) - logger.info(f"Indexed {success} products, {len(failed)} failed") - - if failed: - logger.error(f"Failed to index: {failed}") - - return success, failed -``` - -## Search Query Patterns - -### Full-Text Search with Filters -```python -from typing import Dict, List, Optional, Any - -class ProductSearcher: - """Execute product searches with Elasticsearch.""" - - def __init__(self, es_client: ElasticsearchClient): - self.es = es_client.es - - def search( - self, - query: str = None, - filters: Dict[str, Any] = None, - sort_by: str = None, - page: int = 1, - size: int = 20, - facets: List[str] = None - ): - """Perform product search with filters and facets.""" - - # Build Elasticsearch query - es_query = self._build_query(query, filters) - - # Build request body - body = { - 'query': es_query, - 'from': (page - 1) * size, - 'size': size - } - - # Add sorting - if sort_by: - body['sort'] = self._build_sort(sort_by) - - # Add aggregations for facets - if facets: - body['aggs'] = self._build_aggregations(facets) - - # Execute search - response = self.es.search( - index='products', - body=body - ) - - return self._parse_response(response) - - def _build_query(self, query: str, filters: Dict[str, Any]): - """Build Elasticsearch query with filters.""" - must = [] - filter_clauses = [] - - # Text search - if query: - must.append({ - 'multi_match': { - 'query': query, - 'fields': [ - 'title^3', # Boost title matches - 'description^2', # Medium boost for description - 'tags', - 'category.text', - 'brand' - ], - 'type': 'best_fields', - 'fuzziness': 'AUTO' - } - }) - - # Apply filters - if filters: - for field, value in filters.items(): - if isinstance(value, list): - # Multiple values - use terms query - filter_clauses.append({ - 'terms': {field: value} - }) - elif isinstance(value, dict): - # Range filter - if 'min' in value or 'max' in value: - range_filter = {} - if 'min' in value: - 
range_filter['gte'] = value['min'] - if 'max' in value: - range_filter['lte'] = value['max'] - - filter_clauses.append({ - 'range': {field: range_filter} - }) - else: - # Exact match - filter_clauses.append({ - 'term': {field: value} - }) - - # Combine queries - if must or filter_clauses: - return { - 'bool': { - 'must': must, - 'filter': filter_clauses - } - } - else: - return {'match_all': {}} - - def _build_sort(self, sort_by: str): - """Build sort clause.""" - sort_options = { - 'relevance': ['_score'], - 'price_asc': [{'price': 'asc'}], - 'price_desc': [{'price': 'desc'}], - 'newest': [{'created_at': 'desc'}], - 'rating': [{'rating': 'desc'}], - } - - return sort_options.get(sort_by, ['_score']) - - def _build_aggregations(self, facets: List[str]): - """Build aggregations for faceted search.""" - aggs = {} - - for facet in facets: - if facet == 'price': - # Range aggregation for price - aggs['price_ranges'] = { - 'range': { - 'field': 'price', - 'ranges': [ - {'key': 'Under $50', 'to': 50}, - {'key': '$50-$100', 'from': 50, 'to': 100}, - {'key': '$100-$200', 'from': 100, 'to': 200}, - {'key': 'Over $200', 'from': 200} - ] - } - } - else: - # Terms aggregation for categorical fields - aggs[facet] = { - 'terms': { - 'field': facet, - 'size': 20 - } - } - - return aggs - - def _parse_response(self, response): - """Parse Elasticsearch response.""" - results = { - 'total': response['hits']['total']['value'], - 'items': [], - 'facets': {} - } - - # Extract search results - for hit in response['hits']['hits']: - item = hit['_source'] - item['_score'] = hit['_score'] - results['items'].append(item) - - # Extract facets - if 'aggregations' in response: - for facet_name, facet_data in response['aggregations'].items(): - if 'buckets' in facet_data: - results['facets'][facet_name] = [ - { - 'value': bucket.get('key'), - 'count': bucket.get('doc_count') - } - for bucket in facet_data['buckets'] - ] - - return results -``` - -### Autocomplete/Suggestions -```python -class 
AutocompleteSearcher: - """Implement autocomplete with Elasticsearch.""" - - def __init__(self, es_client: ElasticsearchClient): - self.es = es_client.es - - def suggest(self, prefix: str, size: int = 10): - """Get autocomplete suggestions.""" - body = { - 'query': { - 'multi_match': { - 'query': prefix, - 'type': 'bool_prefix', - 'fields': [ - 'title.suggest', - 'title.suggest._2gram', - 'title.suggest._3gram' - ] - } - }, - 'size': size, - '_source': ['title', 'category', 'brand'] - } - - response = self.es.search(index='products', body=body) - - suggestions = [] - for hit in response['hits']['hits']: - suggestions.append({ - 'text': hit['_source']['title'], - 'category': hit['_source'].get('category'), - 'brand': hit['_source'].get('brand') - }) - - return suggestions - - def search_as_you_type(self, query: str): - """Real-time search suggestions.""" - body = { - 'suggest': { - 'product-suggest': { - 'prefix': query, - 'completion': { - 'field': 'title.suggest', - 'size': 5, - 'skip_duplicates': True - } - } - } - } - - response = self.es.search(index='products', body=body) - - suggestions = [] - for option in response['suggest']['product-suggest'][0]['options']: - suggestions.append({ - 'text': option['text'], - 'score': option['_score'] - }) - - return suggestions -``` - -## Advanced Search Features - -### Boolean Query Builder -```python -class BooleanQueryBuilder: - """Build complex boolean queries for Elasticsearch.""" - - def build_advanced_query(self, search_params: Dict): - """ - Build advanced query with AND/OR/NOT operators. 
- - Example params: - { - 'must': ['laptop', 'dell'], - 'should': ['gaming', 'professional'], - 'must_not': ['refurbished'], - 'fields': { - 'title': 'laptop', - 'brand': 'dell' - } - } - """ - bool_query = { - 'bool': {} - } - - # Must clauses (AND) - if 'must' in search_params: - bool_query['bool']['must'] = [ - {'match': {'_all': term}} - for term in search_params['must'] - ] - - # Should clauses (OR) - if 'should' in search_params: - bool_query['bool']['should'] = [ - {'match': {'_all': term}} - for term in search_params['should'] - ] - bool_query['bool']['minimum_should_match'] = 1 - - # Must not clauses (NOT) - if 'must_not' in search_params: - bool_query['bool']['must_not'] = [ - {'match': {'_all': term}} - for term in search_params['must_not'] - ] - - # Field-specific searches - if 'fields' in search_params: - if 'must' not in bool_query['bool']: - bool_query['bool']['must'] = [] - - for field, value in search_params['fields'].items(): - bool_query['bool']['must'].append({ - 'match': {field: value} - }) - - return bool_query -``` - -### Relevance Tuning -```python -class RelevanceTuner: - """Tune search relevance with boosting and scoring.""" - - def search_with_boosting(self, query: str, user_context: Dict = None): - """Search with context-aware boosting.""" - - # Base query - base_query = { - 'multi_match': { - 'query': query, - 'fields': [ - 'title^3', - 'description^2', - 'tags' - ] - } - } - - # Apply function score for personalization - function_score = { - 'function_score': { - 'query': base_query, - 'functions': [] - } - } - - # Boost recent products - function_score['function_score']['functions'].append({ - 'gauss': { - 'created_at': { - 'origin': 'now', - 'scale': '30d', - 'decay': 0.5 - } - }, - 'weight': 1.5 - }) - - # Boost highly rated products - function_score['function_score']['functions'].append({ - 'field_value_factor': { - 'field': 'rating', - 'factor': 1.2, - 'modifier': 'sqrt', - 'missing': 1 - } - }) - - # User preference boosting - if 
user_context and 'preferred_categories' in user_context:
-            for category in user_context['preferred_categories']:
-                function_score['function_score']['functions'].append({
-                    'filter': {'term': {'category': category}},
-                    'weight': 2.0
-                })
-
-        # Combine scores
-        function_score['function_score']['score_mode'] = 'sum'
-        function_score['function_score']['boost_mode'] = 'multiply'
-
-        return function_score
-```
-
-## Performance Optimization
-
-### Query Caching
-```python
-from functools import lru_cache
-import hashlib
-import json
-import time
-
-class ElasticsearchCache:
-    """Cache Elasticsearch queries for performance."""
-
-    def __init__(self, es_client: ElasticsearchClient):
-        self.es = es_client.es
-        self._cache = {}
-
-    @lru_cache(maxsize=100)
-    def _get_cache_key(self, query_str: str):
-        """Generate cache key from query."""
-        return hashlib.md5(query_str.encode()).hexdigest()
-
-    def search_with_cache(self, index: str, body: dict, cache_ttl: int = 300):
-        """Execute search with caching."""
-
-        # Generate cache key
-        query_str = json.dumps(body, sort_keys=True)
-        cache_key = self._get_cache_key(query_str)
-
-        # Check cache
-        if cache_key in self._cache:
-            cached_result, timestamp = self._cache[cache_key]
-            if time.time() - timestamp < cache_ttl:
-                return cached_result
-
-        # Execute query
-        result = self.es.search(index=index, body=body)
-
-        # Cache result
-        self._cache[cache_key] = (result, time.time())
-
-        return result
-```
-
-### Scroll API for Large Results
-```python
-class ScrollSearcher:
-    """Handle large result sets with scroll API."""
-
-    def __init__(self, es_client: ElasticsearchClient):
-        self.es = es_client.es
-
-    def scroll_all_products(self, query: dict = None, batch_size: int = 1000):
-        """Scroll through all matching products."""
-
-        if query is None:
-            query = {'match_all': {}}
-
-        # Initialize scroll
-        response = self.es.search(
-            index='products',
-            body={'query': query, 'size': batch_size},
-            scroll='2m'  # Keep scroll context for 2 minutes
-        )
-
-        scroll_id = response['_scroll_id']
-        results = response['hits']['hits']
-
-        # Yield first batch
-        yield results
-
-        # Continue scrolling
-        while len(results) > 0:
-            response = self.es.scroll(
-                scroll_id=scroll_id,
-                scroll='2m'
-            )
-
-            scroll_id = response['_scroll_id']
-            results = response['hits']['hits']
-
-            if results:
-                yield results
-
-        # Clear scroll context
-        self.es.clear_scroll(scroll_id=scroll_id)
-```
-
-## Index Management
-
-### Reindexing Strategy
-```python
-class IndexManager:
-    """Manage index lifecycle and reindexing."""
-
-    def __init__(self, es_client: ElasticsearchClient):
-        self.es = es_client.es
-
-    def reindex_with_zero_downtime(self, old_index: str, new_index: str, new_mapping: dict):
-        """Reindex with zero downtime using aliases."""
-
-        # 1. Create new index with updated mapping
-        self.es.indices.create(index=new_index, body={'mappings': new_mapping})
-
-        # 2. Reindex data, keeping the task handle for step 3
-        response = self.es.reindex(
-            body={
-                'source': {'index': old_index},
-                'dest': {'index': new_index}
-            },
-            wait_for_completion=False
-        )
-
-        # 3. Wait for reindex to complete
-        task_id = response['task']
-        self._wait_for_task(task_id)
-
-        # 4. Verify document count
-        old_count = self.es.count(index=old_index)['count']
-        new_count = self.es.count(index=new_index)['count']
-
-        if old_count != new_count:
-            raise ValueError(f"Document count mismatch: {old_count} vs {new_count}")
-
-        # 5. Switch alias atomically
-        self.es.indices.update_aliases(
-            body={
-                'actions': [
-                    {'remove': {'index': old_index, 'alias': 'products'}},
-                    {'add': {'index': new_index, 'alias': 'products'}}
-                ]
-            }
-        )
-
-        # 6. 
Delete old index (optional) - # self.es.indices.delete(index=old_index) - - logger.info(f"Successfully reindexed from {old_index} to {new_index}") -``` - -## Error Handling - -### Robust Search with Retries -```python -from elasticsearch.exceptions import ( - ConnectionError, - ConnectionTimeout, - TransportError -) -import time - -class RobustSearcher: - """Elasticsearch search with error handling and retries.""" - - def __init__(self, es_client: ElasticsearchClient): - self.es = es_client.es - self.max_retries = 3 - self.retry_delay = 1 # seconds - - def search_with_retry(self, index: str, body: dict): - """Execute search with automatic retry on failure.""" - - last_exception = None - - for attempt in range(self.max_retries): - try: - return self.es.search(index=index, body=body) - - except ConnectionTimeout as e: - logger.warning(f"Search timeout (attempt {attempt + 1}): {e}") - last_exception = e - time.sleep(self.retry_delay * (attempt + 1)) - - except ConnectionError as e: - logger.error(f"Connection error (attempt {attempt + 1}): {e}") - last_exception = e - time.sleep(self.retry_delay * (attempt + 1)) - - except TransportError as e: - if e.status_code == 429: # Too many requests - logger.warning(f"Rate limited, backing off...") - time.sleep(self.retry_delay * (attempt + 2)) - else: - logger.error(f"Transport error: {e}") - raise - - # All retries failed - raise last_exception -``` \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/references/filter-ui-patterns.md b/.claude/skills/implementing-search-filter/references/filter-ui-patterns.md deleted file mode 100644 index e8418e911..000000000 --- a/.claude/skills/implementing-search-filter/references/filter-ui-patterns.md +++ /dev/null @@ -1,633 +0,0 @@ -# Filter UI Patterns - - -## Table of Contents - -- [Checkbox Filters](#checkbox-filters) - - [Basic Multi-Select Filter](#basic-multi-select-filter) - - [Collapsible Filter Groups](#collapsible-filter-groups) -- [Range 
Filters](#range-filters) - - [Price Range Slider](#price-range-slider) - - [Date Range Picker](#date-range-picker) -- [Dropdown Filters](#dropdown-filters) - - [Single Select Dropdown](#single-select-dropdown) - - [Searchable Dropdown with Downshift](#searchable-dropdown-with-downshift) -- [Filter Chips](#filter-chips) - - [Active Filter Display](#active-filter-display) -- [Faceted Search](#faceted-search) - - [Dynamic Count Updates](#dynamic-count-updates) -- [Mobile Filter Patterns](#mobile-filter-patterns) - - [Filter Drawer](#filter-drawer) -- [Sort Options](#sort-options) - - [Sort Dropdown](#sort-dropdown) -- [Filter State Management](#filter-state-management) - - [Using URL Parameters](#using-url-parameters) -- [Accessibility Considerations](#accessibility-considerations) - - [Filter Region ARIA](#filter-region-aria) - - [Keyboard Navigation](#keyboard-navigation) - -## Checkbox Filters - -### Basic Multi-Select Filter -```tsx -interface FilterOption { - id: string; - label: string; - count?: number; -} - -interface CheckboxFilterProps { - title: string; - options: FilterOption[]; - selected: string[]; - onChange: (selected: string[]) => void; -} - -export function CheckboxFilter({ - title, - options, - selected, - onChange -}: CheckboxFilterProps) { - const handleToggle = (optionId: string) => { - if (selected.includes(optionId)) { - onChange(selected.filter(id => id !== optionId)); - } else { - onChange([...selected, optionId]); - } - }; - - const handleSelectAll = () => { - if (selected.length === options.length) { - onChange([]); - } else { - onChange(options.map(opt => opt.id)); - } - }; - - return ( -
-    <fieldset className="checkbox-filter">
-      <legend>{title}</legend>
-
-      <button type="button" onClick={handleSelectAll}>
-        {selected.length === options.length ? 'Clear all' : 'Select all'}
-      </button>
-
-      {options.map(option => (
-        <label key={option.id}>
-          <input
-            type="checkbox"
-            checked={selected.includes(option.id)}
-            onChange={() => handleToggle(option.id)}
-          />
-          <span>{option.label}</span>
-          {option.count !== undefined && <span>({option.count})</span>}
-        </label>
-      ))}
-    </fieldset>
-  );
-}
-```
-
-### Collapsible Filter Groups
-```tsx
-import { ChevronDown, ChevronUp } from 'lucide-react';
-
-function CollapsibleFilter({ title, children, defaultOpen = true }) {
-  const [isOpen, setIsOpen] = useState(defaultOpen);
-
-  return (
-    <div className="collapsible-filter">
-      <button
-        type="button"
-        onClick={() => setIsOpen(!isOpen)}
-        aria-expanded={isOpen}
-      >
-        {title}
-        {isOpen ? <ChevronUp /> : <ChevronDown />}
-      </button>
-
-      {isOpen && (
-        <div>
-          {children}
-        </div>
-      )}
-    </div>
    - ); -} -``` - -## Range Filters - -### Price Range Slider -```tsx -interface RangeFilterProps { - min: number; - max: number; - value: [number, number]; - onChange: (value: [number, number]) => void; - step?: number; - prefix?: string; -} - -export function RangeFilter({ - min, - max, - value, - onChange, - step = 1, - prefix = '$' -}: RangeFilterProps) { - const [localValue, setLocalValue] = useState(value); - - useEffect(() => { - const timeoutId = setTimeout(() => { - onChange(localValue); - }, 500); // Debounce - - return () => clearTimeout(timeoutId); - }, [localValue]); - - return ( -
-    <div className="range-filter">
-      <div className="range-inputs">
-        <input
-          type="number"
-          value={localValue[0]}
-          onChange={e => setLocalValue([+e.target.value, localValue[1]])}
-          min={min}
-          max={localValue[1]}
-          aria-label="Minimum price"
-        />
-        <span>to</span>
-        <input
-          type="number"
-          value={localValue[1]}
-          onChange={e => setLocalValue([localValue[0], +e.target.value])}
-          min={localValue[0]}
-          max={max}
-          aria-label="Maximum price"
-        />
-      </div>
-
-      <div className="range-sliders">
-        <input
-          type="range"
-          min={min}
-          max={max}
-          value={localValue[0]}
-          onChange={e => setLocalValue([+e.target.value, localValue[1]])}
-          step={step}
-        />
-        <input
-          type="range"
-          min={min}
-          max={max}
-          value={localValue[1]}
-          onChange={e => setLocalValue([localValue[0], +e.target.value])}
-          step={step}
-        />
-      </div>
-
-      <div className="range-labels">
-        <span>{prefix}{min}</span>
-        <span>{prefix}{max}</span>
-      </div>
-    </div>
    - ); -} -``` - -### Date Range Picker -```tsx -import { Calendar } from 'lucide-react'; - -function DateRangeFilter({ value, onChange }) { - const [startDate, endDate] = value; - - return ( -
-    <div className="date-range-filter">
-      <div>
-        <Calendar size={16} />
-        <input
-          type="date"
-          value={startDate}
-          onChange={e => onChange([e.target.value, endDate])}
-          aria-label="Start date"
-        />
-      </div>
-
-      <span>to</span>
-
-      <div>
-        <input
-          type="date"
-          value={endDate}
-          onChange={e => onChange([startDate, e.target.value])}
-          min={startDate}
-          aria-label="End date"
-        />
-      </div>
-    </div>
    - ); -} -``` - -## Dropdown Filters - -### Single Select Dropdown -```tsx -interface DropdownFilterProps { - label: string; - options: { value: string; label: string }[]; - value: string; - onChange: (value: string) => void; - placeholder?: string; -} - -export function DropdownFilter({ - label, - options, - value, - onChange, - placeholder = 'Select...' -}: DropdownFilterProps) { - return ( -
    - - -
    - ); -} -``` - -### Searchable Dropdown with Downshift -```tsx -import { useCombobox } from 'downshift'; - -function SearchableDropdown({ items, onSelect, placeholder }) { - const [inputItems, setInputItems] = useState(items); - - const { - isOpen, - getToggleButtonProps, - getLabelProps, - getMenuProps, - getInputProps, - highlightedIndex, - getItemProps, - selectedItem, - } = useCombobox({ - items: inputItems, - onInputValueChange: ({ inputValue }) => { - setInputItems( - items.filter(item => - item.toLowerCase().includes(inputValue.toLowerCase()) - ) - ); - }, - onSelectedItemChange: ({ selectedItem }) => { - onSelect(selectedItem); - }, - }); - - return ( -
    - - -
    - - -
    - -
      - {isOpen && - inputItems.map((item, index) => ( -
    • - {item} -
    • - ))} -
    -
    - ); -} -``` - -## Filter Chips - -### Active Filter Display -```tsx -import { X } from 'lucide-react'; - -interface FilterChip { - id: string; - label: string; - value: string; -} - -interface ActiveFiltersProps { - filters: FilterChip[]; - onRemove: (filterId: string) => void; - onClearAll: () => void; -} - -export function ActiveFilters({ - filters, - onRemove, - onClearAll -}: ActiveFiltersProps) { - if (filters.length === 0) return null; - - return ( -
-    <div role="region" aria-label="Active filters">
-      <span>Active filters:</span>
-      {filters.map(filter => (
-        <span key={filter.id}>
-          {filter.label}: {filter.value}
-          <button
-            onClick={() => onRemove(filter.id)}
-            aria-label={`Remove ${filter.label} filter`}
-          >
-            <X aria-hidden="true" />
-          </button>
-        </span>
-      ))}
-      <button onClick={onClearAll}>Clear all</button>
-    </div>
-  );
-}
-```
-
-## Faceted Search
-
-### Dynamic Count Updates
-```tsx
-interface FacetedSearchProps {
-  facets: {
-    category: string;
-    options: Array<{
-      value: string;
-      label: string;
-      count: number;
-      disabled?: boolean;
-    }>;
-  }[];
-  selected: Record<string, string[]>;
-  onChange: (category: string, values: string[]) => void;
-}
-
-export function FacetedSearch({
-  facets,
-  selected,
-  onChange
-}: FacetedSearchProps) {
-  return (
-    <div>
-      {facets.map(facet => (
-        <fieldset key={facet.category}>
-          <legend>{facet.category}</legend>
-          {facet.options.map(option => (
-            <label key={option.value}>
-              <input
-                type="checkbox"
-                checked={(selected[facet.category] ?? []).includes(option.value)}
-                disabled={option.disabled || option.count === 0}
-                onChange={e => {
-                  const current = selected[facet.category] ?? [];
-                  onChange(
-                    facet.category,
-                    e.target.checked
-                      ? [...current, option.value]
-                      : current.filter(v => v !== option.value)
-                  );
-                }}
-              />
-              {option.label} ({option.count})
-            </label>
-          ))}
-        </fieldset>
-      ))}
-    </div>
-  );
-}
-```
-
-## Mobile Filter Patterns
-
-### Filter Drawer
-```tsx
-import { Filter, X } from 'lucide-react';
-
-function MobileFilterDrawer({ children, filterCount = 0 }) {
-  const [isOpen, setIsOpen] = useState(false);
-
-  return (
-    <>
-      <button onClick={() => setIsOpen(true)}>
-        <Filter aria-hidden="true" />
-        Filters
-        {filterCount > 0 && <span>{filterCount}</span>}
-      </button>
-
-      {isOpen && (
-        <>
-          <div
-            aria-hidden="true"
-            onClick={() => setIsOpen(false)}
-          />
-
-          <div role="dialog" aria-modal="true" aria-label="Filters">
-            <header>
-              <h2>Filters</h2>
-              <button onClick={() => setIsOpen(false)} aria-label="Close filters">
-                <X aria-hidden="true" />
-              </button>
-            </header>
-
-            <div>{children}</div>
-
-            <footer>
-              <button onClick={() => setIsOpen(false)}>Apply filters</button>
-            </footer>
-          </div>
-        </>
-      )}
-    </>
-  );
-}
-```
-
-## Sort Options
-
-### Sort Dropdown
-```tsx
-interface SortOption {
-  value: string;
-  label: string;
-}
-
-const sortOptions: SortOption[] = [
-  { value: 'relevance', label: 'Most Relevant' },
-  { value: 'price-asc', label: 'Price: Low to High' },
-  { value: 'price-desc', label: 'Price: High to Low' },
-  { value: 'rating', label: 'Highest Rated' },
-  { value: 'newest', label: 'Newest First' },
-];
-
-export function SortDropdown({ value, onChange }) {
-  return (
-    <div>
-      <label htmlFor="sort-select">Sort by</label>
-      <select
-        id="sort-select"
-        value={value}
-        onChange={e => onChange(e.target.value)}
-      >
-        {sortOptions.map(option => (
-          <option key={option.value} value={option.value}>
-            {option.label}
-          </option>
-        ))}
-      </select>
-    </div>
-  );
-}
-```
-
-## Filter State Management
-
-### Using URL Parameters
-```tsx
-import { useSearchParams } from 'react-router-dom';
-
-function useFilterState() {
-  const [searchParams, setSearchParams] = useSearchParams();
-
-  const getFilters = () => {
-    const filters: Record<string, string[]> = {};
-
-    searchParams.forEach((value, key) => {
-      if (!filters[key]) {
-        filters[key] = [];
-      }
-      filters[key].push(value);
-    });
-
-    return filters;
-  };
-
-  const updateFilter = (key: string, values: string[]) => {
-    const newParams = new URLSearchParams(searchParams);
-
-    // Remove existing
-    newParams.delete(key);
-
-    // Add new values
-    values.forEach(value => {
-      newParams.append(key, value);
-    });
-
-    setSearchParams(newParams);
-  };
-
-  const clearFilters = () => {
-    setSearchParams(new URLSearchParams());
-  };
-
-  return {
-    filters: getFilters(),
-    updateFilter,
-    clearFilters,
-  };
-}
-```
-
-## Accessibility Considerations
-
-### Filter Region ARIA
-```tsx
-<aside role="search" aria-label="Product filters">
-  <h2 id="filter-heading">Filter Products</h2>
-
-  <div role="group" aria-labelledby="filter-heading">
-    {/* Filter groups */}
-  </div>
-
-  <div aria-live="polite" aria-atomic="true">
-    {resultCount} products found
-  </div>
-</aside>
-```
-
-### Keyboard Navigation
-```tsx
-// Ensure all interactive elements are keyboard accessible
-// Tab order should be logical
-// Provide skip links for long filter lists
-
-<a href="#results">
-  Skip to results
-</a>
-```
\ No newline at end of file
diff --git a/.claude/skills/implementing-search-filter/references/library-comparison.md b/.claude/skills/implementing-search-filter/references/library-comparison.md
deleted file mode 100644
index b2ed7e64e..000000000
--- a/.claude/skills/implementing-search-filter/references/library-comparison.md
+++ /dev/null
@@ -1,314 +0,0 @@
-# Search & Filter Library Comparison
-
-
-## Table of Contents
-
-- [Frontend Libraries](#frontend-libraries)
-  - [Autocomplete/Combobox Libraries](#autocompletecombobox-libraries)
-  - [Search UI Frameworks](#search-ui-frameworks)
-- [Backend Search Technologies](#backend-search-technologies)
-  - [Database Full-Text Search](#database-full-text-search)
-  - [Dedicated Search Engines](#dedicated-search-engines)
-- [Python Search Libraries](#python-search-libraries)
-  - [ORM Integration](#orm-integration)
-  - [Elasticsearch Clients](#elasticsearch-clients)
-- [Detailed Library Analysis](#detailed-library-analysis)
-  - [Downshift (Recommended for Autocomplete)](#downshift-recommended-for-autocomplete)
-  - [React Select (Alternative for Quick Implementation)](#react-select-alternative-for-quick-implementation)
-  - [Elasticsearch vs Alternatives](#elasticsearch-vs-alternatives)
-- [Decision Matrix](#decision-matrix)
-  - [Choose Downshift when:](#choose-downshift-when)
-  - [Choose React Select when:](#choose-react-select-when)
-  - [Choose PostgreSQL FTS when:](#choose-postgresql-fts-when)
-  - [Choose Elasticsearch when:](#choose-elasticsearch-when)
-  - [Choose Algolia when:](#choose-algolia-when)
-  - [Choose MeiliSearch when:](#choose-meilisearch-when)
-- [Performance Benchmarks](#performance-benchmarks)
-  - [Autocomplete Response Times](#autocomplete-response-times)
-  - [Search Engine Query
Times](#search-engine-query-times) -- [Migration Paths](#migration-paths) - - [From React Autosuggest to Downshift](#from-react-autosuggest-to-downshift) - - [From PostgreSQL to Elasticsearch](#from-postgresql-to-elasticsearch) -- [Recommendations by Project Type](#recommendations-by-project-type) - - [Small E-commerce (< 10K products)](#small-e-commerce-10k-products) - - [Medium E-commerce (10K - 100K products)](#medium-e-commerce-10k-100k-products) - - [Large E-commerce (> 100K products)](#large-e-commerce-100k-products) - - [Internal Dashboard](#internal-dashboard) - - [SaaS Application](#saas-application) - -## Frontend Libraries - -### Autocomplete/Combobox Libraries - -| Library | Bundle Size | TypeScript | Accessibility | Key Features | Best For | -|---------|------------|------------|---------------|--------------|----------| -| **Downshift** | 40KB | ✅ Excellent | ⭐⭐⭐⭐⭐ WAI-ARIA | Headless, flexible, hooks | Custom designs | -| **React Select** | 160KB | ✅ Native | ⭐⭐⭐⭐ Good | Feature-rich, styled | Quick implementation | -| **React Autosuggest** | 14KB | ✅ Good | ⭐⭐⭐⭐ Good | Lightweight, simple | Basic autocomplete | -| **@reach/combobox** | 20KB | ✅ Native | ⭐⭐⭐⭐⭐ Excellent | Accessible, minimal | Accessibility focus | -| **Headless UI** | 25KB | ✅ Native | ⭐⭐⭐⭐⭐ Excellent | Tailwind integration | Tailwind projects | - -### Search UI Frameworks - -| Framework | Use Case | Learning Curve | Flexibility | Performance | -|-----------|----------|----------------|-------------|-------------| -| **InstantSearch** (Algolia) | Algolia search | Low | Medium | ⭐⭐⭐⭐⭐ | -| **SearchKit** | Elasticsearch | Medium | High | ⭐⭐⭐⭐ | -| **Reactive Search** | Elasticsearch | Low | Medium | ⭐⭐⭐⭐ | -| **MeiliSearch UI** | MeiliSearch | Low | Medium | ⭐⭐⭐⭐⭐ | - -## Backend Search Technologies - -### Database Full-Text Search - -| Database | Setup Complexity | Performance | Features | Best For | -|----------|-----------------|-------------|----------|----------| -| **PostgreSQL 
FTS** | Low | ⭐⭐⭐⭐ Good | Decent, built-in | Small-medium datasets | -| **MySQL FULLTEXT** | Low | ⭐⭐⭐ Moderate | Basic | Simple searches | -| **MongoDB Text** | Low | ⭐⭐⭐ Moderate | Basic text search | Document stores | -| **SQLite FTS5** | Low | ⭐⭐⭐ Good | Surprisingly capable | Embedded/mobile | - -### Dedicated Search Engines - -| Engine | Performance | Scalability | Setup | Cost | Best For | -|--------|------------|-------------|-------|------|----------| -| **Elasticsearch** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Complex | High | Enterprise search | -| **Algolia** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Simple | $$/month | SaaS, instant search | -| **MeiliSearch** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Simple | Free/OSS | Modern alternative | -| **Typesense** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Simple | Free/OSS | Typo-tolerant search | -| **Sonic** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Simple | Free/OSS | Lightweight, fast | - -## Python Search Libraries - -### ORM Integration - -| Library | ORM | Features | Performance | Use Case | -|---------|-----|----------|-------------|----------| -| **Django Filter** | Django | Declarative filters | ⭐⭐⭐⭐ | Django REST APIs | -| **SQLAlchemy-Searchable** | SQLAlchemy | PostgreSQL FTS | ⭐⭐⭐⭐ | Flask/FastAPI | -| **Django Haystack** | Django | Multi-backend | ⭐⭐⭐ | Django + ES/Solr | -| **Whoosh** | Any | Pure Python | ⭐⭐⭐ | Small projects | - -### Elasticsearch Clients - -| Client | Abstraction Level | Learning Curve | Features | -|--------|------------------|----------------|----------| -| **elasticsearch-py** | Low | High | Full API access | -| **elasticsearch-dsl** | Medium | Medium | Pythonic queries | -| **elastic-apm** | N/A | Low | Performance monitoring | - -## Detailed Library Analysis - -### Downshift (Recommended for Autocomplete) - -**Pros:** -- Fully accessible (WAI-ARIA compliant) -- Headless - complete control over styling -- Excellent TypeScript support -- Hooks-based API -- Small bundle size for features offered -- Active maintenance - -**Cons:** -- Requires more setup than pre-styled solutions -- Need to 
implement visual design -- Learning curve for advanced features - -**Installation:** -```bash -npm install downshift -``` - -**Basic Example:** -```tsx -import { useCombobox } from 'downshift'; - -function Autocomplete({ items, onSelect }) { - const { - isOpen, - getToggleButtonProps, - getMenuProps, - getInputProps, - highlightedIndex, - getItemProps, - } = useCombobox({ - items, - onSelectedItemChange: ({ selectedItem }) => onSelect(selectedItem) - }); - - // Render UI with spread props -} -``` - -### React Select (Alternative for Quick Implementation) - -**Pros:** -- Feature-rich out of the box -- Pre-styled with theming support -- Async/creatable/multi-select variants -- Good documentation -- Large community - -**Cons:** -- Large bundle size (160KB) -- Opinionated styling -- Harder to customize deeply -- Some accessibility issues in edge cases - -**Installation:** -```bash -npm install react-select -``` - -### Elasticsearch vs Alternatives - -**Elasticsearch:** -- ✅ Industry standard -- ✅ Powerful query DSL -- ✅ Excellent performance -- ❌ Resource intensive -- ❌ Complex setup and maintenance -- ❌ Expensive at scale - -**MeiliSearch:** -- ✅ Simple setup -- ✅ Typo-tolerant by default -- ✅ Fast indexing -- ✅ Lower resource usage -- ❌ Fewer advanced features -- ❌ Smaller ecosystem - -**Algolia:** -- ✅ Fastest search responses -- ✅ Zero infrastructure -- ✅ Excellent developer experience -- ❌ Expensive for large datasets -- ❌ Vendor lock-in -- ❌ Data leaves your infrastructure - -## Decision Matrix - -### Choose Downshift when: -- Accessibility is critical -- Need full control over UI -- Building a design system -- Want minimal bundle size -- Using TypeScript - -### Choose React Select when: -- Need quick implementation -- OK with larger bundle -- Want pre-built features -- Don't need deep customization - -### Choose PostgreSQL FTS when: -- Data already in PostgreSQL -- < 1 million searchable records -- Simple search requirements -- Want to avoid additional 
infrastructure - -### Choose Elasticsearch when: -- > 1 million records -- Need complex search features -- Multi-language support required -- Faceted search is critical -- Have DevOps resources - -### Choose Algolia when: -- Need instant global search -- SaaS/e-commerce application -- Can afford the pricing -- Want zero infrastructure - -### Choose MeiliSearch when: -- Want Algolia-like experience -- Need on-premise solution -- Cost is a concern -- Moderate scale (< 10M records) - -## Performance Benchmarks - -### Autocomplete Response Times -| Library | First Render | Typing Lag | 1K Items | 10K Items | -|---------|--------------|------------|----------|-----------| -| Downshift | 15ms | <5ms | 20ms | 150ms* | -| React Select | 45ms | 10ms | 35ms | 400ms | -| Native datalist | 5ms | 0ms | 50ms | 500ms | - -*With virtualization - -### Search Engine Query Times -| Engine | Simple Query | Complex Query | Faceted Search | 1M Records | -|--------|--------------|---------------|----------------|------------| -| PostgreSQL | 10ms | 50ms | 100ms | 200ms | -| Elasticsearch | 5ms | 15ms | 20ms | 25ms | -| MeiliSearch | 3ms | 10ms | 15ms | 20ms | -| Algolia | 2ms | 5ms | 8ms | 10ms | - -## Migration Paths - -### From React Autosuggest to Downshift -```tsx -// React Autosuggest - - -// Downshift equivalent -const {...props} = useCombobox({ - items: suggestions, - onInputValueChange: ({ inputValue }) => onFetch(inputValue), - itemToString: getValue -}); -// Custom render with props -``` - -### From PostgreSQL to Elasticsearch -```python -# PostgreSQL FTS -query = session.query(Product).filter( - func.to_tsvector('english', Product.title).match(search_term) -) - -# Elasticsearch equivalent -results = es.search( - index='products', - body={ - 'query': { - 'match': { - 'title': search_term - } - } - } -) -``` - -## Recommendations by Project Type - -### Small E-commerce (< 10K products) -- **Frontend**: Downshift + React Query -- **Backend**: PostgreSQL FTS -- **API**: REST with 
query parameters
-
-### Medium E-commerce (10K - 100K products)
-- **Frontend**: Downshift + SWR
-- **Backend**: MeiliSearch or PostgreSQL with indexes
-- **API**: GraphQL or REST with pagination
-
-### Large E-commerce (> 100K products)
-- **Frontend**: InstantSearch or custom with Downshift
-- **Backend**: Elasticsearch or Algolia
-- **API**: REST with CDN caching
-
-### Internal Dashboard
-- **Frontend**: React Select (faster development)
-- **Backend**: Database full-text search
-- **API**: Simple REST
-
-### SaaS Application
-- **Frontend**: Downshift with custom design
-- **Backend**: MeiliSearch or Typesense
-- **API**: REST with rate limiting
\ No newline at end of file
diff --git a/.claude/skills/implementing-search-filter/references/performance-optimization.md b/.claude/skills/implementing-search-filter/references/performance-optimization.md
deleted file mode 100644
index d64163e45..000000000
--- a/.claude/skills/implementing-search-filter/references/performance-optimization.md
+++ /dev/null
@@ -1,733 +0,0 @@
-# Search Performance Optimization
-
-
-## Table of Contents
-
-- [Frontend Performance](#frontend-performance)
-  - [Debouncing and Throttling](#debouncing-and-throttling)
-  - [Request Cancellation](#request-cancellation)
-  - [Result Caching](#result-caching)
-- [Backend Performance](#backend-performance)
-  - [Database Index Optimization](#database-index-optimization)
-  - [Query Optimization Patterns](#query-optimization-patterns)
-  - [Elasticsearch Performance Tuning](#elasticsearch-performance-tuning)
-- [Monitoring and Metrics](#monitoring-and-metrics)
-  - [Performance Tracking](#performance-tracking)
-
-## Frontend Performance
-
-### Debouncing and Throttling
-```typescript
-import { useCallback, useRef } from 'react';
-
-/**
- * Custom debounce hook with cancellation
- */
-export function useDebounce<T extends (...args: any[]) => any>(
-  callback: T,
-  delay: number
-): [T, () => void] {
-  const timeoutRef = useRef<ReturnType<typeof setTimeout>>();
-
-  const cancel = useCallback(() => {
-    if (timeoutRef.current) {
-      clearTimeout(timeoutRef.current);
-    }
-  }, []);
-
-  const debouncedCallback = useCallback(
-    (...args: Parameters<T>) => {
-      cancel();
-      timeoutRef.current = setTimeout(() => {
-        callback(...args);
-      }, delay);
-    },
-    [callback, delay, cancel]
-  ) as T;
-
-  return [debouncedCallback, cancel];
-}
-
-/**
- * Custom throttle hook
- */
-export function useThrottle<T extends (...args: any[]) => any>(
-  callback: T,
-  limit: number
-): T {
-  const inThrottle = useRef(false);
-
-  const throttledCallback = useCallback(
-    (...args: Parameters<T>) => {
-      if (!inThrottle.current) {
-        callback(...args);
-        inThrottle.current = true;
-        setTimeout(() => {
-          inThrottle.current = false;
-        }, limit);
-      }
-    },
-    [callback, limit]
-  ) as T;
-
-  return throttledCallback;
-}
-
-// Adaptive debouncing based on input speed
-export function useAdaptiveDebounce(
-  callback: (value: string) => void,
-  minDelay = 200,
-  maxDelay = 500
-) {
-  const lastInputTime = useRef(Date.now());
-  const inputSpeed = useRef<number[]>([]);
-  const timeoutRef = useRef<ReturnType<typeof setTimeout>>();
-
-  const calculateDelay = () => {
-    if (inputSpeed.current.length < 2) return maxDelay;
-
-    const avgSpeed = inputSpeed.current.reduce((a, b) => a + b, 0) / inputSpeed.current.length;
-
-    // Faster typing = shorter delay
-    if (avgSpeed < 100) return minDelay;
-    if (avgSpeed < 200) return (minDelay + maxDelay) / 2;
-    return maxDelay;
-  };
-
-  return useCallback((value: string) => {
-    const now = Date.now();
-    const timeSinceLastInput = now - lastInputTime.current;
-    lastInputTime.current = now;
-
-    // Track input speed
-    inputSpeed.current.push(timeSinceLastInput);
-    if (inputSpeed.current.length > 5) {
-      inputSpeed.current.shift();
-    }
-
-    // Clear existing timeout
-    if (timeoutRef.current) {
-      clearTimeout(timeoutRef.current);
-    }
-
-    // Set new timeout with adaptive delay
-    const delay = calculateDelay();
-    timeoutRef.current = setTimeout(() => {
-      callback(value);
-    }, delay);
-  }, [callback, minDelay, maxDelay]);
-}
-```
-
-### Request Cancellation
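Before wiring `AbortController` into a full request manager, the underlying "latest request wins" rule can be sketched in isolation. This is an illustrative sketch — `SearchGeneration` is a hypothetical name, not a library API — but it captures the invariant the cancellation code must enforce: a response is applied only if no newer search has started since its request was issued.

```typescript
// Hypothetical sketch: a generation counter that lets the UI ignore
// responses from superseded searches. AbortController applies the same
// idea at the network level by actually aborting the stale request.
class SearchGeneration {
  private current = 0;

  // Begin a new search; every earlier generation becomes stale.
  begin(): number {
    return ++this.current;
  }

  // Should a response carrying `token` still be applied?
  isCurrent(token: number): boolean {
    return token === this.current;
  }
}

// Example: the user types "sho", then "shoes" before the first
// response arrives — only the second response should be applied.
const gen = new SearchGeneration();
const first = gen.begin();   // request for "sho"
const second = gen.begin();  // request for "shoes"

console.log(gen.isCurrent(first));  // false: superseded, discard
console.log(gen.isCurrent(second)); // true: latest wins
```

Cancelling at the network level (as below) additionally frees the connection, but even without it, a check like `isCurrent` prevents stale results from overwriting newer ones.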
-```typescript
-class SearchRequestManager {
-  private abortController: AbortController | null = null;
-
-  /**
-   * Execute search with automatic cancellation of previous requests
-   */
-  async search(query: string, filters: any): Promise<any | null> {
-    // Cancel previous request
-    this.cancel();
-
-    // Create new abort controller
-    this.abortController = new AbortController();
-
-    try {
-      const response = await fetch('/api/search', {
-        method: 'POST',
-        headers: { 'Content-Type': 'application/json' },
-        body: JSON.stringify({ query, filters }),
-        signal: this.abortController.signal
-      });
-
-      if (!response.ok) {
-        throw new Error(`Search failed: ${response.status}`);
-      }
-
-      return await response.json();
-    } catch (error) {
-      if (error.name === 'AbortError') {
-        // Request was cancelled, return null
-        return null;
-      }
-      throw error;
-    }
-  }
-
-  /**
-   * Cancel current request
-   */
-  cancel(): void {
-    if (this.abortController) {
-      this.abortController.abort();
-      this.abortController = null;
-    }
-  }
-}
-
-// React hook for request management
-export function useSearchRequest() {
-  const requestManager = useRef(new SearchRequestManager());
-
-  useEffect(() => {
-    return () => {
-      // Cancel any pending request on unmount
-      requestManager.current.cancel();
-    };
-  }, []);
-
-  const search = useCallback(async (query: string, filters: any) => {
-    return requestManager.current.search(query, filters);
-  }, []);
-
-  const cancel = useCallback(() => {
-    requestManager.current.cancel();
-  }, []);
-
-  return { search, cancel };
-}
-```
-
-### Result Caching
-```typescript
-interface CacheEntry<T> {
-  data: T;
-  timestamp: number;
-  hits: number;
-}
-
-class SearchCache<T> {
-  private cache = new Map<string, CacheEntry<T>>();
-  private maxSize: number;
-  private ttl: number; // Time to live in milliseconds
-
-  constructor(maxSize = 50, ttl = 5 * 60 * 1000) {
-    this.maxSize = maxSize;
-    this.ttl = ttl;
-  }
-
-  /**
-   * Generate cache key from search parameters
-   */
-  private getCacheKey(params: any): string {
-    return JSON.stringify(params, Object.keys(params).sort());
-  }
-
-  /**
-   * Get cached result if available and not expired
-   */
-  get(params: any): T | null {
-    const key = this.getCacheKey(params);
-    const entry = this.cache.get(key);
-
-    if (!entry) return null;
-
-    // Check if expired
-    if (Date.now() - entry.timestamp > this.ttl) {
-      this.cache.delete(key);
-      return null;
-    }
-
-    // Update hit count for LRU
-    entry.hits++;
-    return entry.data;
-  }
-
-  /**
-   * Store result in cache
-   */
-  set(params: any, data: T): void {
-    const key = this.getCacheKey(params);
-
-    // Check cache size and evict if needed
-    if (this.cache.size >= this.maxSize && !this.cache.has(key)) {
-      this.evictLRU();
-    }
-
-    this.cache.set(key, {
-      data,
-      timestamp: Date.now(),
-      hits: 0
-    });
-  }
-
-  /**
-   * Evict least recently used entry
-   */
-  private evictLRU(): void {
-    let minHits = Infinity;
-    let lruKey = '';
-
-    for (const [key, entry] of this.cache) {
-      if (entry.hits < minHits) {
-        minHits = entry.hits;
-        lruKey = key;
-      }
-    }
-
-    if (lruKey) {
-      this.cache.delete(lruKey);
-    }
-  }
-
-  /**
-   * Clear entire cache
-   */
-  clear(): void {
-    this.cache.clear();
-  }
-
-  /**
-   * Get cache statistics
-   */
-  getStats() {
-    return {
-      size: this.cache.size,
-      maxSize: this.maxSize,
-      entries: Array.from(this.cache.entries()).map(([key, entry]) => ({
-        key,
-        age: Date.now() - entry.timestamp,
-        hits: entry.hits
-      }))
-    };
-  }
-}
-
-// React hook for cached search
-export function useCachedSearch() {
-  const cache = useRef(new SearchCache<any>());
-  const [stats, setStats] = useState(cache.current.getStats());
-
-  const search = useCallback(async <T>(
-    params: any,
-    fetcher: () => Promise<T>
-  ): Promise<T> => {
-    // Check cache first
-    const cached = cache.current.get(params);
-    if (cached !== null) {
-      console.log('Cache hit for:', params);
-      return cached;
-    }
-
-    // Fetch and cache
-    console.log('Cache miss, fetching:', params);
-    const result = await fetcher();
-    cache.current.set(params, result);
-
-    //
Update stats - setStats(cache.current.getStats()); - - return result; - }, []); - - const clearCache = useCallback(() => { - cache.current.clear(); - setStats(cache.current.getStats()); - }, []); - - return { search, clearCache, stats }; -} -``` - -## Backend Performance - -### Database Index Optimization -```sql --- PostgreSQL indexes for search optimization - --- Single column indexes -CREATE INDEX idx_products_title ON products USING gin(to_tsvector('english', title)); -CREATE INDEX idx_products_description ON products USING gin(to_tsvector('english', description)); -CREATE INDEX idx_products_category ON products(category); -CREATE INDEX idx_products_brand ON products(brand); -CREATE INDEX idx_products_price ON products(price); -CREATE INDEX idx_products_created_at ON products(created_at DESC); -CREATE INDEX idx_products_rating ON products(rating DESC); - --- Composite indexes for common filter combinations -CREATE INDEX idx_products_category_price ON products(category, price); -CREATE INDEX idx_products_brand_price ON products(brand, price); -CREATE INDEX idx_products_category_brand ON products(category, brand); - --- Partial indexes for common conditions -CREATE INDEX idx_products_in_stock ON products(id) WHERE in_stock = true; -CREATE INDEX idx_products_featured ON products(id) WHERE featured = true; -CREATE INDEX idx_products_on_sale ON products(id) WHERE sale_price IS NOT NULL; - --- Full-text search index -CREATE INDEX idx_products_search_vector ON products -USING gin(( - setweight(to_tsvector('english', coalesce(title, '')), 'A') || - setweight(to_tsvector('english', coalesce(description, '')), 'B') || - setweight(to_tsvector('english', coalesce(tags, '')), 'C') -)); - --- BRIN index for time-series data -CREATE INDEX idx_products_created_brin ON products USING brin(created_at); -``` - -### Query Optimization Patterns -```python -from sqlalchemy import select, func, and_, or_ -from sqlalchemy.orm import selectinload, joinedload - -class 
OptimizedSearchQueries: - """Optimized database query patterns.""" - - @staticmethod - def search_with_pagination(session, query, filters, page=1, per_page=20): - """Optimized search with count query separation.""" - - # Build base query - base_query = session.query(Product) - - # Apply filters - if query: - base_query = base_query.filter( - or_( - Product.title.ilike(f'%{query}%'), - Product.description.ilike(f'%{query}%') - ) - ) - - if filters.get('category'): - base_query = base_query.filter(Product.category.in_(filters['category'])) - - if filters.get('min_price'): - base_query = base_query.filter(Product.price >= filters['min_price']) - - # Separate count query (without joins/eager loading) - count_query = base_query.with_entities(func.count(Product.id)) - total = count_query.scalar() - - # Main query with eager loading - results_query = base_query.options( - selectinload(Product.images), - selectinload(Product.reviews) - ) - - # Apply pagination - offset = (page - 1) * per_page - results = results_query.offset(offset).limit(per_page).all() - - return { - 'results': results, - 'total': total, - 'page': page, - 'per_page': per_page - } - - @staticmethod - def get_facet_counts(session, base_filters=None): - """Get facet counts with single query using window functions.""" - - # Use CTE for base filtered results - base_query = session.query(Product.id) - - if base_filters: - # Apply base filters - pass - - base_cte = base_query.cte('base_products') - - # Get all facets in single query using UNION ALL - facets_query = session.query( - literal('category').label('facet_type'), - Product.category.label('facet_value'), - func.count(Product.id).label('count') - ).join( - base_cte, Product.id == base_cte.c.id - ).group_by(Product.category) - - # Add more facets - facets_query = facets_query.union_all( - session.query( - literal('brand'), - Product.brand, - func.count(Product.id) - ).join( - base_cte, Product.id == base_cte.c.id - ).group_by(Product.brand) - ) - - 
results = facets_query.all() - - # Group by facet type - facets = {} - for facet_type, facet_value, count in results: - if facet_type not in facets: - facets[facet_type] = [] - facets[facet_type].append({ - 'value': facet_value, - 'count': count - }) - - return facets -``` - -### Elasticsearch Performance Tuning -```python -class ElasticsearchOptimization: - """Elasticsearch performance optimization strategies.""" - - @staticmethod - def create_optimized_mapping(): - """Create mapping optimized for search performance.""" - return { - 'settings': { - 'number_of_shards': 2, - 'number_of_replicas': 1, - 'index': { - 'refresh_interval': '5s', # Reduce refresh frequency - 'max_result_window': 10000, # Limit deep pagination - 'max_inner_result_window': 100, - 'search': { - 'slowlog': { - 'threshold': { - 'query': { - 'warn': '10s', - 'info': '5s' - } - } - } - } - }, - 'analysis': { - 'analyzer': { - 'search_analyzer': { - 'type': 'custom', - 'tokenizer': 'standard', - 'filter': [ - 'lowercase', - 'stop', - 'snowball', - 'synonym_filter' - ] - } - }, - 'filter': { - 'synonym_filter': { - 'type': 'synonym', - 'synonyms': [ - 'laptop,notebook', - 'phone,mobile,cell', - 'tv,television' - ] - } - } - } - }, - 'mappings': { - 'properties': { - 'title': { - 'type': 'text', - 'analyzer': 'search_analyzer', - 'search_analyzer': 'search_analyzer', - 'fields': { - 'keyword': { - 'type': 'keyword', - 'ignore_above': 256 - }, - 'ngram': { - 'type': 'text', - 'analyzer': 'ngram_analyzer' - } - } - }, - 'description': { - 'type': 'text', - 'analyzer': 'search_analyzer', - 'index_options': 'offsets' # For highlighting - }, - 'category': { - 'type': 'keyword', - 'eager_global_ordinals': True # For aggregations - }, - 'price': { - 'type': 'scaled_float', - 'scaling_factor': 100 # Store as cents - }, - 'suggest': { - 'type': 'completion', # For autocomplete - 'analyzer': 'simple' - } - } - } - } - - @staticmethod - def search_with_request_cache(es, query, use_cache=True): - """Use request 
cache for aggregations."""
-        body = {
-            'query': query,
-            'aggs': {
-                'categories': {
-                    'terms': {
-                        'field': 'category',
-                        'size': 20
-                    }
-                }
-            },
-            'request_cache': use_cache  # Enable request cache
-        }
-
-        return es.search(index='products', body=body)
-
-    @staticmethod
-    def bulk_index_optimized(es, documents, batch_size=500):
-        """Optimized bulk indexing."""
-        from elasticsearch.helpers import bulk, parallel_bulk
-
-        def generate_actions():
-            for doc in documents:
-                yield {
-                    '_index': 'products',
-                    '_id': doc['id'],
-                    '_source': doc,
-                    '_op_type': 'index'  # Use 'create' to avoid updates
-                }
-
-        # Use parallel bulk for large datasets
-        if len(documents) > 10000:
-            for success, info in parallel_bulk(
-                es,
-                generate_actions(),
-                chunk_size=batch_size,
-                thread_count=4,
-                raise_on_error=False
-            ):
-                if not success:
-                    print(f"Failed to index: {info}")
-        else:
-            bulk(es, generate_actions(), chunk_size=batch_size)
-```
-
-## Monitoring and Metrics
-
-### Performance Tracking
-```typescript
-class PerformanceMonitor {
-  private metrics: Map<string, number[]> = new Map();
-
-  /**
-   * Measure operation performance
-   */
-  async measure<T>(
-    operation: string,
-    fn: () => Promise<T>
-  ): Promise<T> {
-    const start = performance.now();
-
-    try {
-      const result = await fn();
-      const duration = performance.now() - start;
-
-      this.recordMetric(operation, duration);
-
-      // Log slow operations
-      if (duration > 1000) {
-        console.warn(`Slow operation: ${operation} took ${duration.toFixed(2)}ms`);
-      }
-
-      return result;
-    } catch (error) {
-      const duration = performance.now() - start;
-      this.recordMetric(`${operation}_error`, duration);
-      throw error;
-    }
-  }
-
-  /**
-   * Record metric
-   */
-  private recordMetric(operation: string, duration: number): void {
-    if (!this.metrics.has(operation)) {
-      this.metrics.set(operation, []);
-    }
-
-    const values = this.metrics.get(operation)!;
-    values.push(duration);
-
-    // Keep only last 100 measurements
-    if (values.length > 100) {
-      values.shift();
-    }
-  }
-
-  /**
- * Get performance statistics - */ - getStats(operation?: string): any { - if (operation) { - const values = this.metrics.get(operation); - if (!values || values.length === 0) { - return null; - } - - return this.calculateStats(values); - } - - // Get stats for all operations - const allStats: any = {}; - for (const [op, values] of this.metrics) { - allStats[op] = this.calculateStats(values); - } - - return allStats; - } - - /** - * Calculate statistics from values - */ - private calculateStats(values: number[]) { - const sorted = [...values].sort((a, b) => a - b); - const sum = values.reduce((a, b) => a + b, 0); - - return { - count: values.length, - mean: sum / values.length, - median: sorted[Math.floor(sorted.length / 2)], - min: sorted[0], - max: sorted[sorted.length - 1], - p95: sorted[Math.floor(sorted.length * 0.95)], - p99: sorted[Math.floor(sorted.length * 0.99)] - }; - } - - /** - * Send metrics to analytics service - */ - report(): void { - const stats = this.getStats(); - - // Send to analytics service - if (typeof window !== 'undefined' && window.gtag) { - Object.entries(stats).forEach(([operation, metrics]: [string, any]) => { - window.gtag('event', 'performance', { - event_category: 'search', - event_label: operation, - value: Math.round(metrics.mean) - }); - }); - } - } -} - -// Usage -const monitor = new PerformanceMonitor(); - -export async function searchWithMonitoring(query: string, filters: any) { - return monitor.measure('search_request', async () => { - const response = await fetch('/api/search', { - method: 'POST', - body: JSON.stringify({ query, filters }) - }); - - return monitor.measure('search_parse', async () => { - return response.json(); - }); - }); -} -``` \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/references/query-parameter-management.md b/.claude/skills/implementing-search-filter/references/query-parameter-management.md deleted file mode 100644 index 1b71a9acc..000000000 --- 
a/.claude/skills/implementing-search-filter/references/query-parameter-management.md +++ /dev/null @@ -1,679 +0,0 @@ -# Query Parameter Management - - -## Table of Contents - -- [URL State Synchronization](#url-state-synchronization) - - [React Router Integration](#react-router-integration) - - [Next.js URL Management](#nextjs-url-management) -- [Complex Query Compression](#complex-query-compression) - - [Base64 Encoding for Complex Filters](#base64-encoding-for-complex-filters) -- [Shareable Search URLs](#shareable-search-urls) - - [Creating Shareable Links](#creating-shareable-links) -- [History Management](#history-management) - - [Search History with Local Storage](#search-history-with-local-storage) -- [Deep Linking Support](#deep-linking-support) - - [Handling Deep Links](#handling-deep-links) -- [Validation and Sanitization](#validation-and-sanitization) - - [URL Parameter Validation](#url-parameter-validation) - -## URL State Synchronization - -### React Router Integration -```tsx -import { useSearchParams, useNavigate } from 'react-router-dom'; -import { useEffect, useState } from 'react'; - -interface FilterState { - query?: string; - categories?: string[]; - minPrice?: number; - maxPrice?: number; - sortBy?: string; - page?: number; -} - -export function useUrlFilters() { - const [searchParams, setSearchParams] = useSearchParams(); - const navigate = useNavigate(); - - // Parse URL parameters to filter state - const getFiltersFromUrl = (): FilterState => { - const filters: FilterState = {}; - - // Parse query - const query = searchParams.get('q'); - if (query) filters.query = query; - - // Parse array parameters - const categories = searchParams.getAll('category'); - if (categories.length > 0) filters.categories = categories; - - // Parse number parameters - const minPrice = searchParams.get('min_price'); - if (minPrice) filters.minPrice = parseFloat(minPrice); - - const maxPrice = searchParams.get('max_price'); - if (maxPrice) filters.maxPrice = 
parseFloat(maxPrice); - - // Parse other parameters - const sortBy = searchParams.get('sort'); - if (sortBy) filters.sortBy = sortBy; - - const page = searchParams.get('page'); - if (page) filters.page = parseInt(page, 10); - - return filters; - }; - - // Update URL with new filters - const setFiltersToUrl = (filters: FilterState, replace = false) => { - const params = new URLSearchParams(); - - // Add query - if (filters.query) { - params.set('q', filters.query); - } - - // Add array parameters - if (filters.categories && filters.categories.length > 0) { - filters.categories.forEach(cat => params.append('category', cat)); - } - - // Add number parameters - if (filters.minPrice !== undefined) { - params.set('min_price', filters.minPrice.toString()); - } - - if (filters.maxPrice !== undefined) { - params.set('max_price', filters.maxPrice.toString()); - } - - // Add other parameters - if (filters.sortBy) { - params.set('sort', filters.sortBy); - } - - if (filters.page && filters.page > 1) { - params.set('page', filters.page.toString()); - } - - // Update URL - if (replace) { - navigate({ search: params.toString() }, { replace: true }); - } else { - setSearchParams(params); - } - }; - - // Update single filter - const updateFilter = (key: keyof FilterState, value: any) => { - const currentFilters = getFiltersFromUrl(); - const newFilters = { ...currentFilters }; - - if (value === null || value === undefined || - (Array.isArray(value) && value.length === 0)) { - delete newFilters[key]; - } else { - newFilters[key] = value; - } - - // Reset page when filters change - if (key !== 'page') { - delete newFilters.page; - } - - setFiltersToUrl(newFilters); - }; - - // Clear all filters - const clearFilters = () => { - setSearchParams(new URLSearchParams()); - }; - - return { - filters: getFiltersFromUrl(), - updateFilter, - setFilters: setFiltersToUrl, - clearFilters - }; -} -``` - -### Next.js URL Management -```tsx -import { useRouter } from 'next/router'; -import { 
ParsedUrlQuery } from 'querystring'; - -export function useNextUrlFilters() { - const router = useRouter(); - - // Parse query object to filter state - const parseQuery = (query: ParsedUrlQuery): FilterState => { - const filters: FilterState = {}; - - if (query.q && typeof query.q === 'string') { - filters.query = query.q; - } - - if (query.category) { - filters.categories = Array.isArray(query.category) - ? query.category - : [query.category]; - } - - if (query.min_price && typeof query.min_price === 'string') { - filters.minPrice = parseFloat(query.min_price); - } - - if (query.max_price && typeof query.max_price === 'string') { - filters.maxPrice = parseFloat(query.max_price); - } - - if (query.sort && typeof query.sort === 'string') { - filters.sortBy = query.sort; - } - - if (query.page && typeof query.page === 'string') { - filters.page = parseInt(query.page, 10); - } - - return filters; - }; - - // Build query object from filters - const buildQuery = (filters: FilterState): ParsedUrlQuery => { - const query: ParsedUrlQuery = {}; - - if (filters.query) query.q = filters.query; - if (filters.categories && filters.categories.length > 0) { - query.category = filters.categories; - } - if (filters.minPrice !== undefined) { - query.min_price = filters.minPrice.toString(); - } - if (filters.maxPrice !== undefined) { - query.max_price = filters.maxPrice.toString(); - } - if (filters.sortBy) query.sort = filters.sortBy; - if (filters.page && filters.page > 1) { - query.page = filters.page.toString(); - } - - return query; - }; - - // Update URL with new filters - const updateFilters = (filters: FilterState, options?: { shallow?: boolean }) => { - const query = buildQuery(filters); - - router.push( - { - pathname: router.pathname, - query - }, - undefined, - { shallow: options?.shallow ?? 
true } - ); - }; - - return { - filters: parseQuery(router.query), - updateFilters, - clearFilters: () => updateFilters({}) - }; -} -``` - -## Complex Query Compression - -### Base64 Encoding for Complex Filters -```typescript -interface ComplexFilter { - query?: string; - filters?: { - [key: string]: any; - }; - advanced?: { - must?: string[]; - should?: string[]; - mustNot?: string[]; - }; - dateRange?: { - start: Date; - end: Date; - }; -} - -class QueryCompressor { - /** - * Compress complex filter object to URL-safe string - */ - static compress(filters: ComplexFilter): string { - try { - // Convert to JSON string - const jsonString = JSON.stringify(filters); - - // Compress using base64 - const base64 = btoa(encodeURIComponent(jsonString)); - - // Make URL-safe - return base64 - .replace(/\+/g, '-') - .replace(/\//g, '_') - .replace(/=/g, ''); - } catch (error) { - console.error('Failed to compress filters:', error); - return ''; - } - } - - /** - * Decompress URL string back to filter object - */ - static decompress(compressed: string): ComplexFilter | null { - try { - // Restore base64 padding - const padding = '='.repeat((4 - (compressed.length % 4)) % 4); - const base64 = compressed - .replace(/-/g, '+') - .replace(/_/g, '/') - + padding; - - // Decode from base64 - const jsonString = decodeURIComponent(atob(base64)); - - // Parse JSON - return JSON.parse(jsonString); - } catch (error) { - console.error('Failed to decompress filters:', error); - return null; - } - } -} - -// Usage with React hook -export function useCompressedFilters() { - const [searchParams, setSearchParams] = useSearchParams(); - - const getFilters = (): ComplexFilter => { - const compressed = searchParams.get('f'); - if (!compressed) return {}; - - return QueryCompressor.decompress(compressed) || {}; - }; - - const setFilters = (filters: ComplexFilter) => { - const compressed = QueryCompressor.compress(filters); - const params = new URLSearchParams(); - - if (compressed) { - 
params.set('f', compressed); - } - - setSearchParams(params); - }; - - return { filters: getFilters(), setFilters }; -} -``` - -## Shareable Search URLs - -### Creating Shareable Links -```tsx -interface ShareableSearchProps { - filters: FilterState; - baseUrl?: string; -} - -export function ShareableSearch({ filters, baseUrl = window.location.origin }: ShareableSearchProps) { - const [shareUrl, setShareUrl] = useState(''); - const [copied, setCopied] = useState(false); - - // Generate shareable URL - const generateShareUrl = () => { - const params = new URLSearchParams(); - - // Add all active filters - Object.entries(filters).forEach(([key, value]) => { - if (value !== undefined && value !== null) { - if (Array.isArray(value)) { - value.forEach(v => params.append(key, v.toString())); - } else { - params.set(key, value.toString()); - } - } - }); - - const url = `${baseUrl}/search?${params.toString()}`; - setShareUrl(url); - return url; - }; - - // Copy to clipboard - const copyToClipboard = async () => { - const url = generateShareUrl(); - - try { - await navigator.clipboard.writeText(url); - setCopied(true); - setTimeout(() => setCopied(false), 2000); - } catch (error) { - console.error('Failed to copy:', error); - } - }; - - // Share via Web Share API - const share = async () => { - const url = generateShareUrl(); - - if (navigator.share) { - try { - await navigator.share({ - title: 'Search Results', - text: 'Check out these search results', - url - }); - } catch (error) { - console.error('Share failed:', error); - } - } else { - copyToClipboard(); - } - }; - - return ( -
    - <div className="shareable-search"> - <button onClick={copyToClipboard}> - {copied ? 'Copied!' : 'Copy link'} - </button> - <button onClick={share}>Share</button> - {shareUrl && ( - <input type="text" value={shareUrl} readOnly aria-label="Shareable link" /> - )} - </div> -
    - ); -} -``` - -## History Management - -### Search History with Local Storage -```tsx -interface SearchHistoryEntry { - id: string; - query: string; - filters: FilterState; - timestamp: Date; - resultCount?: number; -} - -class SearchHistory { - private static readonly STORAGE_KEY = 'search_history'; - private static readonly MAX_ENTRIES = 20; - - /** - * Save search to history - */ - static save(entry: Omit<SearchHistoryEntry, 'id' | 'timestamp'>): void { - const history = this.getAll(); - - const newEntry: SearchHistoryEntry = { - ...entry, - id: Date.now().toString(), - timestamp: new Date() - }; - - // Add to beginning of array - history.unshift(newEntry); - - // Limit history size - if (history.length > this.MAX_ENTRIES) { - history.pop(); - } - - // Save to local storage - localStorage.setItem(this.STORAGE_KEY, JSON.stringify(history)); - } - - /** - * Get all history entries - */ - static getAll(): SearchHistoryEntry[] { - try { - const stored = localStorage.getItem(this.STORAGE_KEY); - if (!stored) return []; - - const history = JSON.parse(stored); - // Parse dates - return history.map((entry: any) => ({ - ...entry, - timestamp: new Date(entry.timestamp) - })); - } catch (error) { - console.error('Failed to load search history:', error); - return []; - } - } - - /** - * Get recent searches - */ - static getRecent(count = 5): SearchHistoryEntry[] { - return this.getAll().slice(0, count); - } - - /** - * Clear all history - */ - static clear(): void { - localStorage.removeItem(this.STORAGE_KEY); - } - - /** - * Remove specific entry - */ - static remove(id: string): void { - const history = this.getAll().filter(entry => entry.id !== id); - localStorage.setItem(this.STORAGE_KEY, JSON.stringify(history)); - } -} - -// React hook for search history -export function useSearchHistory() { - const [history, setHistory] = useState<SearchHistoryEntry[]>([]); - - useEffect(() => { - setHistory(SearchHistory.getAll()); - }, []); - - const saveSearch = (query: string, filters: FilterState, resultCount?: number) => { - 
SearchHistory.save({ query, filters, resultCount }); - setHistory(SearchHistory.getAll()); - }; - - const clearHistory = () => { - SearchHistory.clear(); - setHistory([]); - }; - - const removeEntry = (id: string) => { - SearchHistory.remove(id); - setHistory(SearchHistory.getAll()); - }; - - return { - history, - recentSearches: SearchHistory.getRecent(), - saveSearch, - clearHistory, - removeEntry - }; -} -``` - -## Deep Linking Support - -### Handling Deep Links -```tsx -import { useEffect } from 'react'; -import { useLocation } from 'react-router-dom'; - -export function useDeepLinking(onSearch: (filters: FilterState) => void) { - const location = useLocation(); - - useEffect(() => { - // Parse deep link parameters on mount - const params = new URLSearchParams(location.search); - - if (params.toString()) { - const filters: FilterState = {}; - - // Parse all parameters - params.forEach((value, key) => { - switch (key) { - case 'q': - filters.query = value; - break; - case 'category': - if (!filters.categories) filters.categories = []; - filters.categories.push(value); - break; - case 'min_price': - filters.minPrice = parseFloat(value); - break; - case 'max_price': - filters.maxPrice = parseFloat(value); - break; - case 'sort': - filters.sortBy = value; - break; - case 'page': - filters.page = parseInt(value, 10); - break; - } - }); - - // Execute search with deep link parameters - onSearch(filters); - } - }, [location.search, onSearch]); -} -``` - -## Validation and Sanitization - -### URL Parameter Validation -```typescript -class UrlValidator { - /** - * Validate and sanitize search query - */ - static validateQuery(query: string): string { - // Remove special characters that could break URLs - const sanitized = query - .replace(/[<>]/g, '') // Remove HTML tags - .replace(/[^\w\s-.,]/g, '') // Keep only safe characters - .trim() - .substring(0, 200); // Limit length - - return sanitized; - } - - /** - * Validate numeric range - */ - static validateRange(min?: 
number, max?: number): { min?: number; max?: number } { - const result: { min?: number; max?: number } = {}; - - if (min !== undefined && !isNaN(min) && min >= 0) { - result.min = min; - } - - if (max !== undefined && !isNaN(max) && max >= 0) { - result.max = max; - } - - // Ensure min <= max - if (result.min !== undefined && result.max !== undefined && result.min > result.max) { - [result.min, result.max] = [result.max, result.min]; - } - - return result; - } - - /** - * Validate sort parameter - */ - static validateSort(sort: string, allowedValues: string[]): string | undefined { - return allowedValues.includes(sort) ? sort : undefined; - } - - /** - * Validate page number - */ - static validatePage(page: any): number { - const parsed = parseInt(page, 10); - return isNaN(parsed) || parsed < 1 ? 1 : Math.min(parsed, 100); - } -} - -// Usage in component -export function useValidatedUrlFilters() { - const [searchParams, setSearchParams] = useSearchParams(); - - const getValidatedFilters = (): FilterState => { - const filters: FilterState = {}; - - // Validate query - const query = searchParams.get('q'); - if (query) { - filters.query = UrlValidator.validateQuery(query); - } - - // Validate price range - const minPrice = searchParams.get('min_price'); - const maxPrice = searchParams.get('max_price'); - const range = UrlValidator.validateRange( - minPrice ? parseFloat(minPrice) : undefined, - maxPrice ? 
parseFloat(maxPrice) : undefined - ); - - if (range.min !== undefined) filters.minPrice = range.min; - if (range.max !== undefined) filters.maxPrice = range.max; - - // Validate sort - const sort = searchParams.get('sort'); - if (sort) { - const validSort = UrlValidator.validateSort(sort, [ - 'relevance', - 'price_asc', - 'price_desc', - 'newest', - 'rating' - ]); - if (validSort) filters.sortBy = validSort; - } - - // Validate page - const page = searchParams.get('page'); - if (page) { - filters.page = UrlValidator.validatePage(page); - } - - return filters; - }; - - return getValidatedFilters(); -} -``` \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/references/search-input-patterns.md b/.claude/skills/implementing-search-filter/references/search-input-patterns.md deleted file mode 100644 index 6ae469966..000000000 --- a/.claude/skills/implementing-search-filter/references/search-input-patterns.md +++ /dev/null @@ -1,436 +0,0 @@ -# Search Input Patterns - - -## Table of Contents - -- [Basic Search Input](#basic-search-input) - - [Minimal Implementation](#minimal-implementation) -- [Advanced Search Input](#advanced-search-input) - - [With Clear Button and Loading State](#with-clear-button-and-loading-state) -- [Search with Keyboard Shortcuts](#search-with-keyboard-shortcuts) - - [Global Search Hotkey (Cmd/Ctrl + K)](#global-search-hotkey-cmdctrl-k) -- [Debouncing Strategies](#debouncing-strategies) - - [Custom Debounce Hook](#custom-debounce-hook) - - [Cancellable Search Requests](#cancellable-search-requests) -- [Search Input States](#search-input-states) - - [Visual States](#visual-states) -- [Mobile Search Patterns](#mobile-search-patterns) - - [Expandable Search](#expandable-search) - - [Full-Screen Search Modal](#full-screen-search-modal) -- [Accessibility Patterns](#accessibility-patterns) - - [ARIA Attributes](#aria-attributes) - - [Announcing Results](#announcing-results) -- [Performance Metrics](#performance-metrics) - - 
[Optimal Debounce Timing](#optimal-debounce-timing) - - [Search Latency Targets](#search-latency-targets) -- [Error Handling](#error-handling) - - [User-Friendly Error Messages](#user-friendly-error-messages) - -## Basic Search Input - -### Minimal Implementation -```tsx -import { useState, useCallback } from 'react'; -import { debounce } from 'lodash'; - -function SearchInput({ onSearch }) { - const [value, setValue] = useState(''); - - const debouncedSearch = useCallback( - debounce((query) => onSearch(query), 300), - [onSearch] - ); - - const handleChange = (e) => { - const newValue = e.target.value; - setValue(newValue); - debouncedSearch(newValue); - }; - - return ( -
    - <input - type="search" - value={value} - onChange={handleChange} - placeholder="Search..." - /> -
    - ); -} -``` - -## Advanced Search Input - -### With Clear Button and Loading State -```tsx -import { useState, useCallback } from 'react'; -import { Search, X, Loader2 } from 'lucide-react'; - -interface SearchInputProps { - onSearch: (query: string) => void; - isLoading?: boolean; - placeholder?: string; -} - -export function SearchInput({ - onSearch, - isLoading = false, - placeholder = "Search products..." -}: SearchInputProps) { - const [value, setValue] = useState(''); - const [isFocused, setIsFocused] = useState(false); - - const handleClear = () => { - setValue(''); - onSearch(''); - }; - - return ( -
    - <div className={isFocused ? 'search-wrapper focused' : 'search-wrapper'}> - <span className="search-icon"> - {isLoading ? ( - <Loader2 size={18} aria-hidden="true" /> - ) : ( - <Search size={18} aria-hidden="true" /> - )} - </span> - - <input - type="search" - value={value} - onChange={(e) => setValue(e.target.value)} - onFocus={() => setIsFocused(true)} - onBlur={() => setIsFocused(false)} - placeholder={placeholder} - className="search-input" - aria-label="Search" - aria-busy={isLoading} - /> - - {value && ( - <button type="button" onClick={handleClear} aria-label="Clear search"> - <X size={16} /> - </button> - )} - </div> - 
    - ); -} -``` - -## Search with Keyboard Shortcuts - -### Global Search Hotkey (Cmd/Ctrl + K) -```tsx -import { useEffect, useRef, useState } from 'react'; - -export function GlobalSearch() { - const inputRef = useRef<HTMLInputElement>(null); - const [isOpen, setIsOpen] = useState(false); - - useEffect(() => { - const handleKeyDown = (e: KeyboardEvent) => { - // Cmd+K (Mac) or Ctrl+K (Windows/Linux) - if ((e.metaKey || e.ctrlKey) && e.key === 'k') { - e.preventDefault(); - setIsOpen(true); - inputRef.current?.focus(); - } - - // Escape to close - if (e.key === 'Escape') { - setIsOpen(false); - } - }; - - window.addEventListener('keydown', handleKeyDown); - return () => window.removeEventListener('keydown', handleKeyDown); - }, []); - - if (!isOpen) return null; - - return ( -
    - <div className="global-search-modal"> - <input - ref={inputRef} - type="search" - placeholder="Search..." - aria-label="Search" - autoFocus - /> - </div> -
    - ); -} -``` - -## Debouncing Strategies - -### Custom Debounce Hook -```tsx -import { useEffect, useState } from 'react'; - -function useDebounce<T>(value: T, delay: number): T { - const [debouncedValue, setDebouncedValue] = useState<T>(value); - - useEffect(() => { - const handler = setTimeout(() => { - setDebouncedValue(value); - }, delay); - - return () => clearTimeout(handler); - }, [value, delay]); - - return debouncedValue; -} - -// Usage -function SearchComponent() { - const [searchTerm, setSearchTerm] = useState(''); - const debouncedSearchTerm = useDebounce(searchTerm, 300); - - useEffect(() => { - if (debouncedSearchTerm) { - // Perform search - performSearch(debouncedSearchTerm); - } - }, [debouncedSearchTerm]); -} -``` - -### Cancellable Search Requests -```tsx -import { useRef, useCallback } from 'react'; - -function useSearchAPI() { - const abortControllerRef = useRef<AbortController | null>(null); - - const search = useCallback(async (query: string) => { - // Cancel previous request - if (abortControllerRef.current) { - abortControllerRef.current.abort(); - } - - // Create new abort controller - abortControllerRef.current = new AbortController(); - - try { - const response = await fetch(`/api/search?q=${encodeURIComponent(query)}`, { - signal: abortControllerRef.current.signal - }); - - if (!response.ok) throw new Error('Search failed'); - - return await response.json(); - } catch (error) { - if (error.name === 'AbortError') { - // Request was cancelled, ignore - return null; - } - throw error; - } - }, []); - - return { search }; -} -``` - -## Search Input States - -### Visual States -```css -/* Base state */ -.search-input { - border: 1px solid var(--search-input-border); - background: var(--search-input-bg); - padding: var(--search-padding); - border-radius: var(--search-border-radius); - transition: all 0.2s ease; -} - -/* Focus state */ -.search-input:focus { - outline: none; - border-color: var(--search-input-focus-border); - box-shadow: 0 0 0 3px var(--search-input-focus-ring); -} - -/* 
Loading state */ -.search-input[aria-busy="true"] { - background-image: url('data:image/svg+xml;...'); - background-position: right 12px center; - background-repeat: no-repeat; -} - -/* Empty state */ -.search-input:placeholder-shown { - color: var(--search-placeholder-color); -} - -/* Error state */ -.search-input[aria-invalid="true"] { - border-color: var(--color-error); -} -``` - -## Mobile Search Patterns - -### Expandable Search -```tsx -function MobileSearch() { - const [isExpanded, setIsExpanded] = useState(false); - - return ( -
    - <div className="mobile-search"> - <button aria-label="Open search" onClick={() => setIsExpanded(true)}> - Search - </button> - {/* SearchOverlay: full-screen input component, not shown here */} - {isExpanded && ( - <SearchOverlay - onClose={() => setIsExpanded(false)} - /> - )} - </div> -
    - ); -} -``` - -### Full-Screen Search Modal -```tsx -function FullScreenSearch() { - const [isOpen, setIsOpen] = useState(false); - - return ( - <> - <button aria-label="Open search" onClick={() => setIsOpen(true)}> - Search - </button> - {isOpen && ( - <div className="search-modal" role="dialog" aria-modal="true"> - <header className="search-modal-header"> - <input type="search" placeholder="Search..." autoFocus /> - <button aria-label="Close search" onClick={() => setIsOpen(false)}> - Close - </button> - </header> - <div className="search-modal-body"> - {/* Recent searches, trending, etc */} - </div> - </div> - )} - </> - ); -} -``` - -## Accessibility Patterns - -### ARIA Attributes -```tsx - 
    - <div role="search"> - <label htmlFor="search-input" className="sr-only"> - Search - </label> - <input - id="search-input" - type="search" - role="combobox" - aria-autocomplete="list" - aria-controls="search-results" - aria-expanded={isOpen} - aria-describedby="search-instructions" - /> - <span id="search-instructions" className="sr-only"> - Type to search, use arrow keys to navigate suggestions - </span> - <ul id="search-results" role="listbox"> - {/* Results */} - </ul> - </div> - 
    -``` - -### Announcing Results -```tsx -function SearchResults({ results, query }) { - return ( -
    - <div className="search-results"> - <div role="status" aria-live="polite" className="sr-only"> - {results.length > 0 - ? `${results.length} results found for ${query}` - : `No results found for ${query}` - } - </div> - - <ul> - {results.map(result => ( - <li key={result.id}>{result.title}</li> - ))} - </ul> - </div> - 
    - ); -} -``` - -## Performance Metrics - -### Optimal Debounce Timing -- **Fast typists**: 200-250ms -- **Average typists**: 300-350ms -- **Slow typists**: 400-500ms -- **Mobile users**: 500-750ms - -### Search Latency Targets -- **Autocomplete**: <100ms -- **Instant search**: <200ms -- **Full search**: <500ms -- **Complex search**: <1000ms - -## Error Handling - -### User-Friendly Error Messages -```tsx -function SearchError({ error, query }) { - const getErrorMessage = () => { - switch(error.type) { - case 'NETWORK': - return 'Unable to search. Please check your connection.'; - case 'TIMEOUT': - return 'Search is taking longer than expected...'; - case 'INVALID_QUERY': - return 'Please enter a valid search term.'; - case 'NO_RESULTS': - return `No results found for "${query}". Try different keywords.`; - default: - return 'Something went wrong. Please try again.'; - } - }; - - return ( -
    - <div className="search-error" role="alert"> - {getErrorMessage()} - </div> - 
    - ); -} -``` \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/scripts/debounce_calculator.js b/.claude/skills/implementing-search-filter/scripts/debounce_calculator.js deleted file mode 100644 index da2affc26..000000000 --- a/.claude/skills/implementing-search-filter/scripts/debounce_calculator.js +++ /dev/null @@ -1,422 +0,0 @@ -#!/usr/bin/env node - -/** - * Calculate optimal debounce timing based on user behavior and network conditions. - * - * This script analyzes typing speed, network latency, and device type - * to recommend the best debounce delay for search inputs. - */ - -class DebounceCalculator { - constructor() { - // Default configurations - this.config = { - // Base delays by user type (milliseconds) - userTypeDelays: { - fastTypist: 200, // 60+ WPM - averageTypist: 300, // 40-60 WPM - slowTypist: 400, // <40 WPM - mobileUser: 500 // Touch typing - }, - - // Network latency adjustments - networkAdjustments: { - fast: 0, // <50ms latency - moderate: 50, // 50-150ms latency - slow: 100, // 150-300ms latency - verySlow: 200 // >300ms latency - }, - - // Search complexity adjustments - complexityAdjustments: { - simple: 0, // Basic text search - moderate: 50, // Filters + text - complex: 100, // Multiple filters, facets - heavy: 150 // Aggregations, analytics - } - }; - } - - /** - * Calculate typing speed from keystroke timings - * @param {number[]} keystrokeDelays - Array of delays between keystrokes (ms) - * @returns {number} Words per minute estimate - */ - calculateTypingSpeed(keystrokeDelays) { - if (!keystrokeDelays || keystrokeDelays.length < 5) { - return 40; // Default to average - } - - // Calculate average delay between keystrokes - const avgDelay = keystrokeDelays.reduce((a, b) => a + b, 0) / keystrokeDelays.length; - - // Convert to characters per minute (assuming average 5 chars per word) - const charsPerMinute = (60000 / avgDelay); - const wordsPerMinute = charsPerMinute / 5; - - return 
Math.round(wordsPerMinute); - } - - /** - * Determine user type based on typing speed - * @param {number} wpm - Words per minute - * @returns {string} User type category - */ - getUserType(wpm) { - if (wpm >= 60) return 'fastTypist'; - if (wpm >= 40) return 'averageTypist'; - return 'slowTypist'; - } - - /** - * Measure network latency category - * @param {number} latencyMs - Average network latency in milliseconds - * @returns {string} Network speed category - */ - getNetworkSpeed(latencyMs) { - if (latencyMs < 50) return 'fast'; - if (latencyMs < 150) return 'moderate'; - if (latencyMs < 300) return 'slow'; - return 'verySlow'; - } - - /** - * Determine search complexity - * @param {Object} searchParams - Search parameters object - * @returns {string} Complexity level - */ - getSearchComplexity(searchParams) { - const filterCount = Object.keys(searchParams.filters || {}).length; - const hasAggregations = searchParams.includeFacets || false; - const hasFullText = !!searchParams.query; - - if (filterCount > 5 || hasAggregations) return 'heavy'; - if (filterCount > 2 || (hasFullText && filterCount > 0)) return 'complex'; - if (filterCount > 0 || hasFullText) return 'moderate'; - return 'simple'; - } - - /** - * Calculate optimal debounce delay - * @param {Object} params - Calculation parameters - * @returns {Object} Recommended delays and analysis - */ - calculateOptimalDebounce(params = {}) { - const { - keystrokeDelays = [], - networkLatency = 100, - searchParams = {}, - deviceType = 'desktop', - adaptiveMode = true - } = params; - - // Calculate typing speed - const typingSpeed = this.calculateTypingSpeed(keystrokeDelays); - const userType = deviceType === 'mobile' ? 
'mobileUser' : this.getUserType(typingSpeed); - - // Get base delay - let baseDelay = this.config.userTypeDelays[userType]; - - // Adjust for network - const networkSpeed = this.getNetworkSpeed(networkLatency); - const networkAdjustment = this.config.networkAdjustments[networkSpeed]; - - // Adjust for search complexity - const complexity = this.getSearchComplexity(searchParams); - const complexityAdjustment = this.config.complexityAdjustments[complexity]; - - // Calculate final delay - let optimalDelay = baseDelay + networkAdjustment + complexityAdjustment; - - // Apply bounds - const minDelay = 150; // Minimum to avoid excessive requests - const maxDelay = 750; // Maximum to maintain responsiveness - - optimalDelay = Math.max(minDelay, Math.min(maxDelay, optimalDelay)); - - // Calculate adaptive delays for different scenarios - const adaptiveDelays = adaptiveMode ? { - initial: optimalDelay * 1.5, // First keystroke - subsequent: optimalDelay, // Following keystrokes - idle: optimalDelay * 2, // After pause in typing - burst: optimalDelay * 0.75 // Rapid typing detected - } : null; - - return { - optimal: Math.round(optimalDelay), - adaptive: adaptiveDelays, - analysis: { - typingSpeed, - userType, - networkSpeed, - complexity, - adjustments: { - network: networkAdjustment, - complexity: complexityAdjustment - } - }, - recommendations: this.getRecommendations(optimalDelay, params) - }; - } - - /** - * Generate recommendations based on analysis - * @param {number} delay - Calculated delay - * @param {Object} params - Input parameters - * @returns {string[]} List of recommendations - */ - getRecommendations(delay, params) { - const recommendations = []; - - if (delay > 500) { - recommendations.push('Consider implementing a loading indicator for better UX'); - recommendations.push('Cache recent searches to improve perceived performance'); - } - - if (params.networkLatency > 200) { - recommendations.push('Implement request cancellation for outdated queries'); - 
recommendations.push('Consider using a CDN or edge computing for search'); - } - - if (params.deviceType === 'mobile') { - recommendations.push('Implement touch-friendly autocomplete UI'); - recommendations.push('Consider voice search as an alternative'); - } - - const complexity = this.getSearchComplexity(params.searchParams || {}); - if (complexity === 'heavy') { - recommendations.push('Consider server-side caching for complex queries'); - recommendations.push('Implement progressive loading for large result sets'); - } - - return recommendations; - } - - /** - * Simulate typing patterns for testing - * @param {string} pattern - Typing pattern (fast, average, slow, burst) - * @returns {number[]} Array of keystroke delays - */ - simulateTypingPattern(pattern = 'average') { - const patterns = { - fast: () => 80 + Math.random() * 40, // 80-120ms - average: () => 150 + Math.random() * 100, // 150-250ms - slow: () => 300 + Math.random() * 200, // 300-500ms - burst: () => { - // Simulate burst typing with pauses - return Math.random() < 0.8 ? 
60 + Math.random() * 40 : 500 + Math.random() * 500; - } - }; - - const generator = patterns[pattern] || patterns.average; - const delays = []; - - // Generate 20 keystrokes - for (let i = 0; i < 20; i++) { - delays.push(generator()); - } - - return delays; - } - - /** - * Benchmark different debounce delays - * @param {Object} testParams - Test parameters - * @returns {Object} Benchmark results - */ - benchmark(testParams = {}) { - const delays = [100, 200, 300, 400, 500, 600]; - const results = {}; - - delays.forEach(delay => { - const metrics = this.calculateMetrics(delay, testParams); - results[delay] = metrics; - }); - - // Find optimal based on score - let optimal = null; - let bestScore = -Infinity; - - Object.entries(results).forEach(([delay, metrics]) => { - if (metrics.score > bestScore) { - bestScore = metrics.score; - optimal = parseInt(delay); - } - }); - - return { - results, - optimal, - analysis: this.analyzeBenchmark(results) - }; - } - - /** - * Calculate metrics for a given delay - * @param {number} delay - Debounce delay to test - * @param {Object} params - Test parameters - * @returns {Object} Calculated metrics - */ - calculateMetrics(delay, params) { - const { - avgSessionLength = 30000, // 30 seconds - avgQueryLength = 10, // 10 characters - typingSpeed = 40, // WPM - networkLatency = 100 // ms - } = params; - - // Calculate requests saved - const keystrokesPerSession = (avgSessionLength / (60000 / (typingSpeed * 5))); - const requestsWithoutDebounce = keystrokesPerSession; - const requestsWithDebounce = Math.ceil(keystrokesPerSession / (delay / 100)); - const requestsSaved = requestsWithoutDebounce - requestsWithDebounce; - - // Calculate perceived latency - const perceivedLatency = delay + networkLatency; - - // Calculate responsiveness score (lower delay = higher score) - const responsivenessScore = 1000 / (delay + 50); - - // Calculate efficiency score (more requests saved = higher score) - const efficiencyScore = requestsSaved / 
requestsWithoutDebounce * 100; - - // Combined score (weighted) - const score = (responsivenessScore * 0.6) + (efficiencyScore * 0.4); - - return { - requestsSaved, - perceivedLatency, - responsivenessScore: Math.round(responsivenessScore * 10) / 10, - efficiencyScore: Math.round(efficiencyScore), - score: Math.round(score * 10) / 10 - }; - } - - /** - * Analyze benchmark results - * @param {Object} results - Benchmark results - * @returns {Object} Analysis summary - */ - analyzeBenchmark(results) { - const delays = Object.keys(results).map(Number); - const scores = delays.map(d => results[d].score); - const latencies = delays.map(d => results[d].perceivedLatency); - - return { - bestScore: Math.max(...scores), - worstScore: Math.min(...scores), - avgPerceivedLatency: Math.round(latencies.reduce((a, b) => a + b, 0) / latencies.length), - recommendation: this.getBenchmarkRecommendation(results) - }; - } - - /** - * Get recommendation from benchmark - * @param {Object} results - Benchmark results - * @returns {string} Recommendation text - */ - getBenchmarkRecommendation(results) { - const optimal = Object.entries(results) - .sort(([, a], [, b]) => b.score - a.score)[0]; - - const [delay, metrics] = optimal; - - if (metrics.perceivedLatency < 300) { - return `Use ${delay}ms for optimal balance of performance and UX`; - } else if (metrics.efficiencyScore > 80) { - return `Use ${delay}ms to minimize server load despite higher latency`; - } else { - return `Consider adaptive debouncing starting at ${delay}ms`; - } - } -} - -// Command-line interface -if (require.main === module) { - const calculator = new DebounceCalculator(); - - // Parse command line arguments - const args = process.argv.slice(2); - const mode = args[0] || 'calculate'; - - if (mode === 'calculate') { - // Example calculation - const params = { - keystrokeDelays: calculator.simulateTypingPattern('average'), - networkLatency: 100, - searchParams: { - query: 'laptop', - filters: { - category: 
['Electronics'], - priceRange: [500, 1500] - }, - includeFacets: true - }, - deviceType: 'desktop', - adaptiveMode: true - }; - - const result = calculator.calculateOptimalDebounce(params); - - console.log('Debounce Calculation Results:'); - console.log('============================'); - console.log(`Optimal Delay: ${result.optimal}ms`); - console.log('\nAdaptive Delays:'); - if (result.adaptive) { - Object.entries(result.adaptive).forEach(([key, value]) => { - console.log(` ${key}: ${Math.round(value)}ms`); - }); - } - console.log('\nAnalysis:'); - Object.entries(result.analysis).forEach(([key, value]) => { - if (typeof value === 'object') { - console.log(` ${key}:`); - Object.entries(value).forEach(([k, v]) => { - console.log(` ${k}: ${v}`); - }); - } else { - console.log(` ${key}: ${value}`); - } - }); - console.log('\nRecommendations:'); - result.recommendations.forEach(rec => { - console.log(` • ${rec}`); - }); - - } else if (mode === 'benchmark') { - // Run benchmark - const testParams = { - avgSessionLength: 30000, - avgQueryLength: 10, - typingSpeed: 40, - networkLatency: 100 - }; - - const benchmark = calculator.benchmark(testParams); - - console.log('Debounce Benchmark Results:'); - console.log('=========================='); - console.log(`Optimal Delay: ${benchmark.optimal}ms\n`); - - console.log('Delay Comparison:'); - Object.entries(benchmark.results).forEach(([delay, metrics]) => { - console.log(`${delay}ms:`); - console.log(` Score: ${metrics.score}`); - console.log(` Requests Saved: ${metrics.requestsSaved}`); - console.log(` Perceived Latency: ${metrics.perceivedLatency}ms`); - console.log(` Responsiveness: ${metrics.responsivenessScore}`); - console.log(` Efficiency: ${metrics.efficiencyScore}%\n`); - }); - - console.log('Analysis:'); - console.log(` Best Score: ${benchmark.analysis.bestScore}`); - console.log(` Worst Score: ${benchmark.analysis.worstScore}`); - console.log(` Avg Perceived Latency: ${benchmark.analysis.avgPerceivedLatency}ms`); - 
console.log(` Recommendation: ${benchmark.analysis.recommendation}`); - - } else { - console.log('Usage: node debounce_calculator.js [calculate|benchmark]'); - } -} - -module.exports = DebounceCalculator; \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/scripts/generate_filter_query.py b/.claude/skills/implementing-search-filter/scripts/generate_filter_query.py deleted file mode 100644 index 397356b82..000000000 --- a/.claude/skills/implementing-search-filter/scripts/generate_filter_query.py +++ /dev/null @@ -1,381 +0,0 @@ -#!/usr/bin/env python3 -""" -Generate optimized SQL and Elasticsearch queries from filter parameters. - -This script generates database queries dynamically based on search filters, -handling both SQL (PostgreSQL/MySQL) and Elasticsearch query generation. -""" - -import json -import argparse -from typing import Dict, List, Any, Optional -from datetime import datetime, timedelta - - -class SQLQueryBuilder: - """Build SQL queries dynamically from filter parameters.""" - - def __init__(self, dialect: str = 'postgresql'): - self.dialect = dialect - self.query_parts = { - 'select': [], - 'from': '', - 'join': [], - 'where': [], - 'group_by': [], - 'having': [], - 'order_by': [], - 'limit': None, - 'offset': None - } - - def build_search_query(self, filters: Dict[str, Any]) -> str: - """Build a complete search query from filters.""" - - # Base query - self.query_parts['select'] = ['p.*'] - self.query_parts['from'] = 'products p' - - # Text search - if filters.get('query'): - self._add_text_search(filters['query']) - - # Category filter - if filters.get('categories'): - self._add_category_filter(filters['categories']) - - # Price range - if filters.get('min_price') or filters.get('max_price'): - self._add_price_filter( - filters.get('min_price'), - filters.get('max_price') - ) - - # Brand filter - if filters.get('brands'): - self._add_brand_filter(filters['brands']) - - # Stock filter - if filters.get('in_stock'): - 
self.query_parts['where'].append('p.in_stock = TRUE') - - # Date range - if filters.get('date_from') or filters.get('date_to'): - self._add_date_filter( - filters.get('date_from'), - filters.get('date_to') - ) - - # Sorting - self._add_sorting(filters.get('sort_by', 'relevance')) - - # Pagination - self._add_pagination( - filters.get('page', 1), - filters.get('per_page', 20) - ) - - return self._build_query_string() - - def _add_text_search(self, query: str): - """Add full-text search condition.""" - if self.dialect == 'postgresql': - # PostgreSQL full-text search - search_vector = """ - to_tsvector('english', COALESCE(p.title, '') || ' ' || - COALESCE(p.description, '') || ' ' || - COALESCE(p.tags, '')) - """ - self.query_parts['where'].append( - f"{search_vector} @@ plainto_tsquery('english', '{query}')" - ) - - # Add relevance score - self.query_parts['select'].append( - f"ts_rank({search_vector}, plainto_tsquery('english', '{query}')) AS relevance" - ) - else: - # MySQL FULLTEXT - self.query_parts['where'].append( - f"MATCH(p.title, p.description) AGAINST('{query}' IN NATURAL LANGUAGE MODE)" - ) - - def _add_category_filter(self, categories: List[str]): - """Add category filter.""" - placeholders = ', '.join([f"'{cat}'" for cat in categories]) - self.query_parts['where'].append(f"p.category IN ({placeholders})") - - def _add_price_filter(self, min_price: Optional[float], max_price: Optional[float]): - """Add price range filter.""" - if min_price is not None: - self.query_parts['where'].append(f"p.price >= {min_price}") - if max_price is not None: - self.query_parts['where'].append(f"p.price <= {max_price}") - - def _add_brand_filter(self, brands: List[str]): - """Add brand filter.""" - placeholders = ', '.join([f"'{brand}'" for brand in brands]) - self.query_parts['where'].append(f"p.brand IN ({placeholders})") - - def _add_date_filter(self, date_from: Optional[str], date_to: Optional[str]): - """Add date range filter.""" - if date_from: - 
self.query_parts['where'].append(f"p.created_at >= '{date_from}'") - if date_to: - self.query_parts['where'].append(f"p.created_at <= '{date_to}'") - - def _add_sorting(self, sort_by: str): - """Add sorting clause.""" - sort_options = { - 'relevance': 'relevance DESC' if 'relevance' in str(self.query_parts['select']) else 'p.created_at DESC', - 'price_asc': 'p.price ASC', - 'price_desc': 'p.price DESC', - 'newest': 'p.created_at DESC', - 'oldest': 'p.created_at ASC', - 'rating': 'p.rating DESC', - 'popularity': 'p.view_count DESC' - } - - self.query_parts['order_by'] = [sort_options.get(sort_by, 'p.created_at DESC')] - - def _add_pagination(self, page: int, per_page: int): - """Add pagination.""" - self.query_parts['limit'] = per_page - self.query_parts['offset'] = (page - 1) * per_page - - def _build_query_string(self) -> str: - """Build final SQL query string.""" - query = f"SELECT {', '.join(self.query_parts['select'])}\n" - query += f"FROM {self.query_parts['from']}\n" - - if self.query_parts['join']: - query += '\n'.join(self.query_parts['join']) + '\n' - - if self.query_parts['where']: - query += f"WHERE {' AND '.join(self.query_parts['where'])}\n" - - if self.query_parts['group_by']: - query += f"GROUP BY {', '.join(self.query_parts['group_by'])}\n" - - if self.query_parts['having']: - query += f"HAVING {' AND '.join(self.query_parts['having'])}\n" - - if self.query_parts['order_by']: - query += f"ORDER BY {', '.join(self.query_parts['order_by'])}\n" - - if self.query_parts['limit']: - query += f"LIMIT {self.query_parts['limit']}\n" - - if self.query_parts['offset']: - query += f"OFFSET {self.query_parts['offset']}\n" - - return query - - -class ElasticsearchQueryBuilder: - """Build Elasticsearch queries from filter parameters.""" - - def build_search_query(self, filters: Dict[str, Any]) -> Dict: - """Build Elasticsearch query DSL from filters.""" - - query = { - 'query': { - 'bool': { - 'must': [], - 'filter': [], - 'should': [], - 'must_not': [] - } - } - 
} - - # Text search - if filters.get('query'): - query['query']['bool']['must'].append({ - 'multi_match': { - 'query': filters['query'], - 'fields': ['title^3', 'description^2', 'tags'], - 'type': 'best_fields', - 'fuzziness': 'AUTO' - } - }) - - # Category filter - if filters.get('categories'): - query['query']['bool']['filter'].append({ - 'terms': {'category.keyword': filters['categories']} - }) - - # Price range - if filters.get('min_price') or filters.get('max_price'): - price_range = {} - if filters.get('min_price'): - price_range['gte'] = filters['min_price'] - if filters.get('max_price'): - price_range['lte'] = filters['max_price'] - - query['query']['bool']['filter'].append({ - 'range': {'price': price_range} - }) - - # Brand filter - if filters.get('brands'): - query['query']['bool']['filter'].append({ - 'terms': {'brand.keyword': filters['brands']} - }) - - # Stock filter - if filters.get('in_stock'): - query['query']['bool']['filter'].append({ - 'term': {'in_stock': True} - }) - - # Date range - if filters.get('date_from') or filters.get('date_to'): - date_range = {} - if filters.get('date_from'): - date_range['gte'] = filters['date_from'] - if filters.get('date_to'): - date_range['lte'] = filters['date_to'] - - query['query']['bool']['filter'].append({ - 'range': {'created_at': date_range} - }) - - # Sorting - query['sort'] = self._get_sort_clause(filters.get('sort_by', 'relevance')) - - # Pagination - page = filters.get('page', 1) - per_page = filters.get('per_page', 20) - query['from'] = (page - 1) * per_page - query['size'] = per_page - - # Aggregations for facets - if filters.get('include_facets', True): - query['aggs'] = self._build_aggregations() - - # Clean up empty sections - if not query['query']['bool']['must']: - del query['query']['bool']['must'] - if not query['query']['bool']['filter']: - del query['query']['bool']['filter'] - if not query['query']['bool']['should']: - del query['query']['bool']['should'] - if not 
query['query']['bool']['must_not']: - del query['query']['bool']['must_not'] - - # If no conditions, use match_all - if not query['query']['bool']: - query['query'] = {'match_all': {}} - - return query - - def _get_sort_clause(self, sort_by: str) -> List[Dict]: - """Get Elasticsearch sort clause.""" - sort_options = { - 'relevance': [{'_score': 'desc'}], - 'price_asc': [{'price': 'asc'}], - 'price_desc': [{'price': 'desc'}], - 'newest': [{'created_at': 'desc'}], - 'oldest': [{'created_at': 'asc'}], - 'rating': [{'rating': 'desc'}], - 'popularity': [{'view_count': 'desc'}] - } - - return sort_options.get(sort_by, [{'_score': 'desc'}]) - - def _build_aggregations(self) -> Dict: - """Build aggregations for faceted search.""" - return { - 'categories': { - 'terms': { - 'field': 'category.keyword', - 'size': 20 - } - }, - 'brands': { - 'terms': { - 'field': 'brand.keyword', - 'size': 20 - } - }, - 'price_ranges': { - 'range': { - 'field': 'price', - 'ranges': [ - {'key': 'Under $50', 'to': 50}, - {'key': '$50-$100', 'from': 50, 'to': 100}, - {'key': '$100-$200', 'from': 100, 'to': 200}, - {'key': 'Over $200', 'from': 200} - ] - } - }, - 'avg_price': { - 'avg': {'field': 'price'} - }, - 'in_stock_count': { - 'filter': {'term': {'in_stock': True}} - } - } - - -def main(): - """Main function to generate queries from command line.""" - parser = argparse.ArgumentParser( - description='Generate search queries from filter parameters' - ) - - parser.add_argument( - '--type', - choices=['sql', 'elasticsearch'], - default='sql', - help='Query type to generate' - ) - - parser.add_argument( - '--dialect', - choices=['postgresql', 'mysql'], - default='postgresql', - help='SQL dialect (for SQL queries)' - ) - - parser.add_argument( - '--filters', - type=str, - required=True, - help='JSON string of filter parameters' - ) - - parser.add_argument( - '--pretty', - action='store_true', - help='Pretty print output' - ) - - args = parser.parse_args() - - try: - filters = 
json.loads(args.filters) - except json.JSONDecodeError as e: - print(f"Error parsing filters JSON: {e}") - return 1 - - if args.type == 'sql': - builder = SQLQueryBuilder(dialect=args.dialect) - query = builder.build_search_query(filters) - print(query) - else: - builder = ElasticsearchQueryBuilder() - query = builder.build_search_query(filters) - - if args.pretty: - print(json.dumps(query, indent=2)) - else: - print(json.dumps(query)) - - return 0 - - -if __name__ == '__main__': - exit(main()) \ No newline at end of file diff --git a/.claude/skills/implementing-search-filter/scripts/validate_search_params.py b/.claude/skills/implementing-search-filter/scripts/validate_search_params.py deleted file mode 100644 index c1f99fa9e..000000000 --- a/.claude/skills/implementing-search-filter/scripts/validate_search_params.py +++ /dev/null @@ -1,424 +0,0 @@ -#!/usr/bin/env python3 -""" -Validate and sanitize search parameters to prevent injection attacks and ensure data integrity. - -This script validates search inputs, filters, and pagination parameters -to ensure they meet security and business logic requirements. 
-""" - -import re -import json -import argparse -from typing import Dict, List, Any, Optional, Tuple -from datetime import datetime, date - - -class SearchParamValidator: - """Validate and sanitize search parameters.""" - - # Define validation rules - RULES = { - 'query': { - 'type': str, - 'min_length': 0, - 'max_length': 200, - 'pattern': r'^[a-zA-Z0-9\s\-\.\,\!\?\'\"\&]+$', # Alphanumeric + common punctuation - 'sanitize': True - }, - 'categories': { - 'type': list, - 'max_items': 20, - 'item_type': str, - 'allowed_values': None # Will be set in __init__ if needed - }, - 'brands': { - 'type': list, - 'max_items': 20, - 'item_type': str - }, - 'min_price': { - 'type': (int, float), - 'min_value': 0, - 'max_value': 1000000 - }, - 'max_price': { - 'type': (int, float), - 'min_value': 0, - 'max_value': 1000000 - }, - 'sort_by': { - 'type': str, - 'allowed_values': [ - 'relevance', 'price_asc', 'price_desc', - 'newest', 'oldest', 'rating', 'popularity' - ] - }, - 'page': { - 'type': int, - 'min_value': 1, - 'max_value': 100 - }, - 'per_page': { - 'type': int, - 'min_value': 1, - 'max_value': 100, - 'default': 20 - }, - 'in_stock': { - 'type': bool - }, - 'date_from': { - 'type': str, - 'date_format': '%Y-%m-%d' - }, - 'date_to': { - 'type': str, - 'date_format': '%Y-%m-%d' - } - } - - # SQL injection patterns to block - SQL_INJECTION_PATTERNS = [ - r'(\b(SELECT|INSERT|UPDATE|DELETE|DROP|CREATE|ALTER|EXEC|EXECUTE)\b)', - r'(--|\/\*|\*\/|xp_|sp_|@@)', - r'(\bunion\b.*\bselect\b)', - r'(;.*\b(SELECT|INSERT|UPDATE|DELETE)\b)', - r'(\bOR\b.*=.*)', - r"('.*\bOR\b.*'=')", - ] - - def __init__(self, allowed_categories: Optional[List[str]] = None): - """Initialize validator with optional allowed categories.""" - if allowed_categories: - self.RULES['categories']['allowed_values'] = allowed_categories - - def validate(self, params: Dict[str, Any]) -> Tuple[bool, Dict[str, Any], List[str]]: - """ - Validate search parameters. 
- - Returns: - Tuple of (is_valid, cleaned_params, error_messages) - """ - cleaned = {} - errors = [] - - for param_name, param_value in params.items(): - if param_value is None: - continue - - if param_name not in self.RULES: - # Unknown parameter - skip but log warning - errors.append(f"Unknown parameter: {param_name}") - continue - - rule = self.RULES[param_name] - result = self._validate_param(param_name, param_value, rule) - - if result['valid']: - cleaned[param_name] = result['value'] - else: - errors.extend(result['errors']) - - # Additional cross-field validation - cross_errors = self._cross_validate(cleaned) - errors.extend(cross_errors) - - # Apply defaults for missing required params - cleaned = self._apply_defaults(cleaned) - - return len(errors) == 0, cleaned, errors - - def _validate_param(self, name: str, value: Any, rule: Dict) -> Dict: - """Validate a single parameter.""" - result = {'valid': True, 'value': value, 'errors': []} - - # Type validation - expected_type = rule.get('type') - if expected_type and not isinstance(value, expected_type): - # rule['type'] may be a tuple such as (int, float); tuples have no __name__ - type_name = getattr(expected_type, '__name__', str(expected_type)) - result['valid'] = False - result['errors'].append( - f"{name}: Expected {type_name}, got {type(value).__name__}" - ) - return result - - # String validation - if isinstance(value, str): - validated = self._validate_string(name, value, rule) - result.update(validated) - - # List validation - elif isinstance(value, list): - validated = self._validate_list(name, value, rule) - result.update(validated) - - # Number validation - elif isinstance(value, (int, float)): - validated = self._validate_number(name, value, rule) - result.update(validated) - - # Boolean validation - elif isinstance(value, bool): - result['value'] = value - - # Date validation - if rule.get('date_format'): - validated = self._validate_date(name, str(value), rule['date_format']) - result.update(validated) - - return result - - def _validate_string(self, name: str, value: str, rule: Dict) -> Dict: - """Validate string 
parameter.""" - result = {'valid': True, 'value': value, 'errors': []} - - # Check for SQL injection attempts - for pattern in self.SQL_INJECTION_PATTERNS: - if re.search(pattern, value, re.IGNORECASE): - result['valid'] = False - result['errors'].append( - f"{name}: Potential SQL injection detected" - ) - return result - - # Length validation - min_len = rule.get('min_length', 0) - max_len = rule.get('max_length', float('inf')) - - if len(value) < min_len: - result['valid'] = False - result['errors'].append( - f"{name}: Must be at least {min_len} characters" - ) - - if len(value) > max_len: - result['valid'] = False - result['errors'].append( - f"{name}: Must be at most {max_len} characters" - ) - - # Pattern validation - pattern = rule.get('pattern') - if pattern and not re.match(pattern, value): - result['valid'] = False - result['errors'].append( - f"{name}: Contains invalid characters" - ) - - # Allowed values validation - allowed = rule.get('allowed_values') - if allowed and value not in allowed: - result['valid'] = False - result['errors'].append( - f"{name}: Must be one of {allowed}" - ) - - # Sanitization - if rule.get('sanitize') and result['valid']: - result['value'] = self._sanitize_string(value) - - return result - - def _validate_list(self, name: str, value: List, rule: Dict) -> Dict: - """Validate list parameter.""" - result = {'valid': True, 'value': value, 'errors': []} - - # Max items check - max_items = rule.get('max_items', float('inf')) - if len(value) > max_items: - result['valid'] = False - result['errors'].append( - f"{name}: Cannot have more than {max_items} items" - ) - - # Item type validation - item_type = rule.get('item_type') - if item_type: - for i, item in enumerate(value): - if not isinstance(item, item_type): - result['valid'] = False - result['errors'].append( - f"{name}[{i}]: Expected {item_type.__name__}" - ) - - # Allowed values for items - allowed = rule.get('allowed_values') - if allowed: - invalid_items = [item for item in 
value if item not in allowed] - if invalid_items: - result['valid'] = False - result['errors'].append( - f"{name}: Invalid items: {invalid_items}" - ) - - # Sanitize string items - if item_type == str and result['valid']: - result['value'] = [self._sanitize_string(item) for item in value] - - return result - - def _validate_number(self, name: str, value: float, rule: Dict) -> Dict: - """Validate numeric parameter.""" - result = {'valid': True, 'value': value, 'errors': []} - - min_val = rule.get('min_value', float('-inf')) - max_val = rule.get('max_value', float('inf')) - - if value < min_val: - result['valid'] = False - result['errors'].append( - f"{name}: Must be at least {min_val}" - ) - - if value > max_val: - result['valid'] = False - result['errors'].append( - f"{name}: Must be at most {max_val}" - ) - - return result - - def _validate_date(self, name: str, value: str, date_format: str) -> Dict: - """Validate date parameter.""" - result = {'valid': True, 'value': value, 'errors': []} - - try: - parsed_date = datetime.strptime(value, date_format) - result['value'] = parsed_date.strftime(date_format) - - # Check if date is not in future (for most cases) - if parsed_date.date() > date.today(): - result['errors'].append( - f"{name}: Date cannot be in the future" - ) - except ValueError: - result['valid'] = False - result['errors'].append( - f"{name}: Invalid date format (expected {date_format})" - ) - - return result - - def _sanitize_string(self, value: str) -> str: - """Sanitize string to prevent XSS and injection.""" - # Remove HTML tags - value = re.sub(r'<[^>]+>', '', value) - - # Escape special characters as HTML entities - value = value.replace('&', '&amp;') - value = value.replace('<', '&lt;') - value = value.replace('>', '&gt;') - value = value.replace('"', '&quot;') - value = value.replace("'", '&#x27;') - - # Normalize whitespace - value = ' '.join(value.split()) - - return value.strip() - - def _cross_validate(self, params: Dict) -> List[str]: - """Perform cross-field validation.""" - 
errors = [] - - # Price range validation - min_price = params.get('min_price') - max_price = params.get('max_price') - - if min_price is not None and max_price is not None: - if min_price > max_price: - errors.append("min_price cannot be greater than max_price") - - # Date range validation - date_from = params.get('date_from') - date_to = params.get('date_to') - - if date_from and date_to: - try: - from_date = datetime.strptime(date_from, '%Y-%m-%d') - to_date = datetime.strptime(date_to, '%Y-%m-%d') - - if from_date > to_date: - errors.append("date_from cannot be after date_to") - except ValueError: - pass # Already handled in individual validation - - return errors - - def _apply_defaults(self, params: Dict) -> Dict: - """Apply default values for missing parameters.""" - defaults = { - 'page': 1, - 'per_page': 20, - 'sort_by': 'relevance' - } - - for key, default_value in defaults.items(): - if key not in params: - rule = self.RULES.get(key, {}) - if 'default' in rule: - params[key] = rule['default'] - elif key in defaults: - params[key] = default_value - - return params - - -def main(): - """Main function for command-line usage.""" - parser = argparse.ArgumentParser( - description='Validate search parameters' - ) - - parser.add_argument( - '--params', - type=str, - required=True, - help='JSON string of search parameters' - ) - - parser.add_argument( - '--categories', - type=str, - help='Comma-separated list of allowed categories' - ) - - parser.add_argument( - '--strict', - action='store_true', - help='Fail on any validation error' - ) - - args = parser.parse_args() - - try: - params = json.loads(args.params) - except json.JSONDecodeError as e: - print(f"Error parsing parameters JSON: {e}") - return 1 - - # Parse allowed categories if provided - allowed_categories = None - if args.categories: - allowed_categories = [c.strip() for c in args.categories.split(',')] - - # Validate parameters - validator = SearchParamValidator(allowed_categories) - is_valid, 
cleaned_params, errors = validator.validate(params) - - # Output results - result = { - 'valid': is_valid, - 'cleaned_params': cleaned_params, - 'errors': errors - } - - print(json.dumps(result, indent=2, default=str)) - - # Exit code based on validation result - if args.strict and not is_valid: - return 1 - - return 0 - - -if __name__ == '__main__': - exit(main()) \ No newline at end of file diff --git a/.claude/skills/multi-reviewer-patterns/SKILL.md b/.claude/skills/multi-reviewer-patterns/SKILL.md deleted file mode 100644 index 282bd5d9f..000000000 --- a/.claude/skills/multi-reviewer-patterns/SKILL.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -name: multi-reviewer-patterns -description: Coordinate parallel code reviews across multiple quality dimensions with finding deduplication, severity calibration, and consolidated reporting. Use this skill when organizing multi-reviewer code reviews, calibrating finding severity, or consolidating review results. -version: 1.0.2 ---- - -# Multi-Reviewer Patterns - -Patterns for coordinating parallel code reviews across multiple quality dimensions, deduplicating findings, calibrating severity, and producing consolidated reports. 
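The finding-merge and severity rules this skill applies can be sketched in a few lines of Python. This is a minimal illustration, not part of the skill's own files; the `Finding` structure and `SEVERITY_ORDER` ranking are hypothetical names chosen for the example:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical severity ranking matching the skill's four levels.
SEVERITY_ORDER = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

@dataclass
class Finding:
    file: str
    line: int
    issue: str            # short issue id; equal ids mean "same issue"
    severity: str
    reviewers: List[str]

def deduplicate(findings: List[Finding]) -> List[Finding]:
    """Merge findings sharing file:line and issue (merge rule 1),
    crediting all reviewers and keeping the higher severity (rule 4)."""
    merged = {}
    for f in findings:
        key = (f.file, f.line, f.issue)
        if key not in merged:
            merged[key] = Finding(f.file, f.line, f.issue,
                                  f.severity, list(f.reviewers))
            continue
        kept = merged[key]
        kept.reviewers = sorted(set(kept.reviewers) | set(f.reviewers))
        if SEVERITY_ORDER[f.severity] > SEVERITY_ORDER[kept.severity]:
            kept.severity = f.severity
    return list(merged.values())
```

Findings at the same location describing different issues get distinct keys and survive as separate, co-located entries, matching merge rule 2.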
- -## When to Use This Skill - -- Organizing a multi-dimensional code review -- Deciding which review dimensions to assign -- Deduplicating findings from multiple reviewers -- Calibrating severity ratings consistently -- Producing a consolidated review report - -## Review Dimension Allocation - -### Available Dimensions - -| Dimension | Focus | When to Include | -| ----------------- | --------------------------------------- | ------------------------------------------- | -| **Security** | Vulnerabilities, auth, input validation | Always for code handling user input or auth | -| **Performance** | Query efficiency, memory, caching | When changing data access or hot paths | -| **Architecture** | SOLID, coupling, patterns | For structural changes or new modules | -| **Testing** | Coverage, quality, edge cases | When adding new functionality | -| **Accessibility** | WCAG, ARIA, keyboard nav | For UI/frontend changes | - -### Recommended Combinations - -| Scenario | Dimensions | -| ---------------------- | -------------------------------------------- | -| API endpoint changes | Security, Performance, Architecture | -| Frontend component | Architecture, Testing, Accessibility | -| Database migration | Performance, Architecture | -| Authentication changes | Security, Testing | -| Full feature review | Security, Performance, Architecture, Testing | - -## Finding Deduplication - -When multiple reviewers report issues at the same location: - -### Merge Rules - -1. **Same file:line, same issue** — Merge into one finding, credit all reviewers -2. **Same file:line, different issues** — Keep as separate findings -3. **Same issue, different locations** — Keep separate but cross-reference -4. **Conflicting severity** — Use the higher severity rating -5. **Conflicting recommendations** — Include both with reviewer attribution - -### Deduplication Process - -``` -For each finding in all reviewer reports: - 1. Check if another finding references the same file:line - 2. 
If yes, check if they describe the same issue - 3. If same issue: merge, keeping the more detailed description - 4. If different issue: keep both, tag as "co-located" - 5. Use highest severity among merged findings -``` - -## Severity Calibration - -### Severity Criteria - -| Severity | Impact | Likelihood | Examples | -| ------------ | --------------------------------------------- | ---------------------- | -------------------------------------------- | -| **Critical** | Data loss, security breach, complete failure | Certain or very likely | SQL injection, auth bypass, data corruption | -| **High** | Significant functionality impact, degradation | Likely | Memory leak, missing validation, broken flow | -| **Medium** | Partial impact, workaround exists | Possible | N+1 query, missing edge case, unclear error | -| **Low** | Minimal impact, cosmetic | Unlikely | Style issue, minor optimization, naming | - -### Calibration Rules - -- Security vulnerabilities exploitable by external users: always Critical or High -- Performance issues in hot paths: at least Medium -- Missing tests for critical paths: at least Medium -- Accessibility violations for core functionality: at least Medium -- Code style issues with no functional impact: Low - -## Consolidated Report Template - -```markdown -## Code Review Report - -**Target**: {files/PR/directory} -**Reviewers**: {dimension-1}, {dimension-2}, {dimension-3} -**Date**: {date} -**Files Reviewed**: {count} - -### Critical Findings ({count}) - -#### [CR-001] {Title} - -**Location**: `{file}:{line}` -**Dimension**: {Security/Performance/etc.} -**Description**: {what was found} -**Impact**: {what could happen} -**Fix**: {recommended remediation} - -### High Findings ({count}) - -... - -### Medium Findings ({count}) - -... - -### Low Findings ({count}) - -... 
- -### Summary - -| Dimension | Critical | High | Medium | Low | Total | -| ------------ | -------- | ----- | ------ | ----- | ------ | -| Security | 1 | 2 | 3 | 0 | 6 | -| Performance | 0 | 1 | 4 | 2 | 7 | -| Architecture | 0 | 0 | 2 | 3 | 5 | -| **Total** | **1** | **3** | **9** | **5** | **18** | - -### Recommendation - -{Overall assessment and prioritized action items} -``` diff --git a/.claude/skills/multi-reviewer-patterns/references/review-dimensions.md b/.claude/skills/multi-reviewer-patterns/references/review-dimensions.md deleted file mode 100644 index a7d95f1be..000000000 --- a/.claude/skills/multi-reviewer-patterns/references/review-dimensions.md +++ /dev/null @@ -1,127 +0,0 @@ -# Review Dimension Checklists - -Detailed checklists for each review dimension that reviewers follow during parallel code review. - -## Security Review Checklist - -### Input Handling - -- [ ] All user inputs are validated and sanitized -- [ ] SQL queries use parameterized statements (no string concatenation) -- [ ] HTML output is properly escaped to prevent XSS -- [ ] File paths are validated to prevent path traversal -- [ ] Request size limits are enforced - -### Authentication & Authorization - -- [ ] Authentication is required for all protected endpoints -- [ ] Authorization checks verify user has permission for the action -- [ ] JWT tokens are validated (signature, expiry, issuer) -- [ ] Password hashing uses bcrypt/argon2 (not MD5/SHA) -- [ ] Session management follows best practices - -### Secrets & Configuration - -- [ ] No hardcoded secrets, API keys, or passwords -- [ ] Secrets are loaded from environment variables or secret manager -- [ ] .gitignore includes sensitive file patterns -- [ ] Debug/development endpoints are disabled in production - -### Dependencies - -- [ ] No known CVEs in direct dependencies -- [ ] Dependencies are pinned to specific versions -- [ ] No unnecessary dependencies that increase attack surface - -## Performance Review Checklist - -### 
Database - -- [ ] No N+1 query patterns -- [ ] Queries use appropriate indexes -- [ ] No SELECT \* on large tables -- [ ] Pagination is implemented for list endpoints -- [ ] Connection pooling is configured - -### Memory & Resources - -- [ ] No memory leaks (event listeners cleaned up, streams closed) -- [ ] Large data sets are streamed, not loaded entirely into memory -- [ ] File handles and connections are properly closed -- [ ] Caching is used for expensive operations - -### Computation - -- [ ] No unnecessary re-computation or redundant operations -- [ ] Appropriate algorithm complexity for the data size -- [ ] Async operations used where I/O bound -- [ ] No blocking operations on the main thread - -## Architecture Review Checklist - -### Design Principles - -- [ ] Single Responsibility: each module/class has one reason to change -- [ ] Open/Closed: extensible without modification -- [ ] Dependency Inversion: depends on abstractions, not concretions -- [ ] No circular dependencies between modules - -### Structure - -- [ ] Clear separation of concerns (UI, business logic, data) -- [ ] Consistent error handling strategy across the codebase -- [ ] Configuration is externalized, not hardcoded -- [ ] API contracts are well-defined and versioned - -### Patterns - -- [ ] Consistent patterns used throughout (no pattern mixing) -- [ ] Abstractions are at the right level (not over/under-engineered) -- [ ] Module boundaries align with domain boundaries -- [ ] Shared utilities are actually shared (no duplication) - -## Testing Review Checklist - -### Coverage - -- [ ] Critical paths have test coverage -- [ ] Edge cases are tested (empty input, null, boundary values) -- [ ] Error paths are tested (what happens when things fail) -- [ ] Integration points have integration tests - -### Quality - -- [ ] Tests are deterministic (no flaky tests) -- [ ] Tests are isolated (no shared state between tests) -- [ ] Assertions are specific (not just "no error thrown") -- [ ] Test names 
clearly describe what is being tested - -### Maintainability - -- [ ] Tests don't duplicate implementation logic -- [ ] Mocks/stubs are minimal and accurate -- [ ] Test data is clear and relevant -- [ ] Tests are easy to understand without reading the implementation - -## Accessibility Review Checklist - -### Structure - -- [ ] Semantic HTML elements used (nav, main, article, button) -- [ ] Heading hierarchy is logical (h1 → h2 → h3) -- [ ] ARIA roles and properties used correctly -- [ ] Landmarks identify page regions - -### Interaction - -- [ ] All functionality accessible via keyboard -- [ ] Focus order is logical and visible -- [ ] No keyboard traps -- [ ] Touch targets are at least 44x44px - -### Content - -- [ ] Images have meaningful alt text -- [ ] Color is not the only means of conveying information -- [ ] Text has sufficient contrast ratio (4.5:1 for normal, 3:1 for large) -- [ ] Content is readable at 200% zoom diff --git a/.claude/skills/oiloil-ui-ux-guide/LICENSE.txt b/.claude/skills/oiloil-ui-ux-guide/LICENSE.txt deleted file mode 100644 index d64569567..000000000 --- a/.claude/skills/oiloil-ui-ux-guide/LICENSE.txt +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. 
If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. 
You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. 
Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. 
- - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/.claude/skills/oiloil-ui-ux-guide/SKILL.md b/.claude/skills/oiloil-ui-ux-guide/SKILL.md deleted file mode 100644 index 004745576..000000000 --- a/.claude/skills/oiloil-ui-ux-guide/SKILL.md +++ /dev/null @@ -1,192 +0,0 @@ ---- -name: oiloil-ui-ux-guide -description: Modern, clean UI/UX guidance + review skill. Use when you need actionable UX/UI recommendations, design principles, or a design review checklist for new features or existing systems (web/app). Focus on CRAP (Contrast/Repetition/Alignment/Proximity) plus task-first UX, information architecture, feedback & system status, consistency, affordances, error prevention/recovery, and cognitive load. Enforce a modern minimal style (clean, spacious, typography-led), reduce unnecessary copy, forbid emoji as icons, and recommend intuitive refined icons from a consistent icon set. ---- - -# OilOil UI/UX Guide (Modern Minimal) - -Use this skill in two modes: - -- `guide`: Provide compact principles and concrete do/don't rules for modern clean UI/UX. -- `review`: Review an existing UI (screenshot / mock / HTML / PR) and output prioritized, actionable fixes. - -Keep outputs concise. Prefer bullets, not long paragraphs. - -## Workflow (pick one) - -### 1) `guide` workflow -1. Identify the surface: marketing page / dashboard / settings / creation flow / list-detail / form. -2. Identify the primary user task and primary CTA. -3. 
Apply the system-level guiding principles first (mental model and interaction logic). -4. Then apply the core principles below (start from UX, then refine with CRAP). -5. If icons are involved: apply `references/icons.md`. - -### 2) `review` workflow -1. State assumptions (platform, target user, primary task). -2. List findings as `P0/P1/P2` (blocker / important / polish) with short evidence. -3. For each major issue, label the diagnosis: execution vs evaluation gulf; slip vs mistake (see `references/design-psych.md`). -4. Propose fixes that are implementable (layout, hierarchy, components, copy, states). -5. End with a short checklist to verify changes. - -Use `references/review-template.md` when you need a stable output format. - -## Non-negotiables (hard rules) -- No emoji used as icons (or as UI decoration). If an emoji appears, replace it with a proper icon. -- Icons must be intuitive and refined. Use a single consistent icon set for the product (avoid mixing styles). -- Minimize copy by default. Add explanatory text only when it prevents errors, reduces ambiguity, or improves trust. - -## System-Level Guiding Principles (cross-system, high-level) - -Use these as first-order constraints before choosing specific components or page patterns. - -- Concept constancy: - - Definition: The same business concept keeps the same name, meaning, and interaction semantics across the system. - - Review question: If a user learns this concept in one place, can they transfer that understanding everywhere else? -- Primary task focus: - - Definition: Each screen has one dominant objective with the highest visual and interaction priority. - - Review question: Can users identify the most important action within 3 seconds? -- UI copy source discipline (for product development): - - Definition: visible UI copy should come from business content, not from implementation constraints or generation instructions. - - Preferred copy sources: - - User task: what the user is trying to do. 
- - System state: what is happening now (loading, empty, error, success, permission). - - Result + next step: what changed and what users can do next. - - Risk/trust context: only when it prevents mistakes or improves confidence. - - Internal-only sources (do not render directly in product UI by default): - - Visual/style constraints (e.g., "minimal", "black-and-white", "modern"). - - Technical constraints and implementation notes. - - Prompt instructions, review rubrics, and generation meta text. - - User-facing copy framing heuristic (general, not title-specific): - - Applies to any prominent UI copy: titles, section headers, callouts, badges, CTA labels, and empty states. - - Prefer user-outcome framing: describe the user's goal and the result they get. - - Avoid self-referential/process framing for end-user product UI (e.g., "to showcase", "this page demonstrates", "showing the skill's value"). - - Exception: if the surface is explicitly a demo/tutorial/spec page for builders, self-referential/process copy can be acceptable when it improves understanding. - - State perceptibility (high-level, cross-system): - - Problem: users make errors when an important internal state is not perceivable (mode, scope, selection, unsaved changes, environment, permission). - - Principle: make state visible using the lowest-noise signal that reliably changes behavior. - - Preferred signals (in order): - - Structural change: the layout/components clearly switch (read -> edit; list -> selection; view -> compare). - - Control state: the control that changes behavior shows its state (tabs, toggles, segmented controls). - - Inline signifiers: local cues near the affected area (selection count, scope chip, disabled reason). - - Post-action feedback: clear results + next step (reduces evaluation gulf). - - Only if needed: persistent banners/labels for high-risk, sticky modes. 
- - Avoid: redundant "status labels" that restate what the structure already makes obvious (they add noise but not clarity). - - Practical workflow: - - First build a content model (task/state/result/risk). - - Then apply visual constraints through layout, hierarchy, and component styling. - - Run a final copy pass: if a sentence does not help task completion, state understanding, or trust, move it to internal notes. - - Review question: is each visible sentence useful for end users, or only useful for builders/reviewers? -- Help text layering (avoid "hint sprawl"): - - Problem this prevents: dumping all tips onto the UI feels "safe", but it destroys hierarchy and increases scanning cost. - - Placement heuristic: - - L0 (Always visible): only information needed to complete the task correctly. - - L1 (Nearby): short guidance for high-risk / high-ambiguity inputs. - - L2 (On demand): examples, advanced details, "learn more". - - L3 (After action): result, error, recovery, and next step. - - Copy budget heuristic: - - Prefer one clear helper line over multiple repetitive hints. - - If a page needs many persistent hints, improve IA or defaults first. -- Feedback loop closure: - - Definition: Every user action must complete a full loop: received, in progress, result, and clear next step. - - Review question: At any moment, can users tell what the system is doing and what they should do next? -- Prevention first + recoverability: - - Definition: Reduce error probability before submission, and provide recovery paths for high-risk outcomes. - - Review question: Is the path designed to be easy to do right and safe to recover when wrong? -- Progressive complexity: - - Definition: Show minimum-required controls by default; reveal advanced capability only when context requires it. - - Review question: Can novices complete the core task quickly without limiting expert throughput? 
-- Action perceptibility (affordance + signifiers): - - Definition: Interactive targets and likely outcomes are perceivable from structure and visual cues, without guesswork. - - Review question: Without reading help text, can users predict what is actionable and what will happen? -- Cognitive load budget: - - Definition: Limit new rules, terms, and interaction modes per screen; prioritize reuse over novelty. - - Review question: As information grows, does comprehension cost stay stable? -- Evolution with semantic continuity: - - Definition: Introduce new components/patterns only when existing ones cannot solve the problem, and keep semantic compatibility. - - Review question: Is this necessary innovation or avoidable interaction drift? - -## Core Principles (minimal set) - -### A) Task-first UX -- Make the primary task obvious in <3 seconds. -- Allow exactly one primary CTA per screen/section. -- Optimize the happy path; hide advanced controls behind progressive disclosure. - -### B) Information architecture (grouping & findability) -- Group by user mental model (goal/object/time/status), not by backend fields. -- Use clear section titles; keep navigation patterns stable across similar screens. -- When item count grows: add search/filter/sort early, not late. - -### C) Feedback & system status -- Always show: loading, empty, error, success, and permission states. -- After any action, answer: "did it work?" + "what changed?" + "what can I do next?" -- Prefer inline, contextual feedback over global toasts (except for cross-page actions). - -### D) Consistency & predictability -- Same interaction = same component + same wording + same placement. -- Use a small, stable set of component variants; avoid one-off styles. - -### E) Affordance / 示能性 + Signifiers / 指示符 (make actions obvious) -- People should see **what can be done** and **how to do it** without guessing. 
-- Clickable things must look clickable (button/link styling + hover/focus + cursor); avoid “mystery meat” UI. - - Web: if you implement clickability on non-native elements (e.g. `div` with `onClick`), ensure `cursor: pointer` and proper focus styles. -- Do not hide primary actions behind unlabeled icons. If an icon can be misunderstood, add a short label. -- Prefer **natural mapping**: control placement mirrors the thing it controls (layout, direction, grouping). -- Forms: show constraints before submit (format, units, examples, required), not only after errors. - -### F) Error prevention & recovery -- Prevent errors with constraints, defaults, and inline validation. -- Make destructive actions reversible when possible; otherwise require deliberate confirmation. -- Error messages must be actionable (what happened + how to fix). - -### G) Cognitive load control -- Reduce choices: sensible defaults, presets, and progressive disclosure. -- Break long tasks into steps only when it reduces thinking (not just to look "enterprise"). -- Keep visual noise low: fewer borders, fewer colors, fewer competing highlights. - -### H) CRAP (visual hierarchy & layout) -- Contrast: emphasize the few things that matter (CTA, current state, key numbers). -- Repetition: tokens/components/spacing follow a scale; avoid “almost the same” styles. -- Alignment: align to a clear grid; fix 2px drift; align baselines where text matters. -- Proximity: tight within a group, loose between groups; spacing is the primary grouping tool. - -## Spacing & layout discipline (compact rule set) - -Use this when implementing or reviewing layouts. Keep it short, but enforce it strictly. - -- Rule 1 - One spacing scale: - - Base unit: 4px. - - Allowed spacing set (recommended): 4 / 8 / 12 / 16 / 24 / 32 / 40 / 48. - - New gaps/padding should use this set; off-scale values need a clear reason. 
-- Rule 2 - Repetition first: - - Same component type keeps the same internal spacing (cards, list rows, form groups, section blocks). - - Components with the same visual role should not have different spacing patterns. -- Rule 3 - Alignment + grouping: - - Align to one grid and fix 1-2px drift. - - Tight spacing within a group, looser spacing between groups. -- Rule 4 - No decorative nesting: - - Extra wrappers must add real function (grouping, state, scroll, affordance). - - If a wrapper only adds border/background, remove it and group with spacing instead. -- Quick review pass: - - Any off-scale spacing values? - - Any baseline/edge misalignment? - - Any wrapper layer removable without losing meaning? - -## Modern minimal style guidance (taste with rules) -- Use whitespace + typography to create hierarchy; avoid decoration-first design. -- Prefer subtle surfaces (light elevation, low-contrast borders). Avoid heavy shadows. -- Keep color palette small; use one accent color for primary actions and key states. -- Copy: short, direct labels; add helper text only when it reduces mistakes or increases trust. - -## Motion (animation) guidance (content/creator-friendly, not flashy) -- Motion explains **hierarchy** (what is a layer/panel) and **state change** (what just happened). Avoid motion as decoration. -- Default motion vocabulary: fade; then small translate+fade; allow tiny scale+fade for overlays. Avoid big bouncy motion. -- Keep the canvas/content area stable. Panels/overlays can move; the work surface should not “float.” -- Prefer consistency over variety: same component type uses the same motion pattern. -- Avoid layout jumps. Use placeholders/skeletons to keep layout stable while loading. 
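The spacing discipline above (one 4px-based scale, off-scale values flagged) can be sketched as a small lint helper. This is an illustrative sketch, not part of the skill — the names `SPACING_SCALE`, `isOnScale`, and `snapToScale` are assumptions:

```typescript
// Allowed spacing steps from Rule 1 (4px base unit).
const SPACING_SCALE = [4, 8, 12, 16, 24, 32, 40, 48];

// Returns true when a gap/padding value sits on the scale.
function isOnScale(px: number): boolean {
  return SPACING_SCALE.includes(px);
}

// Snaps an off-scale value to the nearest allowed step, so
// "almost the same" spacings collapse into one repeated value.
function snapToScale(px: number): number {
  return SPACING_SCALE.reduce((best, step) =>
    Math.abs(step - px) < Math.abs(best - px) ? step : best
  );
}

console.log(isOnScale(12));   // true
console.log(isOnScale(14));   // false
console.log(snapToScale(18)); // 16
```

A review pass could run `snapToScale` over computed gaps and flag every value where `isOnScale` is false, forcing either a snap or a documented reason for the exception.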
-
-## References
-- Icon rules and "intuitive refined" guidance: `references/icons.md`
-- Review output template and scoring: `references/review-template.md`
-- Expanded checklists (use when needed): `references/checklists.md`
-- Design Psychology (affordance, signifiers, mapping, constraints, error types, conceptual models): `references/design-psych.md`
diff --git a/.claude/skills/oiloil-ui-ux-guide/agents/openai.yaml b/.claude/skills/oiloil-ui-ux-guide/agents/openai.yaml
deleted file mode 100644
index 88a8983fa..000000000
--- a/.claude/skills/oiloil-ui-ux-guide/agents/openai.yaml
+++ /dev/null
@@ -1,4 +0,0 @@
-interface:
-  display_name: "OilOil UI/UX Guide"
-  short_description: "Modern clean UI/UX review checklist"
-  default_prompt: "You are a modern-minimal UI/UX review assistant. Output concise, actionable guidance, or review results prioritized as P0/P1/P2. Focus on CRAP and the core UX principles (task first, information architecture, feedback, consistency, affordance, error prevention, cognitive load). Reduce copy by default and avoid flat hint sprawl: use layered hints (L0 keeps only what is required to complete the task; L1 gives one nearby line only for high-risk inputs; L2 expands explanations on demand; L3 reports results after the action), and enforce a strict copy budget. Emoji icons are forbidden; recommend intuitive, refined icons from a single icon set. Output must be directly actionable."
diff --git a/.claude/skills/oiloil-ui-ux-guide/index.html b/.claude/skills/oiloil-ui-ux-guide/index.html
deleted file mode 100644
index 9c804680d..000000000
--- a/.claude/skills/oiloil-ui-ux-guide/index.html
+++ /dev/null
@@ -1,2117 +0,0 @@
-<title>OilOil UI/UX Guide Skill</title>
-OilOil UI/UX Guide Skill
-guide / review · GitHub
-
-Make UI reviews converge quickly, and land the fixes directly in the interface.
-
-`guide` gives you a clear set of design rules; `review` gives you a prioritized fix list.
-The three examples below compare before/after interfaces for the same business goal and
-turn the differences into visible results: a clearer primary task, more restrained hints,
-more complete states, and more discoverable actions.
-Quick triggers
-
-Name the skill directly and state the page goal; the output will be more stable.
-
-guide:
-  Use the guide mode of $oiloil-ui-ux-guide.
-  Page type: B2B settings page.
-  Output: concise do/don't rules.
-
-review:
-  Use the review mode of $oiloil-ui-ux-guide.
-  Goal: improve the first-time configuration completion rate.
-  Output: P0/P1/P2 + fixes + acceptance criteria.
-
-What you get:
-  Prioritized — P0/P1/P2 tiers, so the issues that most affect outcomes are fixed first.
-  Actionable — recommendations land on layout/components/copy/states and map directly to code changes.
-  Verifiable — every fix ships with acceptance checkpoints, avoiding "changed but not improved".
-
-Focus: task, state, result, risk · Constraint: minimal copy + layered hints ·
-Icons: no emoji, one consistent icon set · Output: concise bullets, no long paragraphs
-Rule set (condensed)
-
-Don't aim for volume — keep only the rules that most reliably stabilize results:
-pass these six first, then do visual polish.
-
-Six rules shared by development and review:
-1) Primary task first — the primary action is identifiable within 3 seconds.
-2) Layered hints — essential information nearby first, details expanded on demand.
-3) Closed-loop states — loading/empty/error/success/permission each offer a next step.
-4) Spacing on a scale — 4/8/12/16/24/32/40/48.
-5) Repetition and alignment — identical components keep identical spacing, no drift.
-6) Less nesting — containers must serve a function; no decorative layering.
-
-If one rule needs many exceptions, first check the information architecture and the
-default values, instead of piling on more hint copy.
-
-Suggested order: pass the rule set first, then run the design-psychology diagnosis,
-and finally fill in states and details (icons/motion).
-Example 1: settings form
-
-The goal is the same — complete the first-time configuration; the differences come mainly
-from hint layering and the hierarchy of the primary action.
-
-Interface (before):
-  Notification policy settings — Unsaved
-  Email — name@example.com
-    Please enter an email you check regularly; a wrong format will make notifications fail.
-    The email is used for alerts, system messages, and account-security notices.
-  Send frequency — Send immediately
-    Immediate sending is recommended.
-    You can also choose hourly or daily digests.
-
-Interface (after):
-  Notification policy settings — Step 1 / 1
-  Email (required) — name@example.com
-    Used only for alerts and account-security notifications.
-  Send frequency — Immediately / Hourly / Daily
-
-Visible changes: a clearer primary CTA; more restrained hints; an explicit result and
-next step after saving.
-
-Review output (excerpt) — P0 / P1 / P2:
-  P0 — The primary action and the secondary button carry equal weight; users misclick or hesitate.
-  P1 — Hints are repetitive and excessive; scanning cost is high and key constraints don't stand out.
-  P2 — No result feedback after saving; users can't tell whether it took effect.
-  Fixes land on: button hierarchy, hint layering, success/failure states.
-
-Acceptance checkpoints:
-  Primary action — "Save policy" is identifiable as the primary action within 3 seconds; Cancel is secondary.
-  Hints — a single nearby hint covers the key risk; no duplicated explanations.
-  Feedback — after saving, the result appears with a next-step entry (e.g. view notification history).
-  Checkpoints like these let you quickly judge whether a change actually made things better.
-
-Tangible benefit: reviews turn from generic advice into priorities plus verifiable fixes.
-Example 2: dashboard
-
-The goal is the same — read the data; the differences come from the primary/secondary
-hierarchy and the visibility of key risks.
-
-Interface (before): Operations overview — past 7 days
-  Active users 24,120 · New signups 1,632 · Payment rate 5.1% · Churn rate 3.2% ·
-  Tickets 482 · Alerts 19
-
-Interface (after): Operations overview — past 7 days
-  Core health score 91
-  Active users 24,120 · New signups 1,632 · Payment rate 5.1% · Alerts 19 · Tickets 482
-
-Visible changes: the key status is seen first; risks have an explicit entry point;
-the next action is clearer.
-
-Checklist-style rules (guide output snippet) — Do / Don't:
-  Do — keep a single primary metric as the visual focal point; demote the other metrics
-       to supporting information.
-  Do — make key risks visible, with a clickable next-step entry.
-  Don't — fill the first screen with equal-weight numbers; users cannot tell what matters.
-
-Acceptance checkpoints:
-  3-second judgment — users can say within 3 seconds whether the current state is healthy.
-  Risk entry — alerts/anomalies have a clear entry; clicking shows the cause and the handling path.
-  Stable hierarchy — the same metric types are presented consistently across pages,
-  with no ad-hoc styles.
-
-Tangible benefit: one rule set can be reused across multiple dashboard pages,
-avoiding "one style per page".
-Example 3: list page
-
-The goal is the same — open a project; the differences come from action discoverability
-and consistent action entries.
-
-Interface (before): Project list — 32 projects
-  Payment-link monitoring … · Growth experiment board … · Support auto-assignment …
-
-Interface (after): Project list — 32 projects
-  Payment-link monitoring · Growth experiment board · Support auto-assignment
-  Bulk edit · Export report · More actions
-
-Visible changes: the primary action is explicit; long-tail actions are deferred;
-list behavior is consistent.
-
-Trigger example (prompt):
-  Use the review mode of $oiloil-ui-ux-guide.
-  Page: project list (list-detail).
-  Goal: reduce misclicks and improve the discoverability of "view project".
-  Output: P0/P1/P2 + fixes + acceptance checkpoints.
-
-Output excerpt:
-  P0 — primary actions are hidden behind "…"; users are unsure what they can do.
-  P1 — the list lacks a consistent entry; inline actions and click behavior are inconsistent.
-  P2 — long-tail actions should be deferred into "More actions" to avoid first-screen noise.
-  Fixes land on: discoverability of buttons/links, row-interaction consistency,
-  and grouping of long-tail actions.
-
-Tangible benefit: the same problem is reliably decomposed into priorities and actionable steps.
-Design psychology (how it lands in interface details)
-
-Turn abstract concepts into checkable interface signals: what can be done, how to do it,
-what happened after doing it, and how to recover when it goes wrong.
-
-Visual annotations (toggle the switches to see the changes) — design techniques and
-diagnostic labels, demonstrated on the notification-policy form (Email (required),
-Send frequency: Immediately / Hourly / Daily):
-  Affordance — the primary action is clearly visible; users know they can save.
-  Signifiers — the button label is a verb, and the visual hierarchy is explicit.
-  Constraints — key risks are hinted right where they apply, reducing input and comprehension errors.
-  Knowledge in the world — constraints sit at the decision point; users don't have to remember them.
-  Feedback — result + next step, reducing the "did it take effect?" uncertainty.
-  Mapping — controls sit next to the content they control, cutting the cost of finding them.
-  Modes — expressed through control state (view/edit toggle), avoiding extra "status label" noise.
-  Gulf of execution — if the primary CTA isn't prominent, users hesitate over where to click.
-  Gulf of evaluation — without result feedback, users click repeatedly or give up.
-  Conceptual model — consistent titles and field names help users build a correct understanding.
-  Tip: the fewer the toggles, the easier it is to see which signal matters most.
-
-Slip vs Mistake (two error types need two kinds of fixes):
-  Slip — the goal was right, but the click went wrong. Fix direction: undo, safer targets,
-  deliberate confirmation. Demo: "Payment-link monitoring" deleted — Undo delete.
-  Mistake — the understanding was wrong: the user thought "delete" merely removed the item
-  from the list. Fix direction: clearer naming, consequences, and conceptual model.
-  Demo: deletion is irreversible and removes the project and its data —
-  learn the difference: disable vs delete.
-
-In review mode the skill requires this kind of diagnosis to be labeled, instead of
-generic advice like "make the button bigger".
-System states (loading / empty / error / success / permission)
-
-The same interface must remain understandable, operable, and recoverable in every state.
-
-State switcher demos:
-  Loading — Project list, loading. Key point: keep the layout stable, avoid jumps,
-  and disable duplicate submission when necessary.
-  Empty — "No projects here yet. Create a project first, or import an existing
-  configuration." Key point: explain what "empty" means and offer the next action.
-  Error — "Fetch failed: cannot reach the server. Check your network and retry, or try
-  again later." Key point: error messages must be actionable (what happened + how to fix it).
-  Success — "Saved: the policy is now in effect. Verify delivery results in Notification
-  history." Key point: success must also answer "what changed + what to do next".
-  Permission — "You don't have access. The Alert admin role is required to view and edit
-  rules." Key point: explain the reason and show the path to obtain access.
-
-Tangible benefit: complete states mean less confusion, fewer repeated clicks,
-and fewer "stuck" moments.
-
-Which states the review output covers — checklist:
-  Must have — loading / empty / error / success / permission states, each with an
-  actionable next step.
-  Avoid — replacing key page states with only a global toast; mixing state and content
-  until the page becomes hard to read.
-  Bonus — skeletons keep the layout stable; user input is preserved after a failure.
-
-Common gaps:
-  Gulf of evaluation — no "in effect / not in effect" feedback after saving, so users click repeatedly.
-  Empty state — only says "no data", with no reason and no next step.
-  Permission — only says "403", without naming the required permission or where to request it.
-
-Tangible benefit: "state gaps" become checkable items, making reviews more stable.
-Detail rules to round things out: icons and motion
-
-A few key details decide whether the product "looks professional" — and whether it is
-effortless to use.
-
-Icons (one consistent set; don't rely on icons alone for primary actions):
-  Safer — primary actions use text, or text + icon, lowering the risk of misreading.
-  Riskier — icon-only buttons suit universal actions (search/close/more), not
-  primary-action semantics.
-
-Motion (explains hierarchy and state changes):
-  Opening a detail panel — motion is used only to explain hierarchy/state, never as decoration.
-  Editor canvas — stable, not floating: "This is your main working area (kept stable)."
-
-OilOil UI/UX Guide Skill · Apache 2.0 — Focus on task, state, result, risk.
    - - - - diff --git a/.claude/skills/oiloil-ui-ux-guide/references/checklists.md b/.claude/skills/oiloil-ui-ux-guide/references/checklists.md deleted file mode 100644 index 8e267426e..000000000 --- a/.claude/skills/oiloil-ui-ux-guide/references/checklists.md +++ /dev/null @@ -1,111 +0,0 @@ -# Expanded Checklists (Load Only When Needed) - -Use these checklists when the task needs more detail than the SKILL.md minimal principles. - -## Universal states - -- Loading: - - Avoid layout jumps (skeleton/placeholder with stable height) - - Prevent double-submit; show progress when waiting is noticeable -- Empty: - - Explain what “empty” means - - Provide a next step (create/import/change filters) -- Error: - - Message: what happened + why (if safe) + what to do - - Preserve user input where possible -- Success: - - Confirm outcome + provide next action (view, undo, share) -- Permission: - - Explain why access is blocked + where to request access - -## Affordance (示能性) & signifiers (指示符) - -- Primary actions look like actions: - - Use a real primary button; label with a verb (avoid OK/Done). - - Icon-only is reserved for universally-known actions (search/close/more/settings). -- Links look like links: - - Ensure a clear link signifier (underline or strong hover/contrast), not color-only subtlety. -- Clickable surfaces communicate clickability: - - Web: for custom clickable surfaces (non-`button`/`a`), use `cursor: pointer` and a visible focus style. - - Card/list rows that open should have hover + chevron/affordance cue (or a clear “View” action). - - Do not make plain body text behave like a button. -- Controls match outcomes (mapping): - - Place controls near what they affect; keep directionality intuitive. - - Group controls with the content they control (filters above list; section actions in section header). 
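The clickable-surface rules above can be sketched as a small helper. This is an illustrative sketch, not an API from this repo; `ElementLike` stands in for a real `HTMLElement` so the idea is visible outside a browser:

```typescript
// Hypothetical helper: give a custom clickable surface (non-button, non-link)
// the signifiers the checklist requires.

interface ElementLike {
  attrs: Record<string, string>;
  style: Record<string, string>;
}

function makeClickable(el: ElementLike): ElementLike {
  el.attrs["role"] = "button";    // announced as an action, not plain text
  el.attrs["tabindex"] = "0";     // reachable by keyboard
  el.style["cursor"] = "pointer"; // visible clickability cue on hover
  // A visible focus style is also required; in real code that belongs in
  // CSS via :focus-visible rather than inline styles.
  return el;
}
```

In real code the element would also need `Enter`/`Space` key handlers, which is one more reason a native `button` is usually the better choice.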
- -## Lists (table / cards) - -- Scannability: - - One primary column/field; secondary details visually muted - - Consistent row height and alignment; avoid jagged columns -- Controls: - - Search/filter/sort appear before the list, not after - - Selected filters are visible and removable -- Row actions: - - Keep high-frequency actions visible - - Hide long-tail actions under a “more” menu (but not the primary action) - -## Detail pages - -- Clear page title that matches the object -- Key facts near the top; secondary info below or collapsed -- Actions grouped by intent (primary, secondary, destructive) -- Related items and history: grouped and titled (avoid endless scroll dumps) - -## Forms (create/edit/config) - -- Reduce thinking: - - Use defaults and reasonable prefill - - Use presets when choices are complex -- Prevent errors: - - Inline validation; format hints before submit - - Don’t require users to memorize constraints -- Layout: - - Group fields by meaning; use headings (not just spacing) - - Keep labels consistent (position + style) across the product -- Submission: - - One primary submit action - - Disabled state and clear error placement - -## Settings / Preferences - -- Group by mental model (account, security, notifications, integrations, appearance) -- For each setting: clear label + short value explanation only if needed -- Destructive actions separated and clearly labeled; never hide them among benign toggles - -## Motion (animation) review checklist (modern, clean, creator-friendly) - -- Purpose: - - Each animation explains hierarchy (panel/overlay) or state change (feedback). If not, remove or downgrade. -- Vocabulary: - - Prefer fade; then small translate+fade; allow tiny scale+fade for overlays. Avoid “showy” motion. -- Canvas stability: - - Keep the work surface stable (canvas/editor area). Move panels/overlays, not the core content. 
-- Responsiveness: - - Interaction feedback (hover/pressed) feels immediate; UI never makes users wait for animation to proceed. -- Consistency: - - Same component type uses the same motion pattern across the product. - - Enter/exit feel related (no random directions or mixed styles). -- Stability: - - No layout shift/jank during loading or transitions; use skeleton/placeholder to preserve layout. -- Red flags (avoid): - - Continuous decorative motion (breathing backgrounds, floating cards). - - Large bouncy/elastic overshoot that steals attention. - - Big page-level transitions for routine navigation. - -## Dashboards - -- Decide the “story”: what decision should the user make here? -- Keep top KPI set small; avoid wall-of-numbers -- Make time range and filters obvious and persistent -- Provide drill-down paths (click-through) for every key metric - -## Copy rules (minimal style) - -- Prefer short labels over helper paragraphs. -- Use helper text only when it: - - prevents an error - - clarifies a non-obvious term - - explains consequences (especially destructive actions) - - builds trust (privacy, payment, external side effects) -- Replace vague verbs ("Do", "OK") with concrete actions ("Create", "Save", "Publish"). diff --git a/.claude/skills/oiloil-ui-ux-guide/references/design-psych.md b/.claude/skills/oiloil-ui-ux-guide/references/design-psych.md deleted file mode 100644 index cce3d3170..000000000 --- a/.claude/skills/oiloil-ui-ux-guide/references/design-psych.md +++ /dev/null @@ -1,97 +0,0 @@ -# Design Psychology (inspired by *The Design of Everyday Things*) - -Keep this as a compact reference. Use it when explaining *why* a design is confusing and how to fix it. -This is a paraphrased summary, not a verbatim excerpt. - -## Affordances (示能性 / 可供性) - -- An affordance is what an object *allows* a person to do. -- In UI, you mostly manage **perceived affordances**: what people *think* they can do. 
- -Practical rule: -- If an action is important, it must be discoverable without hover, tooltips, or prior training. - -## Signifiers (指示符) - -- Signifiers are the cues that indicate possible actions. - -Examples in UI: -- Button shape, link styling, icons + labels, hover/focus states, cursor changes, microcopy. - -Practical rule: -- Use the smallest signifier that removes ambiguity. Default to labels for non-obvious actions. - -## Mapping (映射) / Natural mapping - -- Mapping is the relationship between controls and their effects. -- Natural mapping means the layout/relationship mirrors the real-world mental model. - -Practical rules: -- Put controls near what they control. -- Use spatial grouping to show what belongs together. -- For multi-part objects, align actions with the part they affect (per-item actions next to the item). - -## Constraints (约束) - -- Constraints limit possible actions, preventing errors and reducing thinking. - -Types you can use in UI: -- Physical constraints (not literal in UI, but you can simulate via disabled states) -- Logical constraints (only valid combinations are allowed) -- Semantic constraints (meaning-based limits) -- Cultural constraints (conventions users expect) - -Practical rules: -- Prefer constraints + defaults over warnings. -- If you must block an action, explain the requirement and provide a path to satisfy it. - -## Conceptual model (概念模型) - -- Users form an internal model of how the system works. -- Your UI should make the correct model obvious. - -Practical rules: -- Use consistent nouns/labels for objects. -- Use consistent verbs for actions. -- Show cause-effect clearly (do X -> see Y change). - -## Feedback (反馈) - -- Feedback tells people what happened after an action. - -Practical rules: -- Always provide immediate feedback for interaction (press/hover/loading). -- If an operation takes time, show progress or a clear waiting state. -- After success/failure, clearly state the outcome and the next step. 
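The feedback rules above (immediate signal, then outcome plus next step) can be wrapped once and reused across actions. A hedged sketch; the `FeedbackSink` interface and all names are assumptions for illustration:

```typescript
// Every wrapped action reports an immediate "working" signal, then a final
// outcome with a next step, so neither gulf of evaluation opens up.

interface FeedbackSink {
  notify(message: string): void;
}

async function withFeedback<T>(
  sink: FeedbackSink,
  label: string,
  action: () => Promise<T>,
  nextStep: string,
): Promise<T | undefined> {
  sink.notify(`${label}: working…`); // immediate feedback
  try {
    const result = await action();
    sink.notify(`${label}: done. ${nextStep}`); // outcome + next step
    return result;
  } catch (e) {
    sink.notify(`${label}: failed (${(e as Error).message}). Try again.`);
    return undefined;
  }
}
```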
- -## Gulfs of execution & evaluation (执行鸿沟 / 评估鸿沟) - -- Execution gulf: user can’t figure out how to do what they want. -- Evaluation gulf: user can’t tell what happened or what state the system is in. - -Practical diagnostic: -- If users hesitate before acting: reduce execution gulf (clear CTA, clearer signifiers, simpler choices). -- If users repeat actions / rage-click: reduce evaluation gulf (loading, disabled, progress, clearer results). - -## Slips vs mistakes (失误 vs 错误) - -- Slip: the goal is correct, the action execution goes wrong (fat-finger, wrong click). -- Mistake: the mental model/goal is wrong (user thinks it works differently). - -Practical rules: -- Slips: add undo, confirmations for destructive actions, safer hit targets, better spacing. -- Mistakes: fix labeling, mapping, and conceptual model; add just-enough explanation. - -## Knowledge in the world vs in the head (外部知识 vs 头脑知识) - -- Good design puts knowledge in the world: visible options, clear labels, previews, examples. - -Practical rule: -- Don’t force users to remember constraints. Surface them at the point of decision. - -## Modes (模式) and mode errors - -- Modes mean the same action produces different results depending on state. - -Practical rule: -- Avoid modes; if unavoidable, make mode state extremely visible and easy to exit. diff --git a/.claude/skills/oiloil-ui-ux-guide/references/icons.md b/.claude/skills/oiloil-ui-ux-guide/references/icons.md deleted file mode 100644 index f315004ec..000000000 --- a/.claude/skills/oiloil-ui-ux-guide/references/icons.md +++ /dev/null @@ -1,41 +0,0 @@ -# Icons (No Emoji, Modern Minimal) - -## Hard rules - -- Do not use emoji as icons (or decoration). -- Use one icon family across the product. Do not mix outlined/filled/3D/emoji styles. -- Prefer obvious meanings over clever metaphors. If an icon can be misunderstood, add a text label. 
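The hard rules above can be enforced mechanically, for example in a design-system lint step. An illustrative sketch, assuming a small whitelist of universally-known actions:

```typescript
// Icon-only is allowed only for a small set of universally-known actions;
// everything else, and anything destructive, needs a text label.
// The action names are assumptions for the example.

const UNIVERSAL_ICON_ACTIONS = new Set([
  "search", "close", "more", "settings", "back",
]);

interface ActionSpec {
  action: string;
  destructive?: boolean;
}

function requiresLabel(spec: ActionSpec): boolean {
  if (spec.destructive) return true; // high-stakes: always explicit wording
  return !UNIVERSAL_ICON_ACTIONS.has(spec.action);
}
```

A check like this keeps the "icon-only is reserved" rule from eroding one pull request at a time.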
- -## “Intuitive + refined” checklist - -- **Style consistency**: same stroke weight (outline) or same fill style (filled). -- **Sizes**: standardize on 16/20/24 (or your system sizes); avoid random sizes per screen. -- **Optical alignment**: align visually (icon bounding boxes lie; nudge when needed). -- **Touch targets**: icon buttons still need adequate hit area; do not shrink interactive area to the glyph. -- **Labels**: primary actions should be text or text+icon; icon-only is reserved for universally-known actions. -- **Tooltips**: tooltips are support, not the primary way to understand an action. - -## Prefer text over icons when - -- The action is uncommon in your product. -- The icon is domain-specific (users won’t share the same mental model). -- The action is destructive or high-stakes (use explicit wording). - -## Suggested icon sets (pick one; do not mix) - -- Lucide / Feather-style outline icons (web-friendly) -- Material Symbols (outlined or rounded; pick one) -- SF Symbols (Apple platforms) - -## Common mappings (use cautiously) - -- Search: magnifier -- Filter: funnel -- Settings: gear -- More actions: kebab (vertical three dots) -- Close: x -- Back: left arrow -- Info: i in circle (use sparingly; don’t turn UI into a tooltip museum) - -If an icon is not instantly clear, prefer a short label instead of inventing a new icon metaphor. - diff --git a/.claude/skills/oiloil-ui-ux-guide/references/review-template.md b/.claude/skills/oiloil-ui-ux-guide/references/review-template.md deleted file mode 100644 index 603bd1bb2..000000000 --- a/.claude/skills/oiloil-ui-ux-guide/references/review-template.md +++ /dev/null @@ -1,60 +0,0 @@ -# Review Output Template (Concise) - -Use this template for `review` outputs. Keep each bullet short and implementable. 
- -## Context - -- Surface: (web/app) + page type (list/detail/form/dashboard/settings) -- Primary user task: -- Primary CTA: -- Constraints/assumptions: - -## Diagnosis (pick one per major issue) - -- Execution gulf (执行鸿沟): user can’t find *how* to do it (entry/signifier/IA/choices) -- Evaluation gulf (评估鸿沟): user can’t tell *what happened* (state/feedback/results) - -- Slip (失误): goal is correct, execution goes wrong (misclick, fat-finger, wrong target) -- Mistake (错误): mental model is wrong (labels/mapping/conceptual model misleads) - -## Findings (prioritized) - -### P0 (blocker) - -- Problem: - - Evidence: - - Diagnosis: execution gulf / evaluation gulf; slip / mistake - - Why it hurts: - - Fix (specific, implementable): - - Acceptance check: - -### P1 (important) - -- Problem: - - Evidence: - - Diagnosis: execution gulf / evaluation gulf; slip / mistake - - Fix: - - Acceptance check: - -### P2 (polish) - -- Problem: - - Diagnosis: execution gulf / evaluation gulf (optional) - - Fix: - -## Quick wins (optional) - -- 3 small changes that noticeably improve clarity or polish. 
- -## Checklist to verify (copy/paste) - -- Task clarity: primary CTA obvious and singular -- IA: groups and headings match mental model -- Feedback: loading/empty/error/success states present and helpful -- Consistency: components and wording stable across screens -- Affordance: clickable elements look clickable; icon-only is rare -- Errors: prevention + recovery + actionable messages -- Cognitive load: defaults and progressive disclosure reduce thinking -- CRAP: hierarchy, alignment, spacing, grouping feel intentional -- Modern minimal: restrained color, spacious layout, minimal copy -- Icons: no emoji; consistent set; labels where ambiguity exists diff --git a/.claude/skills/parallel-debugging/SKILL.md b/.claude/skills/parallel-debugging/SKILL.md deleted file mode 100644 index 8c84fa3b1..000000000 --- a/.claude/skills/parallel-debugging/SKILL.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -name: parallel-debugging -description: Debug complex issues using competing hypotheses with parallel investigation, evidence collection, and root cause arbitration. Use this skill when debugging bugs with multiple potential causes, performing root cause analysis, or organizing parallel investigation workflows. -version: 1.0.2 ---- - -# Parallel Debugging - -Framework for debugging complex issues using the Analysis of Competing Hypotheses (ACH) methodology with parallel agent investigation. - -## When to Use This Skill - -- Bug has multiple plausible root causes -- Initial debugging attempts haven't identified the issue -- Issue spans multiple modules or components -- Need systematic root cause analysis with evidence -- Want to avoid confirmation bias in debugging - -## Hypothesis Generation Framework - -Generate hypotheses across 6 failure mode categories: - -### 1. Logic Error - -- Incorrect conditional logic (wrong operator, missing case) -- Off-by-one errors in loops or array access -- Missing edge case handling -- Incorrect algorithm implementation - -### 2. 
Data Issue - -- Invalid or unexpected input data -- Type mismatch or coercion error -- Null/undefined/None where value expected -- Encoding or serialization problem -- Data truncation or overflow - -### 3. State Problem - -- Race condition between concurrent operations -- Stale cache returning outdated data -- Incorrect initialization or default values -- Unintended mutation of shared state -- State machine transition error - -### 4. Integration Failure - -- API contract violation (request/response mismatch) -- Version incompatibility between components -- Configuration mismatch between environments -- Missing or incorrect environment variables -- Network timeout or connection failure - -### 5. Resource Issue - -- Memory leak causing gradual degradation -- Connection pool exhaustion -- File descriptor or handle leak -- Disk space or quota exceeded -- CPU saturation from inefficient processing - -### 6. Environment - -- Missing runtime dependency -- Wrong library or framework version -- Platform-specific behavior difference -- Permission or access control issue -- Timezone or locale-related behavior - -## Evidence Collection Standards - -### What Constitutes Evidence - -| Evidence Type | Strength | Example | -| ----------------- | -------- | --------------------------------------------------------------- | -| **Direct** | Strong | Code at `file.ts:42` shows `if (x > 0)` should be `if (x >= 0)` | -| **Correlational** | Medium | Error rate increased after commit `abc123` | -| **Testimonial** | Weak | "It works on my machine" | -| **Absence** | Variable | No null check found in the code path | - -### Citation Format - -Always cite evidence with file:line references: - -``` -**Evidence**: The validation function at `src/validators/user.ts:87` -does not check for empty strings, only null/undefined. This allows -empty email addresses to pass validation. 
-``` - -### Confidence Levels - -| Level | Criteria | -| ------------------- | ----------------------------------------------------------------------------------- | -| **High (>80%)** | Multiple direct evidence pieces, clear causal chain, no contradicting evidence | -| **Medium (50-80%)** | Some direct evidence, plausible causal chain, minor ambiguities | -| **Low (<50%)** | Mostly correlational evidence, incomplete causal chain, some contradicting evidence | - -## Result Arbitration Protocol - -After all investigators report: - -### Step 1: Categorize Results - -- **Confirmed**: High confidence, strong evidence, clear causal chain -- **Plausible**: Medium confidence, some evidence, reasonable causal chain -- **Falsified**: Evidence contradicts the hypothesis -- **Inconclusive**: Insufficient evidence to confirm or falsify - -### Step 2: Compare Confirmed Hypotheses - -If multiple hypotheses are confirmed, rank by: - -1. Confidence level -2. Number of supporting evidence pieces -3. Strength of causal chain -4. 
Absence of contradicting evidence - -### Step 3: Determine Root Cause - -- If one hypothesis clearly dominates: declare as root cause -- If multiple hypotheses are equally likely: may be compound issue (multiple contributing causes) -- If no hypotheses confirmed: generate new hypotheses based on evidence gathered - -### Step 4: Validate Fix - -Before declaring the bug fixed: - -- [ ] Fix addresses the identified root cause -- [ ] Fix doesn't introduce new issues -- [ ] Original reproduction case no longer fails -- [ ] Related edge cases are covered -- [ ] Relevant tests are added or updated diff --git a/.claude/skills/parallel-debugging/references/hypothesis-testing.md b/.claude/skills/parallel-debugging/references/hypothesis-testing.md deleted file mode 100644 index fe5da1363..000000000 --- a/.claude/skills/parallel-debugging/references/hypothesis-testing.md +++ /dev/null @@ -1,120 +0,0 @@ -# Hypothesis Testing Reference - -Task templates, evidence formats, and arbitration decision trees for parallel debugging. - -## Hypothesis Task Template - -```markdown -## Hypothesis Investigation: {Hypothesis Title} - -### Hypothesis Statement - -{Clear, falsifiable statement about the root cause} - -### Failure Mode Category - -{Logic Error | Data Issue | State Problem | Integration Failure | Resource Issue | Environment} - -### Investigation Scope - -- Files to examine: {file list or directory} -- Related tests: {test files} -- Git history: {relevant date range or commits} - -### Evidence Criteria - -**Confirming evidence** (if I find these, hypothesis is supported): - -1. {Observable condition 1} -2. {Observable condition 2} - -**Falsifying evidence** (if I find these, hypothesis is wrong): - -1. {Observable condition 1} -2. 
{Observable condition 2} - -### Report Format - -- Confidence: High/Medium/Low -- Evidence: list with file:line citations -- Causal chain: step-by-step from cause to symptom -- Recommended fix: if confirmed -``` - -## Evidence Report Template - -```markdown -## Investigation Report: {Hypothesis Title} - -### Verdict: {Confirmed | Falsified | Inconclusive} - -### Confidence: {High (>80%) | Medium (50-80%) | Low (<50%)} - -### Confirming Evidence - -1. `src/api/users.ts:47` — {description of what was found} -2. `src/middleware/auth.ts:23` — {description} - -### Contradicting Evidence - -1. `tests/api/users.test.ts:112` — {description of what contradicts} - -### Causal Chain (if confirmed) - -1. {First cause} → -2. {Intermediate effect} → -3. {Observable symptom} - -### Recommended Fix - -{Specific code change with location} - -### Additional Notes - -{Anything discovered that may be relevant to other hypotheses} -``` - -## Arbitration Decision Tree - -``` -All investigators reported? -├── NO → Wait for remaining reports -└── YES → Count confirmed hypotheses - ├── 0 confirmed - │ ├── Any medium confidence? → Investigate further - │ └── All low/falsified? → Generate new hypotheses - ├── 1 confirmed - │ └── High confidence? - │ ├── YES → Declare root cause, propose fix - │ └── NO → Flag as likely cause, recommend verification - └── 2+ confirmed - └── Are they related? - ├── YES → Compound issue (multiple contributing causes) - └── NO → Rank by confidence, declare highest as primary -``` - -## Common Hypothesis Patterns by Error Type - -### "500 Internal Server Error" - -1. Unhandled exception in request handler (Logic Error) -2. Database connection failure (Resource Issue) -3. Missing environment variable (Environment) - -### "Race condition / intermittent failure" - -1. Shared state mutation without locking (State Problem) -2. Async operation ordering assumption (Logic Error) -3. Cache staleness window (State Problem) - -### "Works locally, fails in production" - -1. 
Environment variable mismatch (Environment) -2. Different dependency version (Environment) -3. Resource limits (memory, connections) (Resource Issue) - -### "Regression after deploy" - -1. New code introduced bug (Logic Error) -2. Configuration change (Integration Failure) -3. Database migration issue (Data Issue) diff --git a/.claude/skills/playwright-cli/SKILL.md b/.claude/skills/playwright-cli/SKILL.md deleted file mode 100644 index 11bad2b87..000000000 --- a/.claude/skills/playwright-cli/SKILL.md +++ /dev/null @@ -1,278 +0,0 @@ ---- -name: playwright-cli -description: Automates browser interactions for web testing, form filling, screenshots, and data extraction. Use when the user needs to navigate websites, interact with web pages, fill forms, take screenshots, test web applications, or extract information from web pages. -allowed-tools: Bash(playwright-cli:*) ---- - -# Browser Automation with playwright-cli - -## Quick start - -```bash -# open new browser -playwright-cli open -# navigate to a page -playwright-cli goto https://playwright.dev -# interact with the page using refs from the snapshot -playwright-cli click e15 -playwright-cli type "page.click" -playwright-cli press Enter -# take a screenshot (rarely used, as snapshot is more common) -playwright-cli screenshot -# close the browser -playwright-cli close -``` - -## Commands - -### Core - -```bash -playwright-cli open -# open and navigate right away -playwright-cli open https://example.com/ -playwright-cli goto https://playwright.dev -playwright-cli type "search query" -playwright-cli click e3 -playwright-cli dblclick e7 -playwright-cli fill e5 "user@example.com" -playwright-cli drag e2 e8 -playwright-cli hover e4 -playwright-cli select e9 "option-value" -playwright-cli upload ./document.pdf -playwright-cli check e12 -playwright-cli uncheck e12 -playwright-cli snapshot -playwright-cli snapshot --filename=after-click.yaml -playwright-cli eval "document.title" -playwright-cli eval "el => el.textContent" e5 
-playwright-cli dialog-accept -playwright-cli dialog-accept "confirmation text" -playwright-cli dialog-dismiss -playwright-cli resize 1920 1080 -playwright-cli close -``` - -### Navigation - -```bash -playwright-cli go-back -playwright-cli go-forward -playwright-cli reload -``` - -### Keyboard - -```bash -playwright-cli press Enter -playwright-cli press ArrowDown -playwright-cli keydown Shift -playwright-cli keyup Shift -``` - -### Mouse - -```bash -playwright-cli mousemove 150 300 -playwright-cli mousedown -playwright-cli mousedown right -playwright-cli mouseup -playwright-cli mouseup right -playwright-cli mousewheel 0 100 -``` - -### Save as - -```bash -playwright-cli screenshot -playwright-cli screenshot e5 -playwright-cli screenshot --filename=page.png -playwright-cli pdf --filename=page.pdf -``` - -### Tabs - -```bash -playwright-cli tab-list -playwright-cli tab-new -playwright-cli tab-new https://example.com/page -playwright-cli tab-close -playwright-cli tab-close 2 -playwright-cli tab-select 0 -``` - -### Storage - -```bash -playwright-cli state-save -playwright-cli state-save auth.json -playwright-cli state-load auth.json - -# Cookies -playwright-cli cookie-list -playwright-cli cookie-list --domain=example.com -playwright-cli cookie-get session_id -playwright-cli cookie-set session_id abc123 -playwright-cli cookie-set session_id abc123 --domain=example.com --httpOnly --secure -playwright-cli cookie-delete session_id -playwright-cli cookie-clear - -# LocalStorage -playwright-cli localstorage-list -playwright-cli localstorage-get theme -playwright-cli localstorage-set theme dark -playwright-cli localstorage-delete theme -playwright-cli localstorage-clear - -# SessionStorage -playwright-cli sessionstorage-list -playwright-cli sessionstorage-get step -playwright-cli sessionstorage-set step 3 -playwright-cli sessionstorage-delete step -playwright-cli sessionstorage-clear -``` - -### Network - -```bash -playwright-cli route "**/*.jpg" --status=404 -playwright-cli 
route "https://api.example.com/**" --body='{"mock": true}' -playwright-cli route-list -playwright-cli unroute "**/*.jpg" -playwright-cli unroute -``` - -### DevTools - -```bash -playwright-cli console -playwright-cli console warning -playwright-cli network -playwright-cli run-code "async page => await page.context().grantPermissions(['geolocation'])" -playwright-cli tracing-start -playwright-cli tracing-stop -playwright-cli video-start -playwright-cli video-stop video.webm -``` - -## Open parameters -```bash -# Use specific browser when creating session -playwright-cli open --browser=chrome -playwright-cli open --browser=firefox -playwright-cli open --browser=webkit -playwright-cli open --browser=msedge -# Connect to browser via extension -playwright-cli open --extension - -# Use persistent profile (by default profile is in-memory) -playwright-cli open --persistent -# Use persistent profile with custom directory -playwright-cli open --profile=/path/to/profile - -# Start with config file -playwright-cli open --config=my-config.json - -# Close the browser -playwright-cli close -# Delete user data for the default session -playwright-cli delete-data -``` - -## Snapshots - -After each command, playwright-cli provides a snapshot of the current browser state. - -```bash -> playwright-cli goto https://example.com -### Page -- Page URL: https://example.com/ -- Page Title: Example Domain -### Snapshot -[Snapshot](.playwright-cli/page-2026-02-14T19-22-42-679Z.yml) -``` - -You can also take a snapshot on demand using `playwright-cli snapshot` command. - -If `--filename` is not provided, a new snapshot file is created with a timestamp. Default to automatic file naming, use `--filename=` when artifact is a part of the workflow result. 
- -## Browser Sessions - -```bash -# create new browser session named "mysession" with persistent profile -playwright-cli -s=mysession open example.com --persistent -# same with manually specified profile directory (use when requested explicitly) -playwright-cli -s=mysession open example.com --profile=/path/to/profile -playwright-cli -s=mysession click e6 -playwright-cli -s=mysession close # stop a named browser -playwright-cli -s=mysession delete-data # delete user data for persistent session - -playwright-cli list -# Close all browsers -playwright-cli close-all -# Forcefully kill all browser processes -playwright-cli kill-all -``` - -## Local installation - -In some cases user might want to install playwright-cli locally. If running globally available `playwright-cli` binary fails, use `npx playwright-cli` to run the commands. For example: - -```bash -npx playwright-cli open https://example.com -npx playwright-cli click e1 -``` - -## Example: Form submission - -```bash -playwright-cli open https://example.com/form -playwright-cli snapshot - -playwright-cli fill e1 "user@example.com" -playwright-cli fill e2 "password123" -playwright-cli click e3 -playwright-cli snapshot -playwright-cli close -``` - -## Example: Multi-tab workflow - -```bash -playwright-cli open https://example.com -playwright-cli tab-new https://example.com/other -playwright-cli tab-list -playwright-cli tab-select 0 -playwright-cli snapshot -playwright-cli close -``` - -## Example: Debugging with DevTools - -```bash -playwright-cli open https://example.com -playwright-cli click e4 -playwright-cli fill e7 "test" -playwright-cli console -playwright-cli network -playwright-cli close -``` - -```bash -playwright-cli open https://example.com -playwright-cli tracing-start -playwright-cli click e4 -playwright-cli fill e7 "test" -playwright-cli tracing-stop -playwright-cli close -``` - -## Specific tasks - -* **Request mocking** [references/request-mocking.md](references/request-mocking.md) -* **Running 
Playwright code** [references/running-code.md](references/running-code.md) -* **Browser session management** [references/session-management.md](references/session-management.md) -* **Storage state (cookies, localStorage)** [references/storage-state.md](references/storage-state.md) -* **Test generation** [references/test-generation.md](references/test-generation.md) -* **Tracing** [references/tracing.md](references/tracing.md) -* **Video recording** [references/video-recording.md](references/video-recording.md) diff --git a/.claude/skills/playwright-cli/references/request-mocking.md b/.claude/skills/playwright-cli/references/request-mocking.md deleted file mode 100644 index 9005fda67..000000000 --- a/.claude/skills/playwright-cli/references/request-mocking.md +++ /dev/null @@ -1,87 +0,0 @@ -# Request Mocking - -Intercept, mock, modify, and block network requests. - -## CLI Route Commands - -```bash -# Mock with custom status -playwright-cli route "**/*.jpg" --status=404 - -# Mock with JSON body -playwright-cli route "**/api/users" --body='[{"id":1,"name":"Alice"}]' --content-type=application/json - -# Mock with custom headers -playwright-cli route "**/api/data" --body='{"ok":true}' --header="X-Custom: value" - -# Remove headers from requests -playwright-cli route "**/*" --remove-header=cookie,authorization - -# List active routes -playwright-cli route-list - -# Remove a route or all routes -playwright-cli unroute "**/*.jpg" -playwright-cli unroute -``` - -## URL Patterns - -``` -**/api/users - Exact path match -**/api/*/details - Wildcard in path -**/*.{png,jpg,jpeg} - Match file extensions -**/search?q=* - Match query parameters -``` - -## Advanced Mocking with run-code - -For conditional responses, request body inspection, response modification, or delays: - -### Conditional Response Based on Request - -```bash -playwright-cli run-code "async page => { - await page.route('**/api/login', route => { - const body = route.request().postDataJSON(); - if (body.username 
=== 'admin') { - route.fulfill({ body: JSON.stringify({ token: 'mock-token' }) }); - } else { - route.fulfill({ status: 401, body: JSON.stringify({ error: 'Invalid' }) }); - } - }); -}" -``` - -### Modify Real Response - -```bash -playwright-cli run-code "async page => { - await page.route('**/api/user', async route => { - const response = await route.fetch(); - const json = await response.json(); - json.isPremium = true; - await route.fulfill({ response, json }); - }); -}" -``` - -### Simulate Network Failures - -```bash -playwright-cli run-code "async page => { - await page.route('**/api/offline', route => route.abort('internetdisconnected')); -}" -# Options: connectionrefused, timedout, connectionreset, internetdisconnected -``` - -### Delayed Response - -```bash -playwright-cli run-code "async page => { - await page.route('**/api/slow', async route => { - await new Promise(r => setTimeout(r, 3000)); - route.fulfill({ body: JSON.stringify({ data: 'loaded' }) }); - }); -}" -``` diff --git a/.claude/skills/playwright-cli/references/running-code.md b/.claude/skills/playwright-cli/references/running-code.md deleted file mode 100644 index 7d6d22fd0..000000000 --- a/.claude/skills/playwright-cli/references/running-code.md +++ /dev/null @@ -1,232 +0,0 @@ -# Running Custom Playwright Code - -Use `run-code` to execute arbitrary Playwright code for advanced scenarios not covered by CLI commands. 
- -## Syntax - -```bash -playwright-cli run-code "async page => { - // Your Playwright code here - // Access page.context() for browser context operations -}" -``` - -## Geolocation - -```bash -# Grant geolocation permission and set location -playwright-cli run-code "async page => { - await page.context().grantPermissions(['geolocation']); - await page.context().setGeolocation({ latitude: 37.7749, longitude: -122.4194 }); -}" - -# Set location to London -playwright-cli run-code "async page => { - await page.context().grantPermissions(['geolocation']); - await page.context().setGeolocation({ latitude: 51.5074, longitude: -0.1278 }); -}" - -# Clear geolocation override -playwright-cli run-code "async page => { - await page.context().clearPermissions(); -}" -``` - -## Permissions - -```bash -# Grant multiple permissions -playwright-cli run-code "async page => { - await page.context().grantPermissions([ - 'geolocation', - 'notifications', - 'camera', - 'microphone' - ]); -}" - -# Grant permissions for specific origin -playwright-cli run-code "async page => { - await page.context().grantPermissions(['clipboard-read'], { - origin: 'https://example.com' - }); -}" -``` - -## Media Emulation - -```bash -# Emulate dark color scheme -playwright-cli run-code "async page => { - await page.emulateMedia({ colorScheme: 'dark' }); -}" - -# Emulate light color scheme -playwright-cli run-code "async page => { - await page.emulateMedia({ colorScheme: 'light' }); -}" - -# Emulate reduced motion -playwright-cli run-code "async page => { - await page.emulateMedia({ reducedMotion: 'reduce' }); -}" - -# Emulate print media -playwright-cli run-code "async page => { - await page.emulateMedia({ media: 'print' }); -}" -``` - -## Wait Strategies - -```bash -# Wait for network idle -playwright-cli run-code "async page => { - await page.waitForLoadState('networkidle'); -}" - -# Wait for specific element -playwright-cli run-code "async page => { - await page.waitForSelector('.loading', { state: 
'hidden' }); -}" - -# Wait for function to return true -playwright-cli run-code "async page => { - await page.waitForFunction(() => window.appReady === true); -}" - -# Wait with timeout -playwright-cli run-code "async page => { - await page.waitForSelector('.result', { timeout: 10000 }); -}" -``` - -## Frames and Iframes - -```bash -# Work with iframe -playwright-cli run-code "async page => { - const frame = page.locator('iframe#my-iframe').contentFrame(); - await frame.locator('button').click(); -}" - -# Get all frames -playwright-cli run-code "async page => { - const frames = page.frames(); - return frames.map(f => f.url()); -}" -``` - -## File Downloads - -```bash -# Handle file download -playwright-cli run-code "async page => { - const [download] = await Promise.all([ - page.waitForEvent('download'), - page.click('a.download-link') - ]); - await download.saveAs('./downloaded-file.pdf'); - return download.suggestedFilename(); -}" -``` - -## Clipboard - -```bash -# Read clipboard (requires permission) -playwright-cli run-code "async page => { - await page.context().grantPermissions(['clipboard-read']); - return await page.evaluate(() => navigator.clipboard.readText()); -}" - -# Write to clipboard -playwright-cli run-code "async page => { - await page.evaluate(text => navigator.clipboard.writeText(text), 'Hello clipboard!'); -}" -``` - -## Page Information - -```bash -# Get page title -playwright-cli run-code "async page => { - return await page.title(); -}" - -# Get current URL -playwright-cli run-code "async page => { - return page.url(); -}" - -# Get page content -playwright-cli run-code "async page => { - return await page.content(); -}" - -# Get viewport size -playwright-cli run-code "async page => { - return page.viewportSize(); -}" -``` - -## JavaScript Execution - -```bash -# Execute JavaScript and return result -playwright-cli run-code "async page => { - return await page.evaluate(() => { - return { - userAgent: navigator.userAgent, - language: 
navigator.language, - cookiesEnabled: navigator.cookieEnabled - }; - }); -}" - -# Pass arguments to evaluate -playwright-cli run-code "async page => { - const multiplier = 5; - return await page.evaluate(m => document.querySelectorAll('li').length * m, multiplier); -}" -``` - -## Error Handling - -```bash -# Try-catch in run-code -playwright-cli run-code "async page => { - try { - await page.click('.maybe-missing', { timeout: 1000 }); - return 'clicked'; - } catch (e) { - return 'element not found'; - } -}" -``` - -## Complex Workflows - -```bash -# Login and save state -playwright-cli run-code "async page => { - await page.goto('https://example.com/login'); - await page.fill('input[name=email]', 'user@example.com'); - await page.fill('input[name=password]', 'secret'); - await page.click('button[type=submit]'); - await page.waitForURL('**/dashboard'); - await page.context().storageState({ path: 'auth.json' }); - return 'Login successful'; -}" - -# Scrape data from multiple pages -playwright-cli run-code "async page => { - const results = []; - for (let i = 1; i <= 3; i++) { - await page.goto(\`https://example.com/page/\${i}\`); - const items = await page.locator('.item').allTextContents(); - results.push(...items); - } - return results; -}" -``` diff --git a/.claude/skills/playwright-cli/references/session-management.md b/.claude/skills/playwright-cli/references/session-management.md deleted file mode 100644 index fac96066c..000000000 --- a/.claude/skills/playwright-cli/references/session-management.md +++ /dev/null @@ -1,169 +0,0 @@ -# Browser Session Management - -Run multiple isolated browser sessions concurrently with state persistence. 
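As a sketch of what isolation means in practice, two named sessions can hold completely independent cookie jars. This assumes the `-s` flag composes with every command (as the examples below suggest) and uses `cookie-set` / `cookie-list` as documented in the storage reference; the hostnames are placeholders.

```bash
# Two independent sessions pointed at the same site
playwright-cli -s=auth open https://app.example.com
playwright-cli -s=public open https://app.example.com

# Set a cookie only in the "auth" session
playwright-cli -s=auth cookie-set session_id abc123 --domain=app.example.com

# Each session reports its own cookie jar; "public" stays empty
playwright-cli -s=auth cookie-list
playwright-cli -s=public cookie-list

# Clean up both browsers
playwright-cli close-all
```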
- -## Named Browser Sessions - -Use `-s` flag to isolate browser contexts: - -```bash -# Browser 1: Authentication flow -playwright-cli -s=auth open https://app.example.com/login - -# Browser 2: Public browsing (separate cookies, storage) -playwright-cli -s=public open https://example.com - -# Commands are isolated by browser session -playwright-cli -s=auth fill e1 "user@example.com" -playwright-cli -s=public snapshot -``` - -## Browser Session Isolation Properties - -Each browser session has independent: -- Cookies -- LocalStorage / SessionStorage -- IndexedDB -- Cache -- Browsing history -- Open tabs - -## Browser Session Commands - -```bash -# List all browser sessions -playwright-cli list - -# Stop a browser session (close the browser) -playwright-cli close # stop the default browser -playwright-cli -s=mysession close # stop a named browser - -# Stop all browser sessions -playwright-cli close-all - -# Forcefully kill all daemon processes (for stale/zombie processes) -playwright-cli kill-all - -# Delete browser session user data (profile directory) -playwright-cli delete-data # delete default browser data -playwright-cli -s=mysession delete-data # delete named browser data -``` - -## Environment Variable - -Set a default browser session name via environment variable: - -```bash -export PLAYWRIGHT_CLI_SESSION="mysession" -playwright-cli open example.com # Uses "mysession" automatically -``` - -## Common Patterns - -### Concurrent Scraping - -```bash -#!/bin/bash -# Scrape multiple sites concurrently - -# Start all browsers -playwright-cli -s=site1 open https://site1.com & -playwright-cli -s=site2 open https://site2.com & -playwright-cli -s=site3 open https://site3.com & -wait - -# Take snapshots from each -playwright-cli -s=site1 snapshot -playwright-cli -s=site2 snapshot -playwright-cli -s=site3 snapshot - -# Cleanup -playwright-cli close-all -``` - -### A/B Testing Sessions - -```bash -# Test different user experiences -playwright-cli -s=variant-a open 
"https://app.com?variant=a" -playwright-cli -s=variant-b open "https://app.com?variant=b" - -# Compare -playwright-cli -s=variant-a screenshot -playwright-cli -s=variant-b screenshot -``` - -### Persistent Profile - -By default, browser profile is kept in memory only. Use `--persistent` flag on `open` to persist the browser profile to disk: - -```bash -# Use persistent profile (auto-generated location) -playwright-cli open https://example.com --persistent - -# Use persistent profile with custom directory -playwright-cli open https://example.com --profile=/path/to/profile -``` - -## Default Browser Session - -When `-s` is omitted, commands use the default browser session: - -```bash -# These use the same default browser session -playwright-cli open https://example.com -playwright-cli snapshot -playwright-cli close # Stops default browser -``` - -## Browser Session Configuration - -Configure a browser session with specific settings when opening: - -```bash -# Open with config file -playwright-cli open https://example.com --config=.playwright/my-cli.json - -# Open with specific browser -playwright-cli open https://example.com --browser=firefox - -# Open in headed mode -playwright-cli open https://example.com --headed - -# Open with persistent profile -playwright-cli open https://example.com --persistent -``` - -## Best Practices - -### 1. Name Browser Sessions Semantically - -```bash -# GOOD: Clear purpose -playwright-cli -s=github-auth open https://github.com -playwright-cli -s=docs-scrape open https://docs.example.com - -# AVOID: Generic names -playwright-cli -s=s1 open https://github.com -``` - -### 2. Always Clean Up - -```bash -# Stop browsers when done -playwright-cli -s=auth close -playwright-cli -s=scrape close - -# Or stop all at once -playwright-cli close-all - -# If browsers become unresponsive or zombie processes remain -playwright-cli kill-all -``` - -### 3. 
Delete Stale Browser Data - -```bash -# Remove old browser data to free disk space -playwright-cli -s=oldsession delete-data -``` diff --git a/.claude/skills/playwright-cli/references/storage-state.md b/.claude/skills/playwright-cli/references/storage-state.md deleted file mode 100644 index c856db5e4..000000000 --- a/.claude/skills/playwright-cli/references/storage-state.md +++ /dev/null @@ -1,275 +0,0 @@ -# Storage Management - -Manage cookies, localStorage, sessionStorage, and browser storage state. - -## Storage State - -Save and restore complete browser state including cookies and storage. - -### Save Storage State - -```bash -# Save to auto-generated filename (storage-state-{timestamp}.json) -playwright-cli state-save - -# Save to specific filename -playwright-cli state-save my-auth-state.json -``` - -### Restore Storage State - -```bash -# Load storage state from file -playwright-cli state-load my-auth-state.json - -# Reload page to apply cookies -playwright-cli open https://example.com -``` - -### Storage State File Format - -The saved file contains: - -```json -{ - "cookies": [ - { - "name": "session_id", - "value": "abc123", - "domain": "example.com", - "path": "/", - "expires": 1735689600, - "httpOnly": true, - "secure": true, - "sameSite": "Lax" - } - ], - "origins": [ - { - "origin": "https://example.com", - "localStorage": [ - { "name": "theme", "value": "dark" }, - { "name": "user_id", "value": "12345" } - ] - } - ] -} -``` - -## Cookies - -### List All Cookies - -```bash -playwright-cli cookie-list -``` - -### Filter Cookies by Domain - -```bash -playwright-cli cookie-list --domain=example.com -``` - -### Filter Cookies by Path - -```bash -playwright-cli cookie-list --path=/api -``` - -### Get Specific Cookie - -```bash -playwright-cli cookie-get session_id -``` - -### Set a Cookie - -```bash -# Basic cookie -playwright-cli cookie-set session abc123 - -# Cookie with options -playwright-cli cookie-set session abc123 --domain=example.com --path=/ 
--httpOnly --secure --sameSite=Lax - -# Cookie with expiration (Unix timestamp) -playwright-cli cookie-set remember_me token123 --expires=1735689600 -``` - -### Delete a Cookie - -```bash -playwright-cli cookie-delete session_id -``` - -### Clear All Cookies - -```bash -playwright-cli cookie-clear -``` - -### Advanced: Multiple Cookies or Custom Options - -For complex scenarios like adding multiple cookies at once, use `run-code`: - -```bash -playwright-cli run-code "async page => { - await page.context().addCookies([ - { name: 'session_id', value: 'sess_abc123', domain: 'example.com', path: '/', httpOnly: true }, - { name: 'preferences', value: JSON.stringify({ theme: 'dark' }), domain: 'example.com', path: '/' } - ]); -}" -``` - -## Local Storage - -### List All localStorage Items - -```bash -playwright-cli localstorage-list -``` - -### Get Single Value - -```bash -playwright-cli localstorage-get token -``` - -### Set Value - -```bash -playwright-cli localstorage-set theme dark -``` - -### Set JSON Value - -```bash -playwright-cli localstorage-set user_settings '{"theme":"dark","language":"en"}' -``` - -### Delete Single Item - -```bash -playwright-cli localstorage-delete token -``` - -### Clear All localStorage - -```bash -playwright-cli localstorage-clear -``` - -### Advanced: Multiple Operations - -For complex scenarios like setting multiple values at once, use `run-code`: - -```bash -playwright-cli run-code "async page => { - await page.evaluate(() => { - localStorage.setItem('token', 'jwt_abc123'); - localStorage.setItem('user_id', '12345'); - localStorage.setItem('expires_at', Date.now() + 3600000); - }); -}" -``` - -## Session Storage - -### List All sessionStorage Items - -```bash -playwright-cli sessionstorage-list -``` - -### Get Single Value - -```bash -playwright-cli sessionstorage-get form_data -``` - -### Set Value - -```bash -playwright-cli sessionstorage-set step 3 -``` - -### Delete Single Item - -```bash -playwright-cli sessionstorage-delete 
step -``` - -### Clear sessionStorage - -```bash -playwright-cli sessionstorage-clear -``` - -## IndexedDB - -### List Databases - -```bash -playwright-cli run-code "async page => { - return await page.evaluate(async () => { - const databases = await indexedDB.databases(); - return databases; - }); -}" -``` - -### Delete Database - -```bash -playwright-cli run-code "async page => { - await page.evaluate(() => { - indexedDB.deleteDatabase('myDatabase'); - }); -}" -``` - -## Common Patterns - -### Authentication State Reuse - -```bash -# Step 1: Login and save state -playwright-cli open https://app.example.com/login -playwright-cli snapshot -playwright-cli fill e1 "user@example.com" -playwright-cli fill e2 "password123" -playwright-cli click e3 - -# Save the authenticated state -playwright-cli state-save auth.json - -# Step 2: Later, restore state and skip login -playwright-cli state-load auth.json -playwright-cli open https://app.example.com/dashboard -# Already logged in! -``` - -### Save and Restore Roundtrip - -```bash -# Set up authentication state -playwright-cli open https://example.com -playwright-cli eval "() => { document.cookie = 'session=abc123'; localStorage.setItem('user', 'john'); }" - -# Save state to file -playwright-cli state-save my-session.json - -# ... later, in a new session ... - -# Restore state -playwright-cli state-load my-session.json -playwright-cli open https://example.com -# Cookies and localStorage are restored! 
-``` - -## Security Notes - -- Never commit storage state files containing auth tokens -- Add `*.auth-state.json` to `.gitignore` -- Delete state files after automation completes -- Use environment variables for sensitive data -- By default, sessions run in-memory mode which is safer for sensitive operations diff --git a/.claude/skills/playwright-cli/references/test-generation.md b/.claude/skills/playwright-cli/references/test-generation.md deleted file mode 100644 index 7a09df387..000000000 --- a/.claude/skills/playwright-cli/references/test-generation.md +++ /dev/null @@ -1,88 +0,0 @@ -# Test Generation - -Generate Playwright test code automatically as you interact with the browser. - -## How It Works - -Every action you perform with `playwright-cli` generates corresponding Playwright TypeScript code. -This code appears in the output and can be copied directly into your test files. - -## Example Workflow - -```bash -# Start a session -playwright-cli open https://example.com/login - -# Take a snapshot to see elements -playwright-cli snapshot -# Output shows: e1 [textbox "Email"], e2 [textbox "Password"], e3 [button "Sign In"] - -# Fill form fields - generates code automatically -playwright-cli fill e1 "user@example.com" -# Ran Playwright code: -# await page.getByRole('textbox', { name: 'Email' }).fill('user@example.com'); - -playwright-cli fill e2 "password123" -# Ran Playwright code: -# await page.getByRole('textbox', { name: 'Password' }).fill('password123'); - -playwright-cli click e3 -# Ran Playwright code: -# await page.getByRole('button', { name: 'Sign In' }).click(); -``` - -## Building a Test File - -Collect the generated code into a Playwright test: - -```typescript -import { test, expect } from '@playwright/test'; - -test('login flow', async ({ page }) => { - // Generated code from playwright-cli session: - await page.goto('https://example.com/login'); - await page.getByRole('textbox', { name: 'Email' }).fill('user@example.com'); - await 
page.getByRole('textbox', { name: 'Password' }).fill('password123'); - await page.getByRole('button', { name: 'Sign In' }).click(); - - // Add assertions - await expect(page).toHaveURL(/.*dashboard/); -}); -``` - -## Best Practices - -### 1. Use Semantic Locators - -The generated code uses role-based locators when possible, which are more resilient: - -```typescript -// Generated (good - semantic) -await page.getByRole('button', { name: 'Submit' }).click(); - -// Avoid (fragile - CSS selectors) -await page.locator('#submit-btn').click(); -``` - -### 2. Explore Before Recording - -Take snapshots to understand the page structure before recording actions: - -```bash -playwright-cli open https://example.com -playwright-cli snapshot -# Review the element structure -playwright-cli click e5 -``` - -### 3. Add Assertions Manually - -Generated code captures actions but not assertions. Add expectations in your test: - -```typescript -// Generated action -await page.getByRole('button', { name: 'Submit' }).click(); - -// Manual assertion -await expect(page.getByText('Success')).toBeVisible(); -``` diff --git a/.claude/skills/playwright-cli/references/tracing.md b/.claude/skills/playwright-cli/references/tracing.md deleted file mode 100644 index 7ce7babbd..000000000 --- a/.claude/skills/playwright-cli/references/tracing.md +++ /dev/null @@ -1,139 +0,0 @@ -# Tracing - -Capture detailed execution traces for debugging and analysis. Traces include DOM snapshots, screenshots, network activity, and console logs. 
- -## Basic Usage - -```bash -# Start trace recording -playwright-cli tracing-start - -# Perform actions -playwright-cli open https://example.com -playwright-cli click e1 -playwright-cli fill e2 "test" - -# Stop trace recording -playwright-cli tracing-stop -``` - -## Trace Output Files - -When you start tracing, Playwright creates a `traces/` directory with several files: - -### `trace-{timestamp}.trace` - -**Action log** - The main trace file containing: -- Every action performed (clicks, fills, navigations) -- DOM snapshots before and after each action -- Screenshots at each step -- Timing information -- Console messages -- Source locations - -### `trace-{timestamp}.network` - -**Network log** - Complete network activity: -- All HTTP requests and responses -- Request headers and bodies -- Response headers and bodies -- Timing (DNS, connect, TLS, TTFB, download) -- Resource sizes -- Failed requests and errors - -### `resources/` - -**Resources directory** - Cached resources: -- Images, fonts, stylesheets, scripts -- Response bodies for replay -- Assets needed to reconstruct page state - -## What Traces Capture - -| Category | Details | -|----------|---------| -| **Actions** | Clicks, fills, hovers, keyboard input, navigations | -| **DOM** | Full DOM snapshot before/after each action | -| **Screenshots** | Visual state at each step | -| **Network** | All requests, responses, headers, bodies, timing | -| **Console** | All console.log, warn, error messages | -| **Timing** | Precise timing for each operation | - -## Use Cases - -### Debugging Failed Actions - -```bash -playwright-cli tracing-start -playwright-cli open https://app.example.com - -# This click fails - why? 
-playwright-cli click e5 - -playwright-cli tracing-stop -# Open trace to see DOM state when click was attempted -``` - -### Analyzing Performance - -```bash -playwright-cli tracing-start -playwright-cli open https://slow-site.com -playwright-cli tracing-stop - -# View network waterfall to identify slow resources -``` - -### Capturing Evidence - -```bash -# Record a complete user flow for documentation -playwright-cli tracing-start - -playwright-cli open https://app.example.com/checkout -playwright-cli fill e1 "4111111111111111" -playwright-cli fill e2 "12/25" -playwright-cli fill e3 "123" -playwright-cli click e4 - -playwright-cli tracing-stop -# Trace shows exact sequence of events -``` - -## Trace vs Video vs Screenshot - -| Feature | Trace | Video | Screenshot | -|---------|-------|-------|------------| -| **Format** | .trace file | .webm video | .png/.jpeg image | -| **DOM inspection** | Yes | No | No | -| **Network details** | Yes | No | No | -| **Step-by-step replay** | Yes | Continuous | Single frame | -| **File size** | Medium | Large | Small | -| **Best for** | Debugging | Demos | Quick capture | - -## Best Practices - -### 1. Start Tracing Before the Problem - -```bash -# Trace the entire flow, not just the failing step -playwright-cli tracing-start -playwright-cli open https://example.com -# ... all steps leading to the issue ... -playwright-cli tracing-stop -``` - -### 2. 
Clean Up Old Traces - -Traces can consume significant disk space: - -```bash -# Remove traces older than 7 days -find .playwright-cli/traces -mtime +7 -delete -``` - -## Limitations - -- Traces add overhead to automation -- Large traces can consume significant disk space -- Some dynamic content may not replay perfectly diff --git a/.claude/skills/playwright-cli/references/video-recording.md b/.claude/skills/playwright-cli/references/video-recording.md deleted file mode 100644 index 38391b37a..000000000 --- a/.claude/skills/playwright-cli/references/video-recording.md +++ /dev/null @@ -1,43 +0,0 @@ -# Video Recording - -Capture browser automation sessions as video for debugging, documentation, or verification. Produces WebM (VP8/VP9 codec). - -## Basic Recording - -```bash -# Start recording -playwright-cli video-start - -# Perform actions -playwright-cli open https://example.com -playwright-cli snapshot -playwright-cli click e1 -playwright-cli fill e2 "test input" - -# Stop and save -playwright-cli video-stop demo.webm -``` - -## Best Practices - -### 1. 
Use Descriptive Filenames - -```bash -# Include context in filename -playwright-cli video-stop recordings/login-flow-2024-01-15.webm -playwright-cli video-stop recordings/checkout-test-run-42.webm -``` - -## Tracing vs Video - -| Feature | Video | Tracing | -|---------|-------|---------| -| Output | WebM file | Trace file (viewable in Trace Viewer) | -| Shows | Visual recording | DOM snapshots, network, console, actions | -| Use case | Demos, documentation | Debugging, analysis | -| Size | Larger | Smaller | - -## Limitations - -- Recording adds slight overhead to automation -- Large recordings can consume significant disk space diff --git a/.claude/skills/playwright-dev/SKILL.md b/.claude/skills/playwright-dev/SKILL.md deleted file mode 100644 index 25cf694ca..000000000 --- a/.claude/skills/playwright-dev/SKILL.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -name: playwright-dev -description: Explains how to develop Playwright - add APIs, MCP tools, CLI commands, and vendor dependencies. ---- - -# Playwright Development Guide - -## Table of Contents - -- [Library Architecture](library.md) — client/server/dispatcher structure, protocol layer, DEPS rules -- [Adding and Modifying APIs](api.md) — define API docs, implement client/server, add tests -- [MCP Tools and CLI Commands](mcp-dev.md) — add MCP tools, CLI commands, config options -- [Vendoring Dependencies](vendor.md) — bundle third-party npm packages into playwright-core or playwright - -## Build -- Assume watch is running and everything is up to date. -- If not, run `npm run build`. - -## Lint -- Run `npm run flint` to lint everything before commit. diff --git a/.claude/skills/playwright-dev/api.md b/.claude/skills/playwright-dev/api.md deleted file mode 100644 index 06431bbe6..000000000 --- a/.claude/skills/playwright-dev/api.md +++ /dev/null @@ -1,293 +0,0 @@ -# Adding and Modifying APIs - -- Before performing the implementation, go over the steps to understand and plan the work ahead. 
It is important to follow the steps in order, as some of them are prerequisites for others. - -## Step 1: Define API in Documentation - -Define (or update) API in `docs/src/api/class-xxx.md`. For the new methods, params and options use the version from package.json (without `-next`). - -### Documentation Format - -**Method definition:** -```markdown -## async method: Page.methodName -* since: v1.XX -- returns: <[null]|[Response]> - -Description of the method. - -### param: Page.methodName.paramName -* since: v1.XX -- `paramName` <[string]> - -Description of the parameter. - -### option: Page.methodName.optionName -* since: v1.XX -- `optionName` <[string]> - -Description of the option. -``` - -**Key syntax rules:** -- `* since: v1.XX` — version from package.json (without -next) -- `* langs: js, python` — language filter (optional) -- `* langs: alias-java: navigate` — language-specific method name -- `* deprecated: v1.XX` — deprecation marker -- `<[TypeName]>` — type annotation: `<[string]>`, `<[int]>`, `<[float]>`, `<[boolean]>` -- `<[null]|[Response]>` — union type -- `<[Array]<[Locator]>>` — array type -- `<[Object]>` with indented `- \`field\` <[type]>` — object type -- `### param:` — required parameter -- `### option:` — optional parameter -- `= %%-placeholder-name-%%` — reuse shared param definition from `docs/src/api/params.md` - -**Property definition:** -```markdown -## property: Page.propName -* since: v1.XX -- type: <[string]> - -Description. -``` - -**Event definition:** -```markdown -## event: Page.eventName -* since: v1.XX -- argument: <[Dialog]> - -Description. -``` - -Watch will kick in and auto-generate: -- `packages/playwright-core/types/types.d.ts` — public API types -- `packages/playwright/types/test.d.ts` — test API types - -## Step 2: Implement Client API - -Implement the new API in `packages/playwright-core/src/client/xxx.ts`. 
- -### Client Implementation Pattern - -Client classes extend `ChannelOwner` and call through `this._channel`: - -```typescript -// Direct channel call (most common) -async methodName(param: string, options: channels.FrameMethodNameOptions = {}): Promise { - await this._channel.methodName({ param, ...options, timeout: this._timeout(options) }); -} - -// Channel call with response wrapping -async goto(url: string, options: channels.FrameGotoOptions = {}): Promise { - return network.Response.fromNullable( - (await this._channel.goto({ url, ...options, timeout: this._timeout(options) })).response - ); -} -``` - -**Key patterns:** -- Parameters are assembled into a single object for the channel call -- Timeout is processed through `this._timeout(options)` or `this._navigationTimeout(options)` -- Return values from channel are unwrapped/converted: `Response.fromNullable()`, `ElementHandle.from()`, etc. -- Locator methods delegate to Frame: `return await this._frame.click(this._selector, { strict: true, ...options })` -- Page methods often delegate to `this._mainFrame` - -## Step 3: Define Protocol Channel - -Define (or update) channel for the API in `packages/protocol/src/protocol.yml` as needed. - -### Protocol YAML Format - -Methods are defined under `commands:` in the interface section: - -```yaml -Page: - type: interface - extends: EventTarget - - commands: - methodName: - title: Short description for tracing - parameters: - url: string # required string - timeout: float # required float - referer: string? # optional string (? suffix) - waitUntil: LifecycleEvent? # optional reference to another type - button: # optional enum - type: enum? - literals: - - left - - right - - middle - modifiers: # optional array of enums - type: array? - items: - type: enum - literals: - - Alt - - Control - - Meta - - Shift - position: Point? 
# optional reference type - viewportSize: # required inline object - type: object - properties: - width: int - height: int - returns: - response: Response? # optional return value - flags: - slowMo: true - snapshot: true - pausesBeforeAction: true -``` - -**Type primitives:** `string`, `int`, `float`, `boolean`, `binary`, `json` -**Optional:** append `?` to any type: `string?`, `int?`, `object?` -**Arrays:** `type: array` with `items:` (or `type: array?` for optional) -**Enums:** `type: enum` with `literals:` list -**References:** use type name directly: `Response`, `Frame`, `Point` -**Flags:** `slowMo`, `snapshot`, `pausesBeforeAction`, `pausesBeforeInput` - -Watch will kick in and auto-generate: -- `packages/protocol/src/channels.d.ts` — channel TypeScript interfaces -- `packages/playwright-core/src/protocol/validator.ts` — runtime validators -- `packages/playwright-core/src/utils/isomorphic/protocolMetainfo.ts` — method metadata - -## Step 4: Implement Dispatcher - -Implement dispatcher handler in `packages/playwright-core/src/server/dispatchers/xxxDispatcher.ts` as needed. 
- -### Dispatcher Pattern - -Dispatchers receive validated params and route to server objects: - -```typescript -// Simple pass-through (most common) -async methodName(params: channels.PageMethodNameParams, progress: Progress): Promise { - await this._page.methodName(progress, params.value); -} - -// With response wrapping -async goto(params: channels.FrameGotoParams, progress: Progress): Promise { - return { response: ResponseDispatcher.fromNullable(this._browserContextDispatcher, - await this._frame.goto(progress, params.url, params)) }; -} - -// With dispatcher extraction (when params contain dispatcher references) -async expectScreenshot(params: channels.PageExpectScreenshotParams, progress: Progress): Promise { - const mask = (params.mask || []).map(({ frame, selector }) => ({ - frame: (frame as FrameDispatcher)._object, - selector, - })); - return await this._page.expectScreenshot(progress, { ...params, mask }); -} - -// With array result wrapping -async querySelectorAll(params: channels.FrameQuerySelectorAllParams, progress: Progress): Promise { - const elements = await progress.race(this._frame.querySelectorAll(params.selector)); - return { elements: elements.map(e => ElementHandleDispatcher.from(this, e)) }; -} -``` - -**Key patterns:** -- Method signature: `async method(params: channels.XxxMethodParams, progress: Progress): Promise` -- Extract params: `params.url`, `params.selector`, etc. -- Convert dispatcher refs to server objects: `(params.frame as FrameDispatcher)._object` -- Wrap server objects as dispatchers in results: `ResponseDispatcher.fromNullable()`, `ElementHandleDispatcher.from()` -- All methods receive `Progress` for timeout/cancellation - -## Step 5: Implement Server Logic - -Handler should route the call into the corresponding method in `packages/playwright-core/src/server/xxx.ts`. 
- -Server methods implement the actual browser interaction: - -```typescript -// In packages/playwright-core/src/server/frames.ts -async goto(progress: Progress, url: string, options: types.GotoOptions = {}): Promise { - // ... validation, URL construction ... - // Delegates to browser-specific implementation: - const result = await this._page.delegate.navigateFrame(this, url, referer); - // ... wait for lifecycle events ... - return response; -} -``` - -Browser-specific implementations live in: -- `packages/playwright-core/src/server/chromium/crPage.ts` — Chromium (uses CDP: `this._client.send('Page.navigate', { ... })`) -- `packages/playwright-core/src/server/firefox/ffPage.ts` — Firefox -- `packages/playwright-core/src/server/webkit/wkPage.ts` — WebKit - -## Step 6: Write Tests - -### Test Location -- Page-only tests: `tests/page/xxx.spec.ts` — use `page` fixture -- Context tests: `tests/library/xxx.spec.ts` — use `context` fixture - -### Test Patterns - -**Page test:** -```typescript -import { test as it, expect } from './pageTest'; - -it('should do something @smoke', async ({ page, server }) => { - await page.goto(server.EMPTY_PAGE); - // ... assertions ... - expect(page.url()).toBe(server.EMPTY_PAGE); -}); - -it('should handle options', async ({ page, server, browserName, isAndroid }) => { - it.skip(isAndroid, 'Not supported on Android'); - it.info().annotations.push({ type: 'issue', description: 'https://github.com/user/repo/issues/123' }); - // ... -}); -``` - -**Library/context test:** -```typescript -import { contextTest as it, expect } from '../config/browserTest'; - -it('should work with context', async ({ context, server }) => { - const page = await context.newPage(); - await page.goto(server.EMPTY_PAGE); - // ... 
-}); -``` - -### Available Fixtures -- `page` — isolated page instance -- `context` — browser context (library tests) -- `server` — HTTP test server (`server.EMPTY_PAGE`, `server.PREFIX`, `server.CROSS_PROCESS_PREFIX`) -- `httpsServer` — HTTPS test server -- `asset(name)` — path to test asset file -- `browserName` — `'chromium' | 'firefox' | 'webkit'` -- `channel` — browser channel string -- `isAndroid`, `isBidi`, `isElectron` — platform booleans -- `isWindows`, `isMac`, `isLinux` — OS booleans -- `mode` — test mode (`'default'`, `'service'`, etc.) - -### Running Tests -```bash -npm run ctest tests/page/xxx.spec.ts # Chromium only -npm run test tests/page/xxx.spec.ts # All browsers -npm run ctest -- --grep "should do something" # Filter by name -``` - -## Architecture Overview - -``` -docs/src/api/class-xxx.md (API documentation — source of truth for public types) - → auto-generates → types.d.ts, test.d.ts - -packages/protocol/src/protocol.yml (RPC protocol definition) - → auto-generates → channels.d.ts, validator.ts, protocolMetainfo.ts - -Client call chain: - user code → Page.method() → Frame.method() → this._channel.method(params) - → Proxy validates & sends → Connection.sendMessageToServer() - → [wire] → - DispatcherConnection.dispatch() → XxxDispatcher.method(params, progress) - → ServerObject.method(progress, ...) → BrowserDelegate (CDP/Firefox/WebKit) -``` diff --git a/.claude/skills/playwright-dev/library.md b/.claude/skills/playwright-dev/library.md deleted file mode 100644 index e619a381f..000000000 --- a/.claude/skills/playwright-dev/library.md +++ /dev/null @@ -1,418 +0,0 @@ -# Playwright Library Architecture: Client, Server, and Dispatchers - -Playwright uses a client-server architecture connected by a protocol layer. The client provides the public API, the server performs actual browser automation, and dispatchers bridge the two over an RPC channel. 
- -## Package Layout - -``` -packages/protocol/src/ - protocol.yml — RPC protocol definition (source of truth) - channels.d.ts — generated TypeScript channel interfaces - callMetadata.d.ts — call metadata types - -packages/playwright-core/src/ - client/ — public API objects (ChannelOwner subclasses) - server/ — browser automation implementation (SdkObject subclasses) - server/dispatchers/ — protocol bridge (Dispatcher subclasses) - protocol/ — validators (generated + primitives) - utils/isomorphic/ — shared code used by both client and server - protocolMetainfo.ts — generated method metadata (flags, titles) -``` - -## Dependency Rules (DEPS.list) - -Each directory has a `DEPS.list` constraining its imports. These are enforced by `npm run flint`. - -**client/** can import from: -- `../protocol/` — validators and channel types -- `../utils/isomorphic` — shared utilities - -**server/** can import from: -- `../protocol/`, `../utils/`, `../utils/isomorphic/`, `../utilsBundle.ts` -- `./` (own directory), `./codegen/`, `./isomorphic/`, `./har/`, `./recorder/`, `./registry/`, `./utils/` -- Only `playwright.ts` can import browser engines (`./chromium/`, `./firefox/`, `./webkit/`, `./bidi/`, `./android/`, `./electron/`) -- Only `devtoolsController.ts` can additionally import `./chromium/` - -**server/dispatchers/** can import from: -- `../../protocol/`, `../../utils/`, `../../utils/isomorphic/` -- `../**` — all server modules - -**Key rule:** Client code NEVER imports server code. Server code NEVER imports client code. They communicate only through the protocol. - -## Protocol Layer - -### protocol.yml - -Defines all RPC interfaces, commands (methods), events, and types. Example: - -```yaml -Page: - type: interface - extends: EventTarget - initializer: - mainFrame: Frame - viewportSize: { type: object?, properties: { width: int, height: int } } - commands: - goto: - parameters: - url: string - timeout: float - waitUntil: LifecycleEvent? - returns: - response: Response? 
- events: - close: {} - navigated: - url: string - name: string -``` - -### Code Generation - -Running `node utils/generate_channels.js` (or via watch) produces: -- `packages/protocol/src/channels.d.ts` — TypeScript types: `PageChannel`, `PageGotoParams`, `PageGotoResult`, `PageInitializer`, event types -- `packages/playwright-core/src/protocol/validator.ts` — runtime validators: `scheme.PageGotoParams = tObject({...})` -- `packages/playwright-core/src/utils/isomorphic/protocolMetainfo.ts` — method flags (slowMo, snapshot, etc.) - -### Wire Format - -``` -Client → Server (RPC call): { id, guid, method, params, metadata? } -Server → Client (response): { id, result } or { id, error, log? } -Server → Client (event): { guid, method, params } -Server → Client (lifecycle): { guid, method: '__create__'|'__adopt__'|'__dispose__', params } -``` - -Object references are serialized as `{ guid: "object-guid" }` and resolved by validators. - -## Client Layer - -### ChannelOwner — Base Class - -Every client-side API object (Page, Frame, Browser, etc.) extends `ChannelOwner`: - -``` -packages/playwright-core/src/client/channelOwner.ts -``` - -Key properties: -- `_connection: Connection` — the RPC connection -- `_channel: T` — Proxy that intercepts method calls and sends RPC messages -- `_guid: string` — unique identifier matching the server-side object -- `_type: string` — type name (e.g., 'Page', 'Frame') -- `_parent: ChannelOwner` — parent in the object tree -- `_objects: Map` — child objects -- `_initializer` — initial state received from server on creation - -How `_channel` works: It's a Proxy. When you call `this._channel.goto(params)`: -1. Proxy intercepts the `goto` property access -2. Finds the validator for `PageGotoParams` -3. Returns an async function that validates params, wraps in `_wrapApiCall`, and calls `_connection.sendMessageToServer()` - -Event subscription optimization: `_eventToSubscriptionMapping` maps JS event names to protocol subscription events. 
When the first listener is added, the client calls `updateSubscription(event, true)` on the channel; when the last listener is removed, it calls `updateSubscription(event, false)`. This way the server only sends events that have listeners.

### Connection

```
packages/playwright-core/src/client/connection.ts
```

Manages the client-server transport:
- `_objects: Map<string, ChannelOwner>` — all live remote objects by GUID
- `_callbacks: Map<number, { resolve, reject }>` — pending RPC calls by message ID
- `sendMessageToServer(object, method, params, apiZone)` — sends RPC call, returns promise
- `dispatch(message)` — handles incoming messages:
  - Response (has `id`): resolves/rejects the matching callback
  - `__create__`: instantiates ChannelOwner subclass via factory switch
  - `__adopt__`: reparents a child object
  - `__dispose__`: disposes object and all children
  - Event (has `method`): emits on the object's `_channel`

### Representative Client Classes

| Class | File | Key delegation |
|-------|------|----------------|
| `Playwright` | `playwright.ts` | Root object; owns `chromium`, `firefox`, `webkit` BrowserTypes |
| `BrowserType` | `browserType.ts` | `launch()` → `_channel.launch()` |
| `Browser` | `browser.ts` | `newContext()` → `_channel.newContext()` |
| `BrowserContext` | `browserContext.ts` | Owns pages, routes, tracing, cookies |
| `Page` | `page.ts` | Delegates most calls to `_mainFrame`; owns keyboard/mouse/touchscreen |
| `Frame` | `frame.ts` | `goto()`, `click()`, `evaluate()` → `_channel.*` |
| `Locator` | `locator.ts` | Delegates to `Frame` methods with selector + `strict: true` |
| `ElementHandle` | `elementHandle.ts` | DOM element reference |

### Public API Exports

`packages/playwright-core/src/client/api.ts` exports all public classes.
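The Proxy-and-validator mechanics described in this section can be modeled in a few self-contained lines. This is a sketch, not the real `ChannelOwner`/`Connection` code — `FakeConnection`, the validator map, and the message shape are simplified stand-ins for illustration:

```typescript
// Toy model of the client-side channel: a Proxy turns every property access
// into an async function that validates params, then forwards an RPC message.
type WireMessage = { id: number; guid: string; method: string; params: unknown };

class FakeConnection {
  sent: WireMessage[] = [];
  private lastId = 0;
  async sendMessageToServer(guid: string, method: string, params: unknown): Promise<unknown> {
    this.sent.push({ id: ++this.lastId, guid, method, params });
    return { ok: true }; // the real Connection resolves this from the server's response
  }
}

// Stand-in for the generated validators, keyed like "PageGotoParams".
const validators: Record<string, (params: any) => void> = {
  PageGotoParams: params => {
    if (typeof params.url !== 'string') throw new Error('goto: url: expected string');
  },
};

function createChannel(connection: FakeConnection, type: string, guid: string): any {
  return new Proxy({}, {
    get(_target, prop) {
      const method = String(prop);
      return async (params: unknown) => {
        const key = type + method[0].toUpperCase() + method.slice(1) + 'Params';
        validators[key]?.(params); // throws before anything hits the wire
        return connection.sendMessageToServer(guid, method, params);
      };
    },
  });
}
```

Calling `channel.goto({ url })` pushes `{ id, guid, method: 'goto', params }` onto the fake wire, while an invalid `url` rejects locally before any message is sent — mirroring how the generated validators fail fast on the client.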
## Server Layer

### SdkObject — Base Class

Every server-side domain object extends `SdkObject`:

```
packages/playwright-core/src/server/instrumentation.ts
```

Key properties:
- `guid: string` — unique identifier (shared with client-side ChannelOwner)
- `attribution: Attribution` — ownership chain: `{ playwright, browserType?, browser?, context?, page?, frame? }`
- `instrumentation: Instrumentation` — hooks for tracing, debugging, test runner integration

Attribution is inherited from parent on construction. Instrumentation hooks include:
`onBeforeCall`, `onAfterCall`, `onBeforeInputAction`, `onCallLog`, `onPageOpen/Close`, `onBrowserOpen/Close`, `onDialog`, `onDownload`.

### Key Server Classes

| Class | File | Purpose |
|-------|------|---------|
| `Playwright` | `playwright.ts` | Root entry point; creates BrowserTypes |
| `BrowserType` | `browserType.ts` | Launches browser processes |
| `Browser` | `browser.ts` | Abstract base; owns BrowserContexts |
| `BrowserContext` | `browserContext.ts` | Isolation boundary; owns pages, cookies, routes |
| `Page` | `page.ts` | Owns FrameManager, workers; delegates to `PageDelegate` |
| `FrameManager` | `frames.ts` | Manages frame hierarchy |
| `Frame` | `frames.ts` | Navigation, DOM queries, JavaScript evaluation |
| `ElementHandle` | `dom.ts` | DOM element operations |
| `ProgressController` | `progress.ts` | Wraps async operations with timeout/cancellation/logging |

### PageDelegate Pattern

`Page` delegates browser-specific operations to a `PageDelegate` interface:

```typescript
interface PageDelegate {
  navigateFrame(frame, url, referer): Promise<{ newDocumentId?: string }>;
  takeScreenshot(progress, format, ...): Promise<Buffer>;
  adoptElementHandle(handle, to): Promise<ElementHandle>;
  // ...
// ... more browser-specific operations
}
```

Implementations:
- `packages/playwright-core/src/server/chromium/crPage.ts` — uses CDP
- `packages/playwright-core/src/server/firefox/ffPage.ts`
- `packages/playwright-core/src/server/webkit/wkPage.ts`

### Browser Engine Directories

| Directory | Protocol | Key files |
|-----------|----------|-----------|
| `chromium/` | Chrome DevTools Protocol (CDP) | `crBrowser.ts`, `crPage.ts`, `crConnection.ts` |
| `firefox/` | Firefox internal protocol | `ffBrowser.ts`, `ffPage.ts`, `ffConnection.ts` |
| `webkit/` | WebKit internal protocol | `wkBrowser.ts`, `wkPage.ts`, `wkConnection.ts` |
| `bidi/` | WebDriver BiDi | `bidiChromium.ts`, `bidiFirefox.ts` |
| `android/` | ADB | `android.ts` |
| `electron/` | Electron/CDP | `electron.ts` |

## Dispatcher Layer

Dispatchers do not implement behavior themselves; they translate protocol calls into calls on the server code.

### Dispatcher — Base Class

```
packages/playwright-core/src/server/dispatchers/dispatcher.ts
```

Dispatchers bridge server objects to the protocol. Each wraps an `SdkObject` and exposes methods matching the protocol channel.
```typescript
class Dispatcher<Type extends SdkObject, ChannelType, ParentScopeType extends DispatcherScope>
```

Key properties:
- `connection: DispatcherConnection` — the server-side connection
- `_object: Type` — the wrapped server object
- `_guid: string` — same GUID as the server object
- `_type: string` — type name matching protocol
- `_parent: ParentScopeType` — parent dispatcher
- `_dispatchers: Map<string, DispatcherScope>` — child dispatchers

Key methods:
- `_dispatchEvent(method, params)` — sends event to client via `connection.sendEvent()`
- `_runCommand(callMetadata, method, params)` — wraps method call in `ProgressController`, calls `this[method](params, progress)`
- `_dispose()` — recursively disposes self and children, sends `__dispose__` to client
- `adopt(child)` — reparents child dispatcher, sends `__adopt__` to client
- `addObjectListener(event, handler)` — listens on wrapped server object, auto-cleaned on dispose

### Dispatcher Creation Pattern

Dispatchers use a static factory to ensure one-dispatcher-per-object:

```typescript
static from(parentScope, object): XxxDispatcher {
  return parentScope.connection.existingDispatcher(object) || new XxxDispatcher(parentScope, object);
}
```

The constructor sends `__create__` to the client with the initializer data.
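The `from()` factory is what guarantees at most one dispatcher per server object. A self-contained sketch of that guarantee — the `Fake*` types are simplified stand-ins, and the real constructor also emits `__create__` with the initializer:

```typescript
// Toy server object and connection: the connection keeps an object → dispatcher
// map so a second from() call for the same object returns the existing wrapper.
class FakeSdkObject {
  constructor(public guid: string) {}
}

class FakeDispatcherConnection {
  private dispatcherByObject = new Map<FakeSdkObject, FakeDispatcher>();
  existingDispatcher(object: FakeSdkObject): FakeDispatcher | undefined {
    return this.dispatcherByObject.get(object);
  }
  registerDispatcher(object: FakeSdkObject, dispatcher: FakeDispatcher): void {
    this.dispatcherByObject.set(object, dispatcher);
  }
}

class FakeDispatcher {
  constructor(readonly connection: FakeDispatcherConnection, readonly _object: FakeSdkObject) {
    // The real constructor also sends __create__ to the client here.
    connection.registerDispatcher(_object, this);
  }
  static from(connection: FakeDispatcherConnection, object: FakeSdkObject): FakeDispatcher {
    return connection.existingDispatcher(object) ?? new FakeDispatcher(connection, object);
  }
}
```

With this in place, `FakeDispatcher.from(conn, page)` returns the same instance for the same server object, while distinct objects get distinct dispatchers — the property the `existingDispatcher()` lookup exists to preserve.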
- -### DispatcherConnection - -Server-side counterpart to client's `Connection`: -- `_dispatcherByGuid` — all dispatchers by GUID -- `_dispatcherByObject` — maps server objects to their dispatchers (ensures 1:1) -- `dispatch(message)` — validates params, creates `CallMetadata`, calls instrumentation hooks, runs dispatcher method, validates result, sends response -- `sendCreate/sendAdopt/sendDispose/sendEvent` — lifecycle messages to client -- GC: buckets with limits (JSHandle/ElementHandle: 100k, others: 10k); oldest 10% disposed when exceeded - -### Dispatcher Hierarchy - -``` -RootDispatcher -└── PlaywrightDispatcher - ├── BrowserTypeDispatcher (per engine) - │ └── BrowserDispatcher - │ └── BrowserContextDispatcher - │ ├── PageDispatcher - │ │ ├── FrameDispatcher (main + child frames) - │ │ ├── WorkerDispatcher - │ │ └── ... - │ ├── TracingDispatcher - │ └── APIRequestContextDispatcher - ├── AndroidDispatcher - ├── ElectronDispatcher - └── LocalUtilsDispatcher -``` - -### Key Dispatcher Files - -| File | Dispatches for | -|------|---------------| -| `playwrightDispatcher.ts` | Playwright, BrowserType registration | -| `browserTypeDispatcher.ts` | BrowserType (launch, connect) | -| `browserDispatcher.ts` | Browser | -| `browserContextDispatcher.ts` | BrowserContext | -| `pageDispatcher.ts` | Page, Worker, BindingCall | -| `frameDispatcher.ts` | Frame | -| `networkDispatchers.ts` | Request, Response, Route, WebSocket, APIRequestContext | -| `elementHandlerDispatcher.ts` | ElementHandle | -| `jsHandleDispatcher.ts` | JSHandle | -| `dialogDispatcher.ts` | Dialog | -| `tracingDispatcher.ts` | Tracing | -| `artifactDispatcher.ts` | Artifact | - -## End-to-End Flow Example - -`await page.goto('https://example.com')`: - -``` -CLIENT: - Page.goto() - → _wrapApiCall() captures stack trace, creates ApiZone - → _channel.goto({ url, timeout }) - → Proxy validates PageGotoParams - → connection.sendMessageToServer(page, 'goto', params) - → sends { id: 1, guid: 'page@abc', 
method: 'goto', params: {...} } - → waits on callback promise - -SERVER: - DispatcherConnection.dispatch(message) - → validates PageGotoParams (wire → objects) - → creates CallMetadata - → instrumentation.onBeforeCall() - → PageDispatcher._runCommand('goto', params) - → ProgressController.run(progress => this.goto(params, progress)) - → PageDispatcher.goto(): this._object.mainFrame().goto(progress, url, params) - → Frame.goto() → PageDelegate.navigateFrame() → CDP/protocol call - → validates PageGotoResult (objects → wire) - → instrumentation.onAfterCall() - → sends { id: 1, result: { response: { guid: 'response@xyz' } } } - -CLIENT: - connection.dispatch(response) - → validates PageGotoResult (wire → objects) - → resolves callback promise - → _wrapApiCall completes, returns Response object -``` - -## Object Lifecycle - -1. **Creation**: Server creates SdkObject → dispatcher constructor sends `__create__` → client `Connection.dispatch()` instantiates `ChannelOwner` subclass -2. **Adoption**: `dispatcher.adopt(child)` sends `__adopt__` → client reparents the `ChannelOwner` -3. **Disposal**: `dispatcher._dispose()` recursively disposes children → sends `__dispose__` → client removes `ChannelOwner` from maps -4. **GC**: Server-side `maybeDisposeStaleDispatchers()` evicts oldest dispatchers per bucket when limits exceeded - -## Testing: tests/library vs tests/page - -Tests live in two directories under `tests/`, each with distinct scope and fixtures. - -### tests/library — API and Feature Tests - -Tests the **Playwright public API surface**, browser lifecycle, and feature-level behavior. Uses `browserTest` fixtures which provide direct access to `browser`, `browserType`, `context`, and `contextFactory`. 
- -```typescript -import { browserTest as test, expect } from '../config/browserTest'; - -test('should create new page', async ({ browser }) => { - const page = await browser.newPage(); - expect(browser.contexts().length).toBe(1); - await page.close(); -}); -``` - -**What belongs here:** -- Browser and BrowserType API (`launch`, `connect`, `version`, `newContext`) -- BrowserContext API (cookies, storage state, permissions, proxy, CSP, geolocation, network interception at context level) -- Browser-specific features (`chromium/` for CDP, tracing, extensions, JS/CSS coverage, OOPIF; `firefox/` for launcher specifics) -- Protocol and channel tests -- Inspector, codegen, and recorder features (`inspector/`) -- Event system tests (`events/`) -- Unit tests for internal utilities (`unit/`) - -**Key fixtures** (from `browserTest`): `browser`, `browserType`, `context`, `contextFactory`, `launchPersistent`, `createUserDataDir`, `startRemoteServer`, `pageWithHar`. - -### tests/page — Page Interaction Tests - -Tests **user-facing page interactions**: clicking, typing, navigation, locators, assertions, and DOM operations. Uses `pageTest` fixtures which provide a ready-to-use `page` plus test servers. 
```typescript
import { test as it, expect } from './pageTest';

it('should click button', async ({ page, server }) => {
  await page.goto(server.PREFIX + '/input/button.html');
  await page.locator('button').click();
  expect(await page.evaluate(() => window['result'])).toBe('Clicked');
});
```

**What belongs here:**
- Locator API (click, fill, type, select, query, filtering, convenience methods)
- ElementHandle interactions (click, screenshot, selection, bounding box)
- Expect/assertion matchers (boolean, text, value, accessibility)
- Page navigation (`goto`, `waitForNavigation`, `waitForURL`)
- Frame evaluation and hierarchy
- Request/response interception at page level
- JSHandle operations
- Screenshot and visual comparison tests

**Key fixtures** (from `pageTest`/`serverFixtures`): `page`, `server`, `httpsServer`, `proxyServer`, `asset`.

### Decision Rule

| Question | → Directory |
|----------|-------------|
| Does it test browser/context lifecycle or launch options? | `tests/library` |
| Does it test a browser-specific protocol feature (CDP, etc.)? | `tests/library` |
| Does it test user interaction with page content (click, type, assert)? | `tests/page` |
| Does it test locators, selectors, or DOM queries? | `tests/page` |
| Does the test need direct `browser` or `browserType` access? | `tests/library` |
| Does the test just need a `page` and a test server? | `tests/page` |

### Running Tests

- `npm run ctest <file>` — runs on Chromium only (fast, use during development)
- `npm run test <file>` — runs on all browsers (Chromium, Firefox, WebKit)

Examples:
```bash
npm run ctest tests/library/browser-context-cookies.spec.ts
npm run ctest tests/page/locator-click.spec.ts
npm run test tests/library/browser-context-cookies.spec.ts
```

### Configuration

Both directories share a single config at `tests/library/playwright.config.ts`.
It creates separate projects (`{browserName}-library` and `{browserName}-page`) pointing to their respective `testDir`.

diff --git a/.claude/skills/playwright-dev/mcp-dev.md b/.claude/skills/playwright-dev/mcp-dev.md
deleted file mode 100644
index e022738a5..000000000
--- a/.claude/skills/playwright-dev/mcp-dev.md
+++ /dev/null
@@ -1,498 +0,0 @@

# MCP Tools and CLI Commands

## Adding MCP Tools

### Step 1: Create the Tool File

Create `packages/playwright/src/mcp/browser/tools/<name>.ts`.

Import zod from the MCP bundle and use `defineTool` or `defineTabTool`:

```typescript
import { z } from 'playwright-core/lib/mcpBundle';
import { defineTool, defineTabTool } from './tool';
```

**Choose `defineTabTool` vs `defineTool`:**
- `defineTabTool` — most tools use this. Receives a `Tab` object, auto-handles modal state (dialogs/file choosers).
- `defineTool` — receives the full `Context`. Use when you need `context.ensureBrowserContext()` without a specific tab, or need custom tab management.
- -**Tool definition pattern:** - -```typescript -const myTool = defineTabTool({ - capability: 'core', // ToolCapability — see step 2 - - // Optional: only available in skill mode (not exposed via MCP) - // skillOnly: true, - - // Optional: this tool clears a modal state ('dialog' | 'fileChooser') - // clearsModalState: 'dialog', - - schema: { - name: 'browser_my_tool', // MCP tool name (browser_ prefix) - title: 'My Tool', // Human-readable title - description: 'Does something', // Description shown to LLM - inputSchema: z.object({ - ref: z.string().describe('Element reference from snapshot'), - value: z.string().optional().describe('Optional value'), - }), - type: 'action', // 'input' | 'assertion' | 'action' | 'readOnly' - }, - - handle: async (tab, params, response) => { - // Implementation using tab.page (Playwright Page object) - await tab.page.click(`[ref="${params.ref}"]`); - - // Add generated Playwright code - response.addCode(`await page.click('[ref="${params.ref}"]');`); - - // Include page snapshot in response (for navigation/state changes) - response.setIncludeSnapshot(); - - // Or add text result - response.addTextResult('Done'); - }, -}); - -export default [myTool]; -``` - -**Schema type values:** -- `'action'` — state-changing operations (navigate, click, fill) -- `'input'` — user input (typing, keyboard) -- `'readOnly'` — queries that don't modify state (list cookies, get snapshot) -- `'assertion'` — testing/verification tools - -**Response API:** -- `response.addTextResult(text)` — add text to result section -- `response.addError(error)` — add error message -- `response.addCode(code)` — add generated Playwright code snippet -- `response.setIncludeSnapshot()` — include ARIA snapshot in response -- `response.setIncludeFullSnapshot(filename?)` — force full snapshot -- `response.addResult(title, data, fileTemplate)` — add file result -- `response.registerImageResult(data, 'png'|'jpeg')` — add image - -**Context tool example** (for 
browser-context-level operations): - -```typescript -const myContextTool = defineTool({ - capability: 'storage', - schema: { /* ... */ type: 'readOnly' }, - - handle: async (context, params, response) => { - const browserContext = await context.ensureBrowserContext(); - const cookies = await browserContext.cookies(); - response.addTextResult(cookies.map(c => `${c.name}=${c.value}`).join('\n')); - }, -}); -``` - -### Step 2: Add ToolCapability (if needed) - -If your tool doesn't fit an existing capability, add a new one to `packages/playwright/src/mcp/config.d.ts`: - -```typescript -export type ToolCapability = - 'config' | - 'core' | // Always enabled - 'core-navigation' | // Always enabled - 'core-tabs' | // Always enabled - 'core-input' | // Always enabled - 'core-install' | // Always enabled - 'network' | - 'pdf' | - 'storage' | - 'testing' | - 'vision' | - 'devtools'; // Add yours here -``` - -**Capability filtering rules:** -- Tools with `core*` capabilities are always enabled -- Other capabilities must be enabled via `--caps` or config `capabilities` array -- `skillOnly: true` tools are only available in skill mode, never via MCP - -### Step 3: Register the Tool - -In `packages/playwright/src/mcp/browser/tools.ts`: - -```typescript -import myTool from './tools/myTool'; - -export const browserTools: Tool[] = [ - // ... existing tools ... - ...myTool, -]; -``` - -### Step 4: Write Tests - -Create `tests/mcp/.spec.ts`. 
Use the fixtures from `./fixtures`:

```typescript
import { test, expect } from './fixtures';

test('browser_my_tool', async ({ client, server }) => {
  // Setup: navigate to a page first
  await client.callTool({
    name: 'browser_navigate',
    arguments: { url: server.PREFIX },
  });

  // Call your tool
  expect(await client.callTool({
    name: 'browser_my_tool',
    arguments: { ref: 'e1' },
  })).toHaveResponse({
    code: `await page.click('[ref="e1"]');`,
    snapshot: expect.stringContaining('some content'),
  });
});

test('browser_my_tool error case', async ({ client }) => {
  expect(await client.callTool({
    name: 'browser_my_tool',
    arguments: { ref: 'invalid' },
  })).toHaveResponse({
    error: expect.stringContaining('Error:'),
    isError: true,
  });
});
```

**Test fixtures:**
- `client` — MCP client, call tools via `client.callTool({ name, arguments })`
- `startClient(options?)` — client factory, for custom config/args/roots
- `server` — HTTP test server (`server.PREFIX`, `server.HELLO_WORLD`, `server.setContent(path, html, contentType)`)
- `httpsServer` — HTTPS test server

**Custom matchers:**
- `toHaveResponse({ code?, snapshot?, page?, error?, isError?, result?, events?, modalState? })` — matches parsed response sections
- `toHaveTextResponse(text)` — matches raw text with normalization

**Parsed response sections:**
- `code` — generated Playwright code (without ```js fences)
- `snapshot` — ARIA page snapshot (with ```yaml fences)
- `page` — page info (URL, title)
- `error` — error message
- `result` — text result
- `events` — console messages, downloads
- `modalState` — active dialog/file chooser info
- `tabs` — tab listing
- `isError` — boolean

### Testing MCP Tools
- Run tests: `npm run ctest-mcp <test-file>`
- Do not run `test --debug`

---

## Adding CLI Commands

CLI commands are thin wrappers over MCP tools. They live in the daemon and map CLI args to MCP tool calls.
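Before looking at `declareCommand()` itself, it helps to see the shape of that wrapper in isolation: positionals bind in order, `--flag`/`--flag=value` become named options, and a `toolParams` function builds the MCP payload. A hand-rolled sketch — the real daemon parses with minimist and validates with zod, and the names here are illustrative:

```typescript
// Toy CLI command declaration: maps argv to an MCP tool name + params object.
type FakeCommand = {
  name: string;
  argNames: string[]; // ordered positional argument names
  toolName: string;
  toolParams: (all: Record<string, unknown>) => Record<string, unknown>;
};

function parseInvocation(command: FakeCommand, argv: string[]) {
  const all: Record<string, unknown> = {};
  let positional = 0;
  for (const token of argv) {
    if (token.startsWith('--')) {
      const eq = token.indexOf('=');
      if (eq === -1) all[token.slice(2)] = true;            // bare flag → boolean
      else all[token.slice(2, eq)] = token.slice(eq + 1);   // --flag=value
    } else {
      all[command.argNames[positional++]] = token;          // bind positionals in order
    }
  }
  return { toolName: command.toolName, params: command.toolParams(all) };
}

const typeCommand: FakeCommand = {
  name: 'type',
  argNames: ['ref', 'text'],
  toolName: 'browser_type',
  toolParams: ({ ref, text, submit }) => ({ ref, text, submit: Boolean(submit) }),
};
```

`parseInvocation(typeCommand, ['e12', 'hello', '--submit'])` yields a `{ toolName, params }` pair — the shape the daemon then hands to `backend.callTool(toolName, toolParams)`.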
- -### Step 1: Implement the MCP Tool - -Implement the corresponding MCP tool first (see section above). CLI commands call MCP tools via `toolName`/`toolParams`. - -### Step 2: Add the Command Declaration - -In `packages/playwright/src/cli/daemon/commands.ts`, use `declareCommand()`: - -```typescript -import { z } from 'playwright-core/lib/mcpBundle'; -import { declareCommand } from './command'; - -const myCommand = declareCommand({ - name: 'my-command', // CLI command name (kebab-case) - description: 'Does something', // Shown in help - category: 'core', // Category for help grouping - - // Positional arguments (ordered, parsed from CLI positional args) - args: z.object({ - url: z.string().describe('The URL to navigate to'), - ref: z.string().optional().describe('Optional element reference'), - }), - - // Named options (parsed from --flag or --flag=value) - options: z.object({ - submit: z.boolean().optional().describe('Whether to submit'), - filename: z.string().optional().describe('Output filename'), - }), - - // MCP tool name — string or function for dynamic routing - toolName: 'browser_my_tool', - // OR dynamic: - // toolName: ({ submit }) => submit ? 'browser_submit' : 'browser_type', - - // Map CLI args/options to MCP tool params - toolParams: ({ url, ref, submit, filename }) => ({ - url, - ref, - submit, - filename, - }), -}); -``` - -Then add to the `commandsArray` at the bottom of the file, in the correct category section: - -```typescript -const commandsArray: AnyCommandSchema[] = [ - // core category - open, - close, - // ... existing commands ... - myCommand, // <-- add here in the right category - // ... -]; -``` - -**Categories** (defined in `packages/playwright/src/cli/daemon/command.ts`): - -```typescript -type Category = 'core' | 'navigation' | 'keyboard' | 'mouse' | 'export' | - 'storage' | 'tabs' | 'network' | 'devtools' | 'browsers' | - 'config' | 'install'; -``` - -To add a new category: -1. 
Add it to `Category` type in `packages/playwright/src/cli/daemon/command.ts` -2. Add it to the `categories` array in `packages/playwright/src/cli/daemon/helpGenerator.ts`: - ```typescript - const categories: { name: Category, title: string }[] = [ - // ... existing ... - { name: 'mycat', title: 'My Category' }, - ]; - ``` - -**Special tool patterns:** -- `toolName: ''` — command handled specially by daemon (e.g., `close`, `list`, `install`) -- Use `numberArg` for numeric CLI args: `x: numberArg.describe('X coordinate')` -- Param renaming: `toolParams: ({ w: width, h: height }) => ({ width, height })` -- Dynamic toolName: `toolName: ({ clear }) => clear ? 'browser_clear' : 'browser_list'` - -### Step 3: Update SKILL File - -Update `packages/playwright/src/skill/SKILL.md` with the new command documentation. -Add reference docs in `packages/playwright/src/skill/references/` if the feature is complex. - -Run `npm run playwright-cli -- --help` to verify the help output includes your new command. - -### Step 4: Write CLI Tests - -Create `tests/mcp/cli-.spec.ts`. 
Use fixtures from `./cli-fixtures`:

```typescript
import { test, expect } from './cli-fixtures';

test('my-command', async ({ cli, server }) => {
  // Open a page first
  await cli('open', server.PREFIX);

  // Run your command
  const { output, snapshot } = await cli('my-command', 'arg1', '--option=value');
  expect(output).toContain('expected text');
  expect(snapshot).toContain('expected snapshot content');
});
```

**CLI test fixtures:**
- `cli(...args)` — run CLI command, returns `{ output, error, exitCode, snapshot, attachments }`
  - `output` — stdout text
  - `snapshot` — extracted ARIA snapshot (if present)
  - `attachments` — file attachments `{ name, data }[]`
  - `error` — stderr text
  - `exitCode` — process exit code

### Testing CLI Commands
- Run tests: `npm run ctest-mcp cli-<name>`
- Do not run `test --debug`

---

## Adding Config Options

When you need to add a new config option, update these files in order:

### 1. Type definition: `packages/playwright/src/mcp/config.d.ts`

Add the option to the `Config` type with JSDoc:

```typescript
export type Config = {
  // ... existing ...

  /**
   * Description of the new option.
   */
  myOption?: string;
};
```

### 2. CLI options type: `packages/playwright/src/mcp/browser/config.ts`

Add to `CLIOptions` type:

```typescript
export type CLIOptions = {
  // ... existing ...
  myOption?: string;
};
```

If the option needs to be in `FullConfig` (with required/resolved values), update `FullConfig` and `defaultConfig`:

```typescript
export type FullConfig = Config & {
  // ... existing ...
  myOption: string; // required in resolved config
};

export const defaultConfig: FullConfig = {
  // ... existing ...
  myOption: 'default-value',
};
```

### 3. Config from CLI: `configFromCLIOptions()` in `config.ts`

Map CLI option to config:

```typescript
const config: Config = {
  // ... existing ...
  myOption: cliOptions.myOption,
};
```

### 4.
Config from env: `configFromEnv()` in `config.ts`

Add environment variable mapping:

```typescript
options.myOption = envToString(process.env.PLAYWRIGHT_MCP_MY_OPTION);
// For booleans: envToBoolean(process.env.PLAYWRIGHT_MCP_MY_OPTION)
// For numbers: numberParser(process.env.PLAYWRIGHT_MCP_MY_OPTION)
// For comma lists: commaSeparatedList(process.env.PLAYWRIGHT_MCP_MY_OPTION)
// For semicolon lists: semicolonSeparatedList(process.env.PLAYWRIGHT_MCP_MY_OPTION)
```

### 5. MCP server CLI: `packages/playwright/src/mcp/program.ts`

Add CLI flag:

```typescript
command
    .option('--my-option <value>', 'description of option')
```

### 6. Merge config (if nested)

If the option is nested, update `mergeConfig()` in `config.ts` to deep-merge it.

**Config resolution order:** `defaultConfig` → config file → env vars → CLI args (last wins).

---

## SKILL File

The skill file is located at `packages/playwright/src/skill/SKILL.md`. It contains documentation for all available CLI commands and MCP tools. Update it whenever you add new commands or tools.

Reference docs live in `packages/playwright/src/skill/references/`:
- `request-mocking.md` — network mocking patterns
- `running-code.md` — code execution
- `session-management.md` — session handling
- `storage-state.md` — state persistence
- `test-generation.md` — test creation
- `tracing.md` — trace recording
- `video-recording.md` — video capture

Run `npm run playwright-cli -- --help` to see the latest available commands and use them to update the skill file.
- ---- - -## Architecture Reference - -### Directory Structure - -``` -packages/playwright/src/ -├── mcp/ -│ ├── browser/ -│ │ ├── tools/ # All MCP tool implementations -│ │ │ ├── tool.ts # Tool/TabTool types, defineTool(), defineTabTool() -│ │ │ ├── common.ts # close, resize -│ │ │ ├── navigate.ts # navigate, goBack, goForward, reload -│ │ │ ├── snapshot.ts # page snapshot -│ │ │ ├── form.ts # click, type, fill, select, check -│ │ │ ├── keyboard.ts # press, keydown, keyup -│ │ │ ├── mouse.ts # mouse move, click, wheel -│ │ │ ├── tabs.ts # tab management -│ │ │ ├── cookies.ts # cookie CRUD -│ │ │ ├── webstorage.ts # localStorage, sessionStorage -│ │ │ ├── storage.ts # storage state save/load -│ │ │ ├── network.ts # network requests listing -│ │ │ ├── route.ts # request mocking/routing -│ │ │ ├── console.ts # console messages -│ │ │ ├── evaluate.ts # JS evaluation -│ │ │ ├── screenshot.ts # screenshots -│ │ │ ├── pdf.ts # PDF generation -│ │ │ ├── files.ts # file upload -│ │ │ ├── dialogs.ts # dialog handling -│ │ │ ├── verify.ts # assertions -│ │ │ ├── wait.ts # wait operations -│ │ │ ├── tracing.ts # trace recording -│ │ │ ├── video.ts # video recording -│ │ │ ├── runCode.ts # run Playwright code -│ │ │ ├── devtools.ts # DevTools integration -│ │ │ ├── config.ts # config tool -│ │ │ ├── install.ts # browser install -│ │ │ └── utils.ts # shared utilities -│ │ ├── tools.ts # Tool registry (browserTools array, filteredTools) -│ │ ├── config.ts # Config resolution, CLIOptions, FullConfig -│ │ ├── context.ts # Browser context management -│ │ ├── response.ts # Response class, parseResponse() -│ │ └── tab.ts # Tab management -│ ├── sdk/ -│ │ ├── server.ts # MCP server -│ │ └── tool.ts # ToolSchema type, toMcpTool() -│ ├── config.d.ts # Config type, ToolCapability type -│ └── program.ts # MCP server CLI setup -├── cli/ -│ ├── client/ -│ │ ├── program.ts # CLI client entry (argument parsing) -│ │ ├── session.ts # Session management -│ │ └── registry.ts # Session registry 
-│ └── daemon/ -│ ├── command.ts # Category type, CommandSchema, declareCommand(), parseCommand() -│ ├── commands.ts # All CLI command declarations -│ ├── helpGenerator.ts # Help text generation (generateHelp, generateHelpJSON) -│ └── daemon.ts # Daemon server -└── skill/ - ├── SKILL.md # Skill documentation - └── references/ # Reference docs - -tests/mcp/ -├── fixtures.ts # MCP test fixtures (client, startClient, server) -├── cli-fixtures.ts # CLI test fixtures (cli helper) -├── .spec.ts # MCP tool tests -└── cli-.spec.ts # CLI command tests -``` - -### Execution Flow - -``` -MCP Server mode: - LLM → MCP protocol → Server.callTool(name, args) - → zod validates input → Tool.handle(context|tab, params, response) - → response.serialize() → MCP protocol → LLM - -CLI mode: - User → `playwright-cli my-command arg1 --opt=val` - → Client parses with minimist → sends to Daemon via socket - → parseCommand() maps CLI args to MCP tool params via zod - → backend.callTool(toolName, toolParams) - → Response formatted → printed to stdout -``` diff --git a/.claude/skills/playwright-dev/vendor.md b/.claude/skills/playwright-dev/vendor.md deleted file mode 100644 index e58a39c94..000000000 --- a/.claude/skills/playwright-dev/vendor.md +++ /dev/null @@ -1,190 +0,0 @@ -# Vendoring (Bundling) a New Dependency - -Playwright vendors third-party npm packages by bundling them with esbuild into self-contained files. -This isolates dependencies, prevents version conflicts, and keeps the published packages lean. - -## Architecture Overview - -Each bundle lives under `packages//bundles//` and consists of three parts: - -1. **Bundle directory** (`bundles//`) — has its own `package.json` with the dependencies to vendor, plus a `src/BundleImpl.ts` entry point that imports and re-exports them. -2. **Build configuration** in `utils/build/build.js` — an esbuild entry that bundles the impl file into a single minified CJS file. -3. 
**Wrapper file** (`src/Bundle.ts`) — a thin typed wrapper that `require()`s the built bundle impl and re-exports symbols with TypeScript types. - -Data flow: -``` -bundles//package.json (declares npm deps) - → npm ci → node_modules/ -bundles//src/BundleImpl.ts (imports from node_modules, re-exports) - → esbuild (bundle + minify) → -lib/BundleImpl.js (single self-contained file) - ← -src/Bundle.ts (typed wrapper, require('./...BundleImpl')) - → esbuild (normal compile) → -lib/Bundle.js (used by application code) -``` - -## Step-by-Step: Adding a New Bundle - -### Decide which package it belongs to - -- `packages/playwright-core/bundles/` — for core browser automation deps (networking, compression, protocols, etc.) -- `packages/playwright/bundles/` — for test runner deps (assertion libs, transpilers, file watchers, etc.) - -### 1. Create the bundle directory - -``` -packages//bundles// -├── package.json -└── src/ - └── BundleImpl.ts -``` - -### 2. Create `package.json` - -Minimal private package with only the deps you want to bundle: - -```json -{ - "name": "-bundle", - "version": "0.0.1", - "private": true, - "dependencies": { - "some-lib": "^1.2.3" - }, - "devDependencies": { - "@types/some-lib": "^1.2.0" - } -} -``` - -Then run `npm install` inside the bundle directory to generate `package-lock.json`. - -### 3. Create `src/BundleImpl.ts` - -This is the esbuild entry point. Import from `node_modules` and re-export: - -```typescript -// For default exports: -import someLibrary from 'some-lib'; -export const someLib = someLibrary; - -// For named exports: -export { SomeClass } from 'some-lib'; - -// For namespace imports: -import * as someLibrary from 'some-lib'; -export const someLib = someLibrary; - -// For vendored/third-party code that can't be bundled: -const custom = require('./third_party/custom'); -export const customThing = custom; -``` - -### 4. 
Register the bundle in `utils/build/build.js` - -Add an entry to the `bundles` array (around line 246): - -```javascript -bundles.push({ - modulePath: 'packages//bundles/', - entryPoints: ['src/BundleImpl.ts'], - // Use outdir for a single .js file alongside other lib files: - outdir: 'packages//lib', - // OR use outfile for output in a subdirectory (needed if bundle has non-JS assets): - // outfile: 'packages//lib/BundleImpl/index.js', - - // Optional: deps that should NOT be bundled (must be installed at runtime): - // external: ['express'], - - // Optional: redirect imports to custom implementations: - // alias: { 'some-module': 'custom-impl.ts' }, -}); -``` - -**`outdir` vs `outfile`:** -- `outdir` — output goes to `lib/BundleImpl.js` (most bundles use this) -- `outfile` — output goes to `lib/BundleImpl/index.js` (use when you need to copy companion files like binaries next to the bundle) - -### 5. Create the typed wrapper `src/Bundle.ts` - -This file lives in the main package source (NOT in the bundle directory). It provides TypeScript types while loading the bundled code at runtime: - -```typescript -// packages//src/Bundle.ts -// (or src/subdir/Bundle.ts if it belongs in a subdirectory) - -export const someLib: typeof import('../bundles//node_modules/some-lib') - = require('./BundleImpl').someLib; - -export const SomeClass: typeof import('../bundles//node_modules/some-lib').SomeClass - = require('./BundleImpl').SomeClass; - -// Re-export types if needed: -export type { SomeType } from '../bundles//node_modules/some-lib'; -``` - -The pattern is: `typeof import('../bundles//node_modules/...')` for the type, `require('./BundleImpl').` for the value. - -If the wrapper lives in a subdirectory (e.g. `src/common/Bundle.ts`), adjust the `outdir` accordingly so the BundleImpl ends up next to the compiled wrapper: -```javascript -// in build.js -outdir: 'packages//lib/common', -``` - -### 6. 
Build and verify - -```bash -npm run build -``` - -Or if watch is running, it will pick up changes automatically. - -### 7. Use the bundle in application code - -Import from the wrapper file, never from the bundle directory or `node_modules` directly: - -```typescript -import { someLib } from '../Bundle'; -``` - -## Existing Bundles Reference - -### playwright-core bundles - -| Bundle | Deps | Output | -|--------|------|--------| -| `utils` | colors, commander, debug, diff, dotenv, graceful-fs, https-proxy-agent, jpeg-js, mime, minimatch, open, pngjs, progress, proxy-from-env, socks-proxy-agent, ws, yaml | `lib/utilsBundleImpl/index.js` | -| `zip` | yauzl, yazl, get-stream, debug | `lib/zipBundleImpl.js` | -| `mcp` | @modelcontextprotocol/sdk, zod, zod-to-json-schema | `lib/mcpBundleImpl/index.js` | - -### playwright bundles - -| Bundle | Deps | Output | -|--------|------|--------| -| `utils` | chokidar, enquirer, json5, source-map-support, stoppable, unified, remark-parse | `lib/utilsBundleImpl.js` | -| `babel` | ~30 @babel/* packages | `lib/transform/babelBundleImpl.js` | -| `expect` | expect, jest-matcher-utils | `lib/common/expectBundleImpl.js` | - -## Advanced Patterns - -### Adding a dep to an existing bundle - -If the dep logically belongs with an existing bundle (e.g. a new utility lib → `utils` bundle): - -1. Add the dependency to the existing `bundles//package.json` -2. Run `npm install` in that bundle directory -3. Add the import/export to the existing `src/BundleImpl.ts` -4. Add the typed re-export to the existing `src/Bundle.ts` - -### Vendored third-party code - -If a package can't be bundled by esbuild (e.g. it uses dynamic requires or has runtime file dependencies), place a modified copy in `bundles//src/third_party/` and require it from the BundleImpl. See `bundles/zip/src/third_party/extract-zip.js` for an example. - -### External dependencies - -Use `external: ['pkg']` in the build.js config when a dependency should NOT be bundled — e.g. 
optional peer deps that users install themselves. These must be available at runtime in the consumer's `node_modules`. - -### Module aliases - -Use `alias: { 'module-name': 'local-file.ts' }` to replace a dependency with a custom local implementation. The alias path is relative to the bundle's `modulePath`. See the `mcp` bundle's `raw-body` alias for an example. diff --git a/.claude/skills/real-time-features/SKILL.md b/.claude/skills/real-time-features/SKILL.md deleted file mode 100644 index 51ad1c4ef..000000000 --- a/.claude/skills/real-time-features/SKILL.md +++ /dev/null @@ -1,703 +0,0 @@ ---- -name: real-time-features -description: Implement real-time functionality using WebSockets, Server-Sent Events (SSE), or long polling. Use when building chat applications, live dashboards, collaborative editing, notifications, or any feature requiring instant updates. ---- - -# Real-Time Features - -## Overview - -Implement real-time bidirectional communication between clients and servers for instant data synchronization and live updates. - -## When to Use - -- Chat and messaging applications -- Live dashboards and analytics -- Collaborative editing (Google Docs-style) -- Real-time notifications -- Live sports scores or stock tickers -- Multiplayer games -- Live auctions or bidding systems -- IoT device monitoring -- Real-time location tracking - -## Technologies Comparison - -| Technology | Direction | Use Case | Browser Support | -|------------|-----------|----------|-----------------| -| **WebSockets** | Bidirectional | Chat, gaming, collaboration | Excellent | -| **SSE** | Server → Client | Live updates, notifications | Good (no IE) | -| **Long Polling** | Request/Response | Fallback, simple updates | Universal | -| **WebRTC** | Peer-to-peer | Video/audio streaming | Good | - -## Implementation Examples - -### 1. 
**WebSocket Server (Node.js)**
-
-```typescript
-// server.ts
-import WebSocket, { WebSocketServer } from 'ws';
-import { createServer } from 'http';
-
-interface Message {
-  type: 'join' | 'message' | 'leave' | 'typing';
-  userId: string;
-  username: string;
-  content?: string;
-  timestamp: number;
-}
-
-interface Client {
-  ws: WebSocket;
-  userId: string;
-  username: string;
-  roomId: string;
-}
-
-class ChatServer {
-  private wss: WebSocketServer;
-  private clients: Map<string, Client> = new Map();
-  private rooms: Map<string, Set<string>> = new Map();
-
-  constructor(port: number) {
-    const server = createServer();
-    this.wss = new WebSocketServer({ server });
-
-    this.wss.on('connection', this.handleConnection.bind(this));
-
-    server.listen(port, () => {
-      console.log(`WebSocket server running on port ${port}`);
-    });
-
-    // Heartbeat to detect disconnections
-    this.startHeartbeat();
-  }
-
-  private handleConnection(ws: WebSocket): void {
-    const clientId = this.generateId();
-
-    console.log(`New connection: ${clientId}`);
-
-    ws.on('message', (data: string) => {
-      try {
-        const message: Message = JSON.parse(data.toString());
-        this.handleMessage(clientId, message, ws);
-      } catch (error) {
-        console.error('Invalid message format:', error);
-      }
-    });
-
-    ws.on('close', () => {
-      this.handleDisconnect(clientId);
-    });
-
-    ws.on('error', (error) => {
-      console.error(`WebSocket error for ${clientId}:`, error);
-    });
-
-    // Keep connection alive
-    (ws as any).isAlive = true;
-    ws.on('pong', () => {
-      (ws as any).isAlive = true;
-    });
-  }
-
-  private handleMessage(
-    clientId: string,
-    message: Message,
-    ws: WebSocket
-  ): void {
-    switch (message.type) {
-      case 'join':
-        this.handleJoin(clientId, message, ws);
-        break;
-
-      case 'message':
-        this.broadcastToRoom(clientId, message);
-        break;
-
-      case 'typing':
-        this.broadcastToRoom(clientId, message, [clientId]);
-        break;
-
-      case 'leave':
-        this.handleDisconnect(clientId);
-        break;
-    }
-  }
-
-  private handleJoin(
-    clientId: string,
-    message:
Message, - ws: WebSocket - ): void { - const client: Client = { - ws, - userId: message.userId, - username: message.username, - roomId: 'general' // Could be dynamic - }; - - this.clients.set(clientId, client); - - // Add to room - if (!this.rooms.has(client.roomId)) { - this.rooms.set(client.roomId, new Set()); - } - this.rooms.get(client.roomId)!.add(clientId); - - // Notify room - this.broadcastToRoom(clientId, { - type: 'join', - userId: message.userId, - username: message.username, - timestamp: Date.now() - }); - - // Send room state to new user - this.sendRoomState(clientId); - } - - private broadcastToRoom( - senderId: string, - message: Message, - exclude: string[] = [] - ): void { - const sender = this.clients.get(senderId); - if (!sender) return; - - const roomClients = this.rooms.get(sender.roomId); - if (!roomClients) return; - - const payload = JSON.stringify(message); - - roomClients.forEach(clientId => { - if (!exclude.includes(clientId)) { - const client = this.clients.get(clientId); - if (client && client.ws.readyState === WebSocket.OPEN) { - client.ws.send(payload); - } - } - }); - } - - private sendRoomState(clientId: string): void { - const client = this.clients.get(clientId); - if (!client) return; - - const roomClients = this.rooms.get(client.roomId); - if (!roomClients) return; - - const users = Array.from(roomClients) - .map(id => this.clients.get(id)) - .filter(c => c && c.userId !== client.userId) - .map(c => ({ userId: c!.userId, username: c!.username })); - - client.ws.send(JSON.stringify({ - type: 'room_state', - users, - timestamp: Date.now() - })); - } - - private handleDisconnect(clientId: string): void { - const client = this.clients.get(clientId); - if (!client) return; - - // Remove from room - const roomClients = this.rooms.get(client.roomId); - if (roomClients) { - roomClients.delete(clientId); - - // Notify others - this.broadcastToRoom(clientId, { - type: 'leave', - userId: client.userId, - username: client.username, - 
timestamp: Date.now() - }); - } - - this.clients.delete(clientId); - console.log(`Client disconnected: ${clientId}`); - } - - private startHeartbeat(): void { - setInterval(() => { - this.wss.clients.forEach((ws: any) => { - if (ws.isAlive === false) { - return ws.terminate(); - } - ws.isAlive = false; - ws.ping(); - }); - }, 30000); - } - - private generateId(): string { - return Math.random().toString(36).substr(2, 9); - } -} - -// Start server -new ChatServer(8080); -``` - -### 2. **WebSocket Client (React)** - -```typescript -// useWebSocket.ts -import { useEffect, useRef, useState, useCallback } from 'react'; - -interface UseWebSocketOptions { - url: string; - onMessage?: (data: any) => void; - onOpen?: () => void; - onClose?: () => void; - onError?: (error: Event) => void; - reconnectAttempts?: number; - reconnectInterval?: number; -} - -export const useWebSocket = (options: UseWebSocketOptions) => { - const { - url, - onMessage, - onOpen, - onClose, - onError, - reconnectAttempts = 5, - reconnectInterval = 3000 - } = options; - - const [isConnected, setIsConnected] = useState(false); - const [connectionStatus, setConnectionStatus] = useState< - 'connecting' | 'connected' | 'disconnected' | 'error' - >('connecting'); - - const wsRef = useRef(null); - const reconnectCountRef = useRef(0); - const reconnectTimeoutRef = useRef(); - - const connect = useCallback(() => { - try { - setConnectionStatus('connecting'); - const ws = new WebSocket(url); - - ws.onopen = () => { - console.log('WebSocket connected'); - setIsConnected(true); - setConnectionStatus('connected'); - reconnectCountRef.current = 0; - onOpen?.(); - }; - - ws.onmessage = (event) => { - try { - const data = JSON.parse(event.data); - onMessage?.(data); - } catch (error) { - console.error('Failed to parse message:', error); - } - }; - - ws.onerror = (error) => { - console.error('WebSocket error:', error); - setConnectionStatus('error'); - onError?.(error); - }; - - ws.onclose = () => { - 
console.log('WebSocket disconnected'); - setIsConnected(false); - setConnectionStatus('disconnected'); - onClose?.(); - - // Attempt reconnection - if (reconnectCountRef.current < reconnectAttempts) { - reconnectCountRef.current++; - console.log( - `Reconnecting... (${reconnectCountRef.current}/${reconnectAttempts})` - ); - reconnectTimeoutRef.current = setTimeout(() => { - connect(); - }, reconnectInterval); - } - }; - - wsRef.current = ws; - } catch (error) { - console.error('Failed to connect:', error); - setConnectionStatus('error'); - } - }, [url, onMessage, onOpen, onClose, onError, reconnectAttempts, reconnectInterval]); - - const disconnect = useCallback(() => { - if (reconnectTimeoutRef.current) { - clearTimeout(reconnectTimeoutRef.current); - } - wsRef.current?.close(); - wsRef.current = null; - }, []); - - const send = useCallback((data: any) => { - if (wsRef.current?.readyState === WebSocket.OPEN) { - wsRef.current.send(JSON.stringify(data)); - } else { - console.warn('WebSocket is not connected'); - } - }, []); - - useEffect(() => { - connect(); - return () => { - disconnect(); - }; - }, [connect, disconnect]); - - return { - isConnected, - connectionStatus, - send, - disconnect, - reconnect: connect - }; -}; - -// Usage in component -const ChatComponent: React.FC = () => { - const [messages, setMessages] = useState([]); - - const { isConnected, send } = useWebSocket({ - url: 'ws://localhost:8080', - onMessage: (data) => { - if (data.type === 'message') { - setMessages(prev => [...prev, data]); - } - }, - onOpen: () => { - send({ - type: 'join', - userId: 'user123', - username: 'John Doe', - timestamp: Date.now() - }); - } - }); - - const sendMessage = (content: string) => { - send({ - type: 'message', - userId: 'user123', - username: 'John Doe', - content, - timestamp: Date.now() - }); - }; - - return ( -
-    <div>
-      <div>Status: {isConnected ? 'Connected' : 'Disconnected'}</div>
-      <div>
-        {messages.map((msg, i) => (
-          <div key={i}>{msg.username}: {msg.content}</div>
-        ))}
-      </div>
-    </div>
    - ); -}; -``` - -### 3. **Server-Sent Events (SSE)** - -```typescript -// server.ts - SSE endpoint -import express from 'express'; - -const app = express(); - -interface Client { - id: string; - res: express.Response; -} - -class SSEManager { - private clients: Client[] = []; - - addClient(id: string, res: express.Response): void { - // Set SSE headers - res.setHeader('Content-Type', 'text/event-stream'); - res.setHeader('Cache-Control', 'no-cache'); - res.setHeader('Connection', 'keep-alive'); - res.setHeader('Access-Control-Allow-Origin', '*'); - - this.clients.push({ id, res }); - - // Send initial connection event - this.sendToClient(id, { - type: 'connected', - clientId: id, - timestamp: Date.now() - }); - - console.log(`Client ${id} connected. Total: ${this.clients.length}`); - } - - removeClient(id: string): void { - this.clients = this.clients.filter(client => client.id !== id); - console.log(`Client ${id} disconnected. Total: ${this.clients.length}`); - } - - sendToClient(id: string, data: any): void { - const client = this.clients.find(c => c.id === id); - if (client) { - client.res.write(`data: ${JSON.stringify(data)}\n\n`); - } - } - - broadcast(data: any, excludeId?: string): void { - const message = `data: ${JSON.stringify(data)}\n\n`; - this.clients.forEach(client => { - if (client.id !== excludeId) { - client.res.write(message); - } - }); - } - - sendEvent(event: string, data: any): void { - const message = `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`; - this.clients.forEach(client => { - client.res.write(message); - }); - } -} - -const sseManager = new SSEManager(); - -app.get('/events', (req, res) => { - const clientId = Math.random().toString(36).substr(2, 9); - - sseManager.addClient(clientId, res); - - req.on('close', () => { - sseManager.removeClient(clientId); - }); -}); - -// Simulate real-time updates -setInterval(() => { - sseManager.broadcast({ - type: 'update', - value: Math.random() * 100, - timestamp: Date.now() - }); -}, 
5000); - -app.listen(3000, () => { - console.log('SSE server running on port 3000'); -}); -``` - -```typescript -// client.ts - SSE client -class SSEClient { - private eventSource: EventSource | null = null; - private reconnectAttempts = 0; - private maxReconnectAttempts = 5; - - connect(url: string, handlers: { - onMessage?: (data: any) => void; - onError?: (error: Event) => void; - onOpen?: () => void; - }): void { - this.eventSource = new EventSource(url); - - this.eventSource.onopen = () => { - console.log('SSE connected'); - this.reconnectAttempts = 0; - handlers.onOpen?.(); - }; - - this.eventSource.onmessage = (event) => { - try { - const data = JSON.parse(event.data); - handlers.onMessage?.(data); - } catch (error) { - console.error('Failed to parse SSE data:', error); - } - }; - - this.eventSource.onerror = (error) => { - console.error('SSE error:', error); - handlers.onError?.(error); - - if (this.reconnectAttempts < this.maxReconnectAttempts) { - this.reconnectAttempts++; - setTimeout(() => { - console.log('Reconnecting to SSE...'); - this.connect(url, handlers); - }, 3000); - } - }; - - // Custom event listeners - this.eventSource.addEventListener('custom-event', (event: any) => { - console.log('Custom event:', JSON.parse(event.data)); - }); - } - - disconnect(): void { - this.eventSource?.close(); - this.eventSource = null; - } -} - -// Usage -const client = new SSEClient(); -client.connect('http://localhost:3000/events', { - onMessage: (data) => { - console.log('Received:', data); - }, - onOpen: () => { - console.log('Connected to server'); - } -}); -``` - -### 4. 
**Socket.IO (Production-Ready)** - -```typescript -// server.ts -import { Server } from 'socket.io'; -import { createServer } from 'http'; - -const httpServer = createServer(); -const io = new Server(httpServer, { - cors: { - origin: process.env.CLIENT_URL || 'http://localhost:3000', - methods: ['GET', 'POST'] - }, - pingTimeout: 60000, - pingInterval: 25000 -}); - -// Middleware -io.use((socket, next) => { - const token = socket.handshake.auth.token; - if (isValidToken(token)) { - next(); - } else { - next(new Error('Authentication error')); - } -}); - -io.on('connection', (socket) => { - console.log(`User connected: ${socket.id}`); - - // Join room - socket.on('join-room', (roomId: string) => { - socket.join(roomId); - socket.to(roomId).emit('user-joined', { - userId: socket.id, - timestamp: Date.now() - }); - }); - - // Handle messages - socket.on('message', (data) => { - const roomId = Array.from(socket.rooms)[1]; // First is own ID - io.to(roomId).emit('message', { - ...data, - userId: socket.id, - timestamp: Date.now() - }); - }); - - // Typing indicator - socket.on('typing', (isTyping: boolean) => { - const roomId = Array.from(socket.rooms)[1]; - socket.to(roomId).emit('user-typing', { - userId: socket.id, - isTyping - }); - }); - - socket.on('disconnect', () => { - console.log(`User disconnected: ${socket.id}`); - }); -}); - -httpServer.listen(3001); - -function isValidToken(token: string): boolean { - // Implement token validation - return true; -} -``` - -## Best Practices - -### ✅ DO -- Implement reconnection logic with exponential backoff -- Use heartbeat/ping-pong to detect dead connections -- Validate and sanitize all messages -- Implement authentication and authorization -- Handle connection limits and rate limiting -- Use compression for large payloads -- Implement proper error handling -- Monitor connection health -- Use rooms/channels for targeted messaging -- Implement graceful shutdown - -### ❌ DON'T -- Send sensitive data without encryption -- 
Keep connections open indefinitely without cleanup -- Broadcast to all users when targeted messaging suffices -- Ignore connection state management -- Send large payloads frequently -- Skip message validation -- Forget about mobile/unstable connections -- Ignore scaling considerations - -## Performance Optimization - -```typescript -// Message batching -class MessageBatcher { - private queue: any[] = []; - private timer: NodeJS.Timeout | null = null; - private batchSize = 10; - private batchDelay = 100; - - constructor( - private sendFn: (messages: any[]) => void - ) {} - - add(message: any): void { - this.queue.push(message); - - if (this.queue.length >= this.batchSize) { - this.flush(); - } else if (!this.timer) { - this.timer = setTimeout(() => this.flush(), this.batchDelay); - } - } - - private flush(): void { - if (this.queue.length > 0) { - this.sendFn(this.queue.splice(0)); - } - if (this.timer) { - clearTimeout(this.timer); - this.timer = null; - } - } -} -``` - -## Resources - -- [WebSocket API (MDN)](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) -- [Socket.IO Documentation](https://socket.io/docs/) -- [Server-Sent Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) -- [ws - WebSocket Library](https://github.com/websockets/ws) diff --git a/.claude/skills/sks/skill.md b/.claude/skills/sks/skill.md deleted file mode 100644 index 1e488d20d..000000000 --- a/.claude/skills/sks/skill.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -name: sks -description: 显示已激活的 Skills 状态和所有可用命令列表 ---- - -无需任何参数,**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/sks/status.py" -``` diff --git a/.claude/skills/sks/status.py b/.claude/skills/sks/status.py deleted file mode 100755 index 32b3a2137..000000000 --- a/.claude/skills/sks/status.py +++ /dev/null @@ -1,87 +0,0 @@ -#!/usr/bin/env python3 -"""sks - Show active skills status and available commands.""" -import sys -import os -import subprocess -from pathlib 
import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def header(title: str) -> None: - line = "─" * 40 - print(f"\n{BOLD}{title}{RESET} {GRAY}{line}{RESET}") - - -def main() -> None: - claude_dir, skills_dir, groups_dir = get_paths() - - # Commands list - print(f"{BOLD}可用命令{RESET}") - cmds = [ - ("/sks", "显示已激活 skill 和分组概览"), - ("/sksls", "列出所有组和 skill 的激活状态"), - ("/skssearch <词>", "搜索 SkillsMP(关键词 + AI 语义混合)"), - ("/sksadd <组> ", "从搜索结果安装第 N 条到指定组"), - ("/skson <组>", "激活整组"), - ("/sksoff <组>", "关闭整组"), - ("/sksgnew <组>", "创建新分组"), - ("/sksgrm <组>", "删除整组"), - ("/sksrm <组/skill>", "删除单个 skill"), - ] - for cmd, desc in cmds: - print(f" {CYAN}{cmd:<20}{RESET} {desc}") - - # Active skills - header("已激活的 Skills") - found = 0 - if skills_dir.exists(): - for item in sorted(skills_dir.iterdir()): - if item.is_symlink(): - target = item.resolve() - group = target.parent.name - print(f" ✅ {GREEN}{item.name}{RESET} {GRAY}← {group}{RESET}") - found += 1 - if found == 0: - print(f" {GRAY}(无激活的分组 skill){RESET}") - - # Groups overview - header("分组库") - if groups_dir.exists() and any(groups_dir.iterdir()): - for group_dir in sorted(groups_dir.iterdir()): - if not group_dir.is_dir(): - continue - gname = group_dir.name - total = sum(1 for d in group_dir.iterdir() if d.is_dir()) - active = sum( - 1 for link in skills_dir.iterdir() - if link.is_symlink() and gname in str(link.resolve()) - ) if skills_dir.exists() else 0 - print(f" 📁 {BOLD}{gname}{RESET} 
{GREEN}{active}{RESET}{GRAY}/{total} 已激活{RESET}") - else: - print(f" {GRAY}(暂无分组,运行 /sksgnew <组名> 创建){RESET}") - - print() - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/sksadd/add.py b/.claude/skills/sksadd/add.py deleted file mode 100755 index 9fedfa3b9..000000000 --- a/.claude/skills/sksadd/add.py +++ /dev/null @@ -1,147 +0,0 @@ -#!/usr/bin/env python3 -"""sksadd - Install skill from last search results into a group.""" -import json -import re -import shutil -import subprocess -import sys -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def extract_error_reason(stdout: str, stderr: str) -> str: - """Extract a concise failure reason from npx skills output.""" - combined = stdout + "\n" + stderr - # Common patterns from npx skills output - if "No matching skills found" in combined: - m = re.search(r"No matching skills found for: (.+)", combined) - return f"路径不存在:{m.group(1).strip()}" if m else "路径不存在" - if "Could not find" in combined: - return "找不到 skill(路径错误或仓库无此 skill)" - if "ENOTFOUND" in combined or "ECONNREFUSED" in combined: - return "网络连接失败" - if "404" in combined: - return "仓库或路径 404" - if "already exists" in combined: - return "skill 已存在(先删除再安装)" - # Last resort: first non-empty stderr line - for line in stderr.strip().splitlines(): - line = line.strip() - if line and not line.startswith("npm"): - return line[:120] - return "未知错误(运行 npx skills add 查看详情)" - - -def install_one(index: int, data: list, 
group: str, - skills_dir: Path, group_dir: Path) -> bool: - """Install a single skill. Returns True on success.""" - skill = data[index] - install_cmd = skill.get("installCmd", "") - skill_name = (install_cmd.split("@")[-1].split("/")[-1] - if "@" in install_cmd else install_cmd.split("/")[-1]) - - print(f"{GRAY}[#{index+1}] 正在安装 {skill_name}...{RESET}") - - if not install_cmd: - print(f"{RED}[#{index+1}] ❌ {skill_name} — 无安装命令(AI 搜索结果数据不完整){RESET}") - return False - - try: - result = subprocess.run( - ["npx", "skills", "add", install_cmd, "--agent", "claude-code", "--copy", "-y"], - capture_output=True, text=True, stdin=subprocess.DEVNULL, timeout=30 - ) - except subprocess.TimeoutExpired: - print(f"{RED}[#{index+1}] ❌ {skill_name} — 超时(30s){RESET}") - return False - - if result.returncode != 0: - reason = extract_error_reason(result.stdout, result.stderr) - print(f"{RED}[#{index+1}] ❌ {skill_name} — {reason}{RESET}") - return False - - installed = skills_dir / skill_name - if not installed.exists(): - print(f"{RED}[#{index+1}] ❌ {skill_name} — 安装后找不到文件(skill 名与目录名不匹配){RESET}") - return False - - dest = group_dir / skill_name - shutil.move(str(installed), str(dest)) - print(f"[#{index+1}] ✅ {BOLD}{skill_name}{RESET} → {group}") - return True - - -def main() -> None: - if len(sys.argv) < 3: - print(f"{GRAY}用法:/sksadd <组名> <编号> [编号2 编号3 ...]{RESET}") - print(f"{GRAY}先运行 {RESET}{CYAN}/skssearch <关键词>{RESET}{GRAY} 获取编号。{RESET}") - sys.exit(1) - - group = sys.argv[1] - try: - indices = [int(x) - 1 for x in sys.argv[2:]] - except ValueError: - print(f"❌ 编号必须是整数") - sys.exit(1) - - claude_dir, skills_dir, groups_dir = get_paths() - last_search = claude_dir / ".sks-last-search.json" - - if not last_search.exists(): - print(f"❌ 没有搜索记录,请先运行 {CYAN}/skssearch <关键词>{RESET}") - sys.exit(1) - - group_dir = groups_dir / group - if not group_dir.exists(): - print(f"❌ 组 '{group}' 不存在,先运行 {CYAN}/sksgnew {group}{RESET}") - sys.exit(1) - - data = json.loads(last_search.read_text()) - 
for i in indices: - if i < 0 or i >= len(data): - print(f"❌ 编号 {i+1} 超出范围(共 {len(data)} 条结果)") - sys.exit(1) - - results = [None] * len(indices) - - # npx skills 有全局锁,不支持并行,只能顺序安装 - for pos, idx in enumerate(indices): - results[pos] = install_one(idx, data, group, skills_dir, group_dir) - - success = sum(1 for r in results if r) - total = len(indices) - - if total > 1: - print(f"\n{BOLD}安装完成:{success}/{total} 成功{RESET}") - - if success > 0: - print(f"{GRAY}激活请运行:{RESET}{CYAN}/skson {group}{RESET}") - print(f"\n{BOLD}⚠️ 新安装的 skill 需要重启 session 才能生效{RESET}") - print(f"{GRAY}请关闭当前 Claude Code 会话,重新打开后再运行 /skson {group}{RESET}\n") - - if success < total: - sys.exit(1) - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/sksadd/skill.md b/.claude/skills/sksadd/skill.md deleted file mode 100644 index 48dff34a0..000000000 --- a/.claude/skills/sksadd/skill.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -name: sksadd -description: 从上次搜索结果安装 skill 到指定组 ---- - -从对话上下文推断组名和编号(如用户说"安装第3个"、"装到 docs 组"、"装1到5"),**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/sksadd/add.py" "" "" ["" ...] 
-``` - -支持多个编号,顺序安装。组名或编号不明确时才询问用户。 diff --git a/.claude/skills/sksgnew/gnew.py b/.claude/skills/sksgnew/gnew.py deleted file mode 100755 index 8a852c30b..000000000 --- a/.claude/skills/sksgnew/gnew.py +++ /dev/null @@ -1,49 +0,0 @@ -#!/usr/bin/env python3 -"""sksgnew - Create a new skill group.""" -import os -import subprocess -import sys -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def main() -> None: - if len(sys.argv) < 2: - print(f"{GRAY}用法:/sksgnew <组名>{RESET}") - sys.exit(1) - - group = sys.argv[1] - _, _, groups_dir = get_paths() - group_dir = groups_dir / group - - if group_dir.exists(): - print(f"⚠️ 组 {BOLD}{group}{RESET} 已存在") - sys.exit(0) - - group_dir.mkdir(parents=True) - print(f"✅ 已创建组 {BOLD}{group}{RESET}") - print(f"{GRAY}搜索 skill:{RESET}{CYAN}/skssearch <关键词>{RESET}{GRAY},然后运行 {RESET}{CYAN}/sksadd {group} <编号>{RESET}\n") - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/sksgnew/skill.md b/.claude/skills/sksgnew/skill.md deleted file mode 100644 index d66c0e362..000000000 --- a/.claude/skills/sksgnew/skill.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -name: sksgnew -description: 创建新的 skill 分组 ---- - -从对话上下文推断组名(如用户说"新建一个 docs 组"、"创建 writing 分组"),**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/sksgnew/gnew.py" "" -``` - -组名不明确时才询问用户。 diff --git a/.claude/skills/sksgrm/grm.py b/.claude/skills/sksgrm/grm.py deleted file mode 100755 index 
e4f0b7cbd..000000000 --- a/.claude/skills/sksgrm/grm.py +++ /dev/null @@ -1,57 +0,0 @@ -#!/usr/bin/env python3 -"""sksgrm - Delete an entire skill group.""" -import os -import shutil -import subprocess -import sys -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def main() -> None: - if len(sys.argv) < 2: - print(f"{GRAY}用法:/sksgrm <组名>{RESET}") - sys.exit(1) - - group = sys.argv[1] - _, skills_dir, groups_dir = get_paths() - group_dir = groups_dir / group - - if not group_dir.exists(): - print(f"❌ 组 '{group}' 不存在") - sys.exit(1) - - for skill_dir in group_dir.iterdir(): - if not skill_dir.is_dir(): - continue - link = skills_dir / skill_dir.name - if link.is_symlink(): - link.unlink() - print(f" {GRAY}○ 关闭 {skill_dir.name}{RESET}") - - shutil.rmtree(group_dir) - print(f"✅ 已删除组 {BOLD}{group}{RESET}\n") - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/sksgrm/skill.md b/.claude/skills/sksgrm/skill.md deleted file mode 100644 index 2bde4463f..000000000 --- a/.claude/skills/sksgrm/skill.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -name: sksgrm -description: 删除整个 skill 分组 ---- - -从对话上下文推断组名(如用户说"删掉 docs 组"、"移除整个 writing 分组"),**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/sksgrm/grm.py" "" -``` - -组名不明确时才询问用户。 diff --git a/.claude/skills/sksls/ls.py b/.claude/skills/sksls/ls.py deleted file mode 100755 index 629865ee6..000000000 --- a/.claude/skills/sksls/ls.py +++ /dev/null @@ 
-1,54 +0,0 @@ -#!/usr/bin/env python3 -"""sksls - List all skill groups and their activation status.""" -import os -import subprocess -import sys -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def main() -> None: - _, skills_dir, groups_dir = get_paths() - - if not groups_dir.exists() or not any(groups_dir.iterdir()): - print(f"{GRAY}暂无分组,运行 /sksgnew <组名> 创建第一个组。{RESET}") - sys.exit(0) - - for group_dir in sorted(groups_dir.iterdir()): - if not group_dir.is_dir(): - continue - print(f"\n📁 {BOLD}{group_dir.name}{RESET}") - for skill_dir in sorted(group_dir.iterdir()): - if not skill_dir.is_dir(): - continue - link = skills_dir / skill_dir.name - if link.is_symlink(): - print(f" ✅ {GREEN}{skill_dir.name}{RESET}") - else: - print(f" {GRAY}○ {skill_dir.name}{RESET}") - - print() - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/sksls/skill.md b/.claude/skills/sksls/skill.md deleted file mode 100644 index 9762dcd74..000000000 --- a/.claude/skills/sksls/skill.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -name: sksls -description: 列出所有组和 skill 的激活状态 ---- - -无需任何参数,**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/sksls/ls.py" -``` diff --git a/.claude/skills/sksoff/off.py b/.claude/skills/sksoff/off.py deleted file mode 100755 index 56b058538..000000000 --- a/.claude/skills/sksoff/off.py +++ /dev/null @@ -1,57 +0,0 @@ -#!/usr/bin/env python3 -"""sksoff - Deactivate all skills in a 
group.""" -import os -import subprocess -import sys -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def main() -> None: - if len(sys.argv) < 2: - print(f"{GRAY}用法:/sksoff <组名>{RESET}") - sys.exit(1) - - group = sys.argv[1] - _, skills_dir, groups_dir = get_paths() - group_dir = groups_dir / group - - if not group_dir.exists(): - print(f"❌ 组 '{group}' 不存在,运行 /sksls 查看所有组") - sys.exit(1) - - count = 0 - for skill_dir in sorted(group_dir.iterdir()): - if not skill_dir.is_dir(): - continue - link = skills_dir / skill_dir.name - if link.is_symlink(): - link.unlink() - print(f" {GRAY}○ {skill_dir.name}{RESET}") - count += 1 - - print(f"\n{BOLD}已关闭 {count} 个 skill{RESET}{GRAY}(组: {group}){RESET}\n") - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/sksoff/skill.md b/.claude/skills/sksoff/skill.md deleted file mode 100644 index c0149df9d..000000000 --- a/.claude/skills/sksoff/skill.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -name: sksoff -description: 关闭整组 skill ---- - -从对话上下文推断组名(如用户说"关掉 docs 组"、"禁用 writing"),**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/sksoff/off.py" "" -``` - -组名不明确时才询问用户。 diff --git a/.claude/skills/skson/on.py b/.claude/skills/skson/on.py deleted file mode 100755 index ba7693e70..000000000 --- a/.claude/skills/skson/on.py +++ /dev/null @@ -1,60 +0,0 @@ -#!/usr/bin/env python3 -"""skson - Activate all skills in a group.""" -import os -import subprocess 
-import sys -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def main() -> None: - if len(sys.argv) < 2: - print(f"{GRAY}用法:/skson <组名>{RESET}") - sys.exit(1) - - group = sys.argv[1] - _, skills_dir, groups_dir = get_paths() - group_dir = groups_dir / group - - if not group_dir.exists(): - print(f"❌ 组 '{group}' 不存在,运行 /sksls 查看所有组") - sys.exit(1) - - skills_dir.mkdir(parents=True, exist_ok=True) - count = 0 - for skill_dir in sorted(group_dir.iterdir()): - if not skill_dir.is_dir(): - continue - link = skills_dir / skill_dir.name - if link.is_symlink(): - print(f" {GRAY}跳过 {skill_dir.name}(已激活){RESET}") - else: - link.symlink_to(skill_dir.resolve()) - print(f" ✅ {GREEN}{skill_dir.name}{RESET}") - count += 1 - - print(f"\n{BOLD}已激活 {count} 个 skill{RESET}{GRAY}(组: {group}){RESET}\n") - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/skson/skill.md b/.claude/skills/skson/skill.md deleted file mode 100644 index 043a6a0aa..000000000 --- a/.claude/skills/skson/skill.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -name: skson -description: 激活整组 skill ---- - -从对话上下文推断组名(如用户说"开启 docs 组"、"激活 writing"),**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/skson/on.py" "" -``` - -组名不明确时才询问用户。 diff --git a/.claude/skills/sksrm/rm.py b/.claude/skills/sksrm/rm.py deleted file mode 100755 index 3cd99cbb4..000000000 --- a/.claude/skills/sksrm/rm.py +++ /dev/null @@ -1,56 +0,0 @@ -#!/usr/bin/env 
python3 -"""sksrm - Remove a single skill from a group.""" -import os -import shutil -import subprocess -import sys -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def main() -> None: - if len(sys.argv) < 2 or "/" not in sys.argv[1]: - print(f"{GRAY}用法:/sksrm <组名/skill名>{RESET}") - print(f"{GRAY}示例:/sksrm frontend/react-expert{RESET}") - sys.exit(1) - - arg = sys.argv[1] - group, skill = arg.split("/", 1) - _, skills_dir, groups_dir = get_paths() - skill_dir = groups_dir / group / skill - - if not skill_dir.exists(): - print(f"❌ 未找到 {arg},运行 {CYAN}/sksls{RESET} 查看") - sys.exit(1) - - link = skills_dir / skill - if link.is_symlink(): - link.unlink() - print(f" {GRAY}○ 关闭 {skill}{RESET}") - - shutil.rmtree(skill_dir) - print(f"✅ 已删除 {BOLD}{arg}{RESET}\n") - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/sksrm/skill.md b/.claude/skills/sksrm/skill.md deleted file mode 100644 index 890227650..000000000 --- a/.claude/skills/sksrm/skill.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -name: sksrm -description: 删除单个 skill ---- - -从对话上下文推断目标(格式 `<组名>/`,如用户说"删掉 docs 组里的 readme-writer"),**直接运行**: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/sksrm/rm.py" "" -``` - -组名或 skill 名不明确时才询问用户。 diff --git a/.claude/skills/skssearch/search.py b/.claude/skills/skssearch/search.py deleted file mode 100755 index 972d48a66..000000000 --- a/.claude/skills/skssearch/search.py +++ /dev/null @@ -1,209 +0,0 @@ 
-#!/usr/bin/env python3 -"""skssearch - Search SkillsMP for skills.""" -import json -import os -import re -import sys -import subprocess -import threading -import urllib.parse -from pathlib import Path - -# ANSI colors -BOLD = "\033[1m" -CYAN = "\033[36m" -GREEN = "\033[32m" -GRAY = "\033[90m" -RED = "\033[31m" -RESET = "\033[0m" - - -def get_paths() -> tuple[Path, Path, Path]: - """Return (claude_dir, skills_dir, groups_dir).""" - try: - result = subprocess.run( - ["git", "rev-parse", "--show-toplevel"], - capture_output=True, text=True, check=True - ) - claude_dir = Path(result.stdout.strip()) / ".claude" - except subprocess.CalledProcessError: - claude_dir = Path.cwd() / ".claude" - return claude_dir, claude_dir / "skills", claude_dir / "skill-groups" - - -def load_api_key() -> str | None: - """Load SKILLSMP_API_KEY from environment or shell config.""" - key = os.environ.get("SKILLSMP_API_KEY") - if key: - return key - # Try to source shell configs - for cfg in ["~/.zshrc", "~/.bash_profile", "~/.bashrc"]: - result = subprocess.run( - ["bash", "-c", f"source {cfg} 2>/dev/null; echo $SKILLSMP_API_KEY"], - capture_output=True, text=True - ) - key = result.stdout.strip() - if key: - return key - return None - - -def fetch_url(url: str, api_key: str) -> dict: - """Fetch JSON via curl.""" - result = subprocess.run( - ["curl", "-s", "--max-time", "15", url, "-H", f"Authorization: Bearer {api_key}"], - capture_output=True, text=True - ) - if result.returncode != 0: - return {} - try: - return json.loads(result.stdout) - except json.JSONDecodeError: - return {} - - -def parse_install_cmd(github_url: str) -> str | None: - """Derive install command from githubUrl.""" - m = re.match(r"https://github\.com/([^/]+)/([^/]+)/tree/[^/]+(/.+)", github_url) - if not m: - return None - owner, repo, path = m.group(1), m.group(2), m.group(3).rstrip("/") - skill_name = path.split("/")[-1] - return f"{owner}/{repo}@{skill_name}" - - -def main() -> None: - if len(sys.argv) < 2: - 
print(f"{GRAY}用法:/skssearch <关键词>{RESET}") - sys.exit(1) - - keyword = " ".join(sys.argv[1:]) - - # Check for Chinese characters - if re.search(r"[\u4e00-\u9fff]", keyword): - print(f"❌ 检测到中文关键词,SkillsMP 仅支持英文搜索") - print(f"建议使用英文关键词,例如:readme writer / documentation / testing") - sys.exit(0) - - api_key = load_api_key() - if not api_key: - print("❌ 未找到 SKILLSMP_API_KEY") - print("请设置环境变量并写入 shell 配置:") - print(" echo 'export SKILLSMP_API_KEY=your_key' >> ~/.zshrc") - sys.exit(1) - - enc = urllib.parse.quote(keyword) - base = "https://skillsmp.com/api/v1/skills" - results: dict[str, dict] = {} - - kw_data: list[dict] = [] - ai_data: list[dict] = [] - - def fetch_kw(): - nonlocal kw_data - data = fetch_url(f"{base}/search?q={enc}&limit=20&sortBy=stars", api_key) - kw_data = data.get("data", {}).get("skills", []) if data.get("success") else [] - - def fetch_ai(): - nonlocal ai_data - data = fetch_url(f"{base}/ai-search?q={enc}", api_key) - ai_data = data.get("data", {}).get("data", []) if data.get("success") else [] - - t1 = threading.Thread(target=fetch_kw) - t2 = threading.Thread(target=fetch_ai) - t1.start(); t2.start() - t1.join(); t2.join() - - # Merge results - skills: dict[str, dict] = {} - for s in kw_data: - sid = s["id"] - skills[sid] = {**s, "_from": "kw"} - - for item in ai_data: - # AI search returns different structure: attributes.file contains skill metadata - file_meta = item.get("attributes", {}).get("file", {}) - sid = file_meta.get("skill-id") - if not sid: - # Fallback to old structure - s = item.get("skill", {}) - sid = s.get("id") - - if not sid: - continue - - # Build skill dict from file metadata - s = { - "id": sid, - "name": file_meta.get("skill-name", ""), - "description": "", # Not in file metadata, would need to parse content - "githubUrl": "", # Not available in this structure - "stars": 0, # Not available - "author": sid.split("-")[0] if "-" in sid else "", # Extract from ID - } - - if sid not in skills: - skills[sid] = {**s, "_from": 
"ai"} - elif skills[sid]["_from"] == "kw": - skills[sid]["_from"] = "both" - - if not kw_data and not ai_data: - print(f"\n{GRAY}两路搜索均无结果,建议使用更通用的英文关键词。{RESET}\n") - sys.exit(0) - - # Deduplicate by name, sort by stars - by_name: dict[str, dict] = {} - for s in skills.values(): - name = s.get("name", "") - if name not in by_name: - by_name[name] = s - else: - # Keep higher stars, but merge _from tags - existing = by_name[name] - if s.get("stars", 0) > existing.get("stars", 0): - # Replace with higher-starred version, but preserve _from info - s_from = s.get("_from", "kw") - e_from = existing.get("_from", "kw") - if s_from != e_from: - s["_from"] = "both" - by_name[name] = s - elif s.get("_from") != existing.get("_from"): - # Same or lower stars, but different source - mark as both - existing["_from"] = "both" - result = sorted(by_name.values(), key=lambda x: -x.get("stars", 0))[:20] - - # Save for sksadd - claude_dir, _, _ = get_paths() - save = [] - for s in result: - cmd = parse_install_cmd(s.get("githubUrl", "")) or s.get("name", "") - save.append({**s, "installCmd": cmd}) - (claude_dir / ".sks-last-search.json").write_text( - json.dumps(save, ensure_ascii=False, indent=2) - ) - - # Output - print(f"\n{BOLD}搜索 \"{keyword}\" 共 {len(result)} 条:{RESET}\n") - for i, s in enumerate(save, 1): - name = s.get("name", "") - desc = s.get("description", "").replace("\n", " ") - stars = s.get("stars", 0) - author = s.get("author", "") - cmd = s.get("installCmd", "") - src = s.get("_from", "kw") - tag = (f"{GREEN}[kw+ai]{RESET}" if src == "both" - else f"{CYAN}[ai]{RESET}" if src == "ai" - else f"{GRAY}[kw]{RESET}") - print(f" {CYAN}{i:>2}. {BOLD}{name}{RESET} {tag} {GRAY}⭐{stars} {author}{RESET}") - if desc: - print(f" {GRAY}{desc[:80]}{'...' 
if len(desc) > 80 else ''}{RESET}") - if cmd: - print(f" {GRAY}安装: npx skills add {cmd} --agent claude-code --copy -y{RESET}") - print() - - print(f"{GRAY}运行 /sksadd <组名> <编号> 安装到指定组。{RESET}") - print(f"{GRAY}如需先创建组,运行 /sksgnew <组名>。{RESET}\n") - - -if __name__ == "__main__": - main() diff --git a/.claude/skills/skssearch/skill.md b/.claude/skills/skssearch/skill.md deleted file mode 100644 index b93991415..000000000 --- a/.claude/skills/skssearch/skill.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -name: skssearch -description: 搜索 SkillsMP skill 库。当用户描述需求、询问某类 skill 是否存在、或明确搜索时触发。 ---- - -# skssearch - -## 关键词生成(最关键) - -SkillsMP 只支持英文。关键词生成的核心原则: - -> **像 skill 作者一样思考,而不是像用户一样搜索。** -> Skill 以「能力」命名,不以「需求」命名。 - -### 第一步:提取意图 - -从用户描述中识别: -- **核心动作**:用户想让 skill 做什么?(write / review / generate / analyze / test / format / search...) -- **目标对象**:作用在什么上?(readme / code / csv / api / image / pr / commit...) -- **领域/技术栈**(如有):react / python / sql / docker / markdown... - -### 第二步:生成关键词 - -用 1-2 个词,优先选: - -| 策略 | 示例 | -|------|------| -| 动作名词(skill 最常见命名) | `reviewer`、`writer`、`generator`、`analyzer` | -| 领域词(单词)| `csv`、`testing`、`readme`、`documentation` | -| 动作+领域(精确时用) | `code review`、`readme writer` | - -**不要用**:完整需求描述(`help me write project documentation`)、实现细节(`using langchain to process csv`)、中文词(无效) - -### 第三步:选最优关键词 - -- 优先选覆盖面最广的词(`testing` > `pytest unit test`) -- 同一概念有多个英文词时,选 skill 作者最可能用的那个 -- 不确定时,选最短的那个 - -### 示例 - -| 用户说的 | 思考过程 | 关键词 | -|---------|---------|--------| -| "帮我找个写 README 的" | 动作=write,对象=readme → skill 作者会叫它 readme writer | `readme writer` | -| "有没有代码审查的?" 
| 动作=review,对象=code → 常见名 | `code review` | -| "我需要处理 CSV 文件" | 对象=csv,动作=process → 领域词更精准 | `csv` | -| "找个能生成单元测试的" | 动作=generate,对象=test → 领域词 | `testing` | -| "有 React 相关的吗" | 技术栈=react → 直接用 | `react` | -| "想找个调试助手" | 动作=debug → 动作名词 | `debugging` | - -## 执行 - -确定关键词后,**直接运行**,不要询问确认: - -```bash -python3 "$(git rev-parse --show-toplevel 2>/dev/null)/.claude/skills/skssearch/search.py" "" -``` diff --git a/.claude/skills/tailwind-design-system/SKILL.md b/.claude/skills/tailwind-design-system/SKILL.md deleted file mode 100644 index 0a8f806f8..000000000 --- a/.claude/skills/tailwind-design-system/SKILL.md +++ /dev/null @@ -1,874 +0,0 @@ ---- -name: tailwind-design-system -description: Build scalable design systems with Tailwind CSS v4, design tokens, component libraries, and responsive patterns. Use when creating component libraries, implementing design systems, or standardizing UI patterns. ---- - -# Tailwind Design System (v4) - -Build production-ready design systems with Tailwind CSS v4, including CSS-first configuration, design tokens, component variants, responsive patterns, and accessibility. - -> **Note**: This skill targets Tailwind CSS v4 (2024+). For v3 projects, refer to the [upgrade guide](https://tailwindcss.com/docs/upgrade-guide). 
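Before the full Quick Start that follows, the v3-to-v4 shift is easiest to see in a minimal entry stylesheet. This is an illustrative sketch: the `--color-brand` token is a made-up example, not part of Tailwind's defaults.

```css
/* app.css - one file replaces both tailwind.config.ts
   and the three @tailwind directives from v3 */
@import "tailwindcss";

/* --color-brand is an illustrative token, not a Tailwind default.
   Any --color-* variable declared in @theme generates the matching
   utilities (bg-brand, text-brand, border-brand, ...) with no JS config. */
@theme {
  --color-brand: oklch(55% 0.2 260);
}
```

With this in place, `class="bg-brand"` works in any file Tailwind scans, which is the mechanism the Quick Start below builds on.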
-
-## When to Use This Skill
-
-- Creating a component library with Tailwind v4
-- Implementing design tokens and theming with CSS-first configuration
-- Building responsive and accessible components
-- Standardizing UI patterns across a codebase
-- Migrating from Tailwind v3 to v4
-- Setting up dark mode with native CSS features
-
-## Key v4 Changes
-
-| v3 Pattern | v4 Pattern |
-| --- | --- |
-| `tailwind.config.ts` | `@theme` in CSS |
-| `@tailwind base/components/utilities` | `@import "tailwindcss"` |
-| `darkMode: "class"` | `@custom-variant dark (&:where(.dark, .dark *))` |
-| `theme.extend.colors` | `@theme { --color-*: value }` |
-| `require("tailwindcss-animate")` | CSS `@keyframes` in `@theme` + `@starting-style` for entry animations |
-
-## Quick Start
-
-```css
-/* app.css - Tailwind v4 CSS-first configuration */
-@import "tailwindcss";
-
-/* Define your theme with @theme */
-@theme {
-  /* Semantic color tokens using OKLCH for better color perception */
-  --color-background: oklch(100% 0 0);
-  --color-foreground: oklch(14.5% 0.025 264);
-
-  --color-primary: oklch(14.5% 0.025 264);
-  --color-primary-foreground: oklch(98% 0.01 264);
-
-  --color-secondary: oklch(96% 0.01 264);
-  --color-secondary-foreground: oklch(14.5% 0.025 264);
-
-  --color-muted: oklch(96% 0.01 264);
-  --color-muted-foreground: oklch(46% 0.02 264);
-
-  --color-accent: oklch(96% 0.01 264);
-  --color-accent-foreground: oklch(14.5% 0.025 264);
-
-  --color-destructive: oklch(53% 0.22 27);
-  --color-destructive-foreground: oklch(98% 0.01 264);
-
-  --color-border: oklch(91% 0.01 264);
-  --color-ring: oklch(14.5% 0.025 264);
-
-  --color-card: oklch(100% 0 0);
-  --color-card-foreground: oklch(14.5% 0.025 264);
-
-  /* Ring offset for focus states */
-  --color-ring-offset: oklch(100% 0 0);
-
-  /* Radius tokens */
-  --radius-sm: 0.25rem;
-  --radius-md: 0.375rem;
-  --radius-lg: 0.5rem;
-  --radius-xl: 0.75rem;
-
-  /* Animation tokens - keyframes inside @theme are output when referenced by --animate-* variables */
-  --animate-fade-in: fade-in 0.2s ease-out;
-  --animate-fade-out: fade-out 0.2s ease-in;
-  --animate-slide-in: slide-in 0.3s ease-out;
-  --animate-slide-out: slide-out 0.3s ease-in;
-
-  @keyframes fade-in {
-    from {
-      opacity: 0;
-    }
-    to {
-      opacity: 1;
-    }
-  }
-
-  @keyframes fade-out {
-    from {
-      opacity: 1;
-    }
-    to {
-      opacity: 0;
-    }
-  }
-
-  @keyframes slide-in {
-    from {
-      transform: translateY(-0.5rem);
-      opacity: 0;
-    }
-    to {
-      transform: translateY(0);
-      opacity: 1;
-    }
-  }
-
-  @keyframes slide-out {
-    from {
-      transform: translateY(0);
-      opacity: 1;
-    }
-    to {
-      transform: translateY(-0.5rem);
-      opacity: 0;
-    }
-  }
-}
-
-/* Dark mode variant - use @custom-variant for class-based dark mode */
-@custom-variant dark (&:where(.dark, .dark *));
-
-/* Dark mode theme overrides */
-.dark {
-  --color-background: oklch(14.5% 0.025 264);
-  --color-foreground: oklch(98% 0.01 264);
-
-  --color-primary: oklch(98% 0.01 264);
-  --color-primary-foreground: oklch(14.5% 0.025 264);
-
-  --color-secondary: oklch(22% 0.02 264);
-  --color-secondary-foreground: oklch(98% 0.01 264);
-
-  --color-muted: oklch(22% 0.02 264);
-  --color-muted-foreground: oklch(65% 0.02 264);
-
-  --color-accent: oklch(22% 0.02 264);
-  --color-accent-foreground: oklch(98% 0.01 264);
-
-  --color-destructive: oklch(42% 0.15 27);
-  --color-destructive-foreground: oklch(98% 0.01 264);
-
-  --color-border: oklch(22% 0.02 264);
-  --color-ring: oklch(83% 0.02 264);
-
-  --color-card: oklch(14.5% 0.025 264);
-  --color-card-foreground: oklch(98% 0.01 264);
-
-  --color-ring-offset: oklch(14.5% 0.025 264);
-}
-
-/* Base styles */
-@layer base {
-  * {
-    @apply border-border;
-  }
-
-  body {
-    @apply bg-background text-foreground antialiased;
-  }
-}
-```
-
-## Core Concepts
-
-### 1. Design Token Hierarchy
-
-```
-Brand Tokens (abstract)
-  └── Semantic Tokens (purpose)
-        └── Component Tokens (specific)
-
-Example:
-  oklch(45% 0.2 260) → --color-primary → bg-primary
-```
-
-### 2. Component Architecture
-
-```
-Base styles → Variants → Sizes → States → Overrides
-```
-
-## Patterns
-
-### Pattern 1: CVA (Class Variance Authority) Components
-
-```typescript
-// components/ui/button.tsx
-import { Slot } from '@radix-ui/react-slot'
-import { cva, type VariantProps } from 'class-variance-authority'
-import { cn } from '@/lib/utils'
-
-const buttonVariants = cva(
-  // Base styles - v4 uses native CSS variables
-  'inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50',
-  {
-    variants: {
-      variant: {
-        default: 'bg-primary text-primary-foreground hover:bg-primary/90',
-        destructive: 'bg-destructive text-destructive-foreground hover:bg-destructive/90',
-        outline: 'border border-border bg-background hover:bg-accent hover:text-accent-foreground',
-        secondary: 'bg-secondary text-secondary-foreground hover:bg-secondary/80',
-        ghost: 'hover:bg-accent hover:text-accent-foreground',
-        link: 'text-primary underline-offset-4 hover:underline',
-      },
-      size: {
-        default: 'h-10 px-4 py-2',
-        sm: 'h-9 rounded-md px-3',
-        lg: 'h-11 rounded-md px-8',
-        icon: 'size-10',
-      },
-    },
-    defaultVariants: {
-      variant: 'default',
-      size: 'default',
-    },
-  }
-)
-
-export interface ButtonProps
-  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
-    VariantProps<typeof buttonVariants> {
-  asChild?: boolean
-}
-
-// React 19: No forwardRef needed
-export function Button({
-  className,
-  variant,
-  size,
-  asChild = false,
-  ref,
-  ...props
-}: ButtonProps & { ref?: React.Ref<HTMLButtonElement> }) {
-  const Comp = asChild ? Slot : 'button'
-  return (
-    <Comp ref={ref} className={cn(buttonVariants({ variant, size, className }))} {...props} />
-  )
-}
-
-// Usage
-<Button>...</Button>
-<Button variant="outline" size="sm">...</Button>
-```
-
-### Pattern 2: Compound Components (React 19)
-
-```typescript
-// components/ui/card.tsx
-import { cn } from '@/lib/utils'
-
-// React 19: ref is a regular prop, no forwardRef
-export function Card({
-  className,
-  ref,
-  ...props
-}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
-  return (
-    <div ref={ref} className={cn('rounded-lg border bg-card text-card-foreground shadow-sm', className)} {...props} />
-  )
-}
-
-export function CardHeader({
-  className,
-  ref,
-  ...props
-}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
-  return (
-    <div ref={ref} className={cn('flex flex-col space-y-1.5 p-6', className)} {...props} />
-  )
-}
-
-export function CardTitle({
-  className,
-  ref,
-  ...props
-}: React.HTMLAttributes<HTMLHeadingElement> & { ref?: React.Ref<HTMLHeadingElement> }) {
-  return (
-    <h3 ref={ref} className={cn('text-2xl font-semibold leading-none tracking-tight', className)} {...props} />
-  )
-}
-
-export function CardDescription({
-  className,
-  ref,
-  ...props
-}: React.HTMLAttributes<HTMLParagraphElement> & { ref?: React.Ref<HTMLParagraphElement> }) {
-  return (
-    <p ref={ref} className={cn('text-sm text-muted-foreground', className)} {...props} />
-  )
-}
-
-export function CardContent({
-  className,
-  ref,
-  ...props
-}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
-  return (
-    <div ref={ref} className={cn('p-6 pt-0', className)} {...props} />
-  )
-}
-
-export function CardFooter({
-  className,
-  ref,
-  ...props
-}: React.HTMLAttributes<HTMLDivElement> & { ref?: React.Ref<HTMLDivElement> }) {
-  return (
-    <div ref={ref} className={cn('flex items-center p-6 pt-0', className)} {...props} />
-  )
-}
-
-// Usage
-<Card>
-  <CardHeader>
-    <CardTitle>Account</CardTitle>
-    <CardDescription>Manage your account settings</CardDescription>
-  </CardHeader>
-  <CardContent>
-    ...
-  </CardContent>
-  <CardFooter>
-    ...
-  </CardFooter>
-</Card>
-```
-
-### Pattern 3: Form Components
-
-```typescript
-// components/ui/input.tsx
-import { cn } from '@/lib/utils'
-
-export interface InputProps extends React.InputHTMLAttributes<HTMLInputElement> {
-  error?: string
-  ref?: React.Ref<HTMLInputElement>
-}
-
-export function Input({ className, type, error, ref, ...props }: InputProps) {
-  return (
-    <div className="w-full">
-      <input
-        type={type}
-        ref={ref}
-        className={cn(
-          'flex h-10 w-full rounded-md border border-border bg-background px-3 py-2 text-sm placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50',
-          error && 'border-destructive',
-          className
-        )}
-        {...props}
-      />
-      {error && (
-        <p className="mt-1 text-sm text-destructive">{error}</p>
-      )}
-    </div>
-  )
-}
-
-// components/ui/label.tsx
-import { cva, type VariantProps } from 'class-variance-authority'
-
-const labelVariants = cva(
-  'text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70'
-)
-
-export function Label({
-  className,
-  ref,
-  ...props
-}: React.LabelHTMLAttributes<HTMLLabelElement> & { ref?: React.Ref<HTMLLabelElement> }) {
-  return (