Security scanner for AI-generated apps. Catches the bugs Cursor, Lovable, Windsurf, v0, and Claude routinely ship: hardcoded LLM keys, prompt-injection sinks, leaked Server Actions, hallucinated imports, missing auth on streaming endpoints, and the other "looks fine to a linter" issues that traditional tools miss.
```bash
# One-shot, no install
npx ubon@latest check

# Or install globally
npm install -g ubon

ubon check                    # fast static scan, exit 1 on errors
ubon scan --interactive       # walk through findings one by one
ubon check --json             # deterministic JSON for agents/CI
ubon rules list --json        # machine-readable rule catalog
ubon check --sarif out.sarif  # SARIF 2.1.0 for GitHub code scanning
ubon mcp                      # serve as an MCP tool to your AI assistant
ubon doctor                   # check environment and optional deps
```

Modern AI coding assistants are great at producing code that runs. They are routinely careless about code that's safe to deploy:
- Hardcoded LLM API keys in client bundles
- Server Actions with no auth check
- Streaming routes with no rate limit
- MCP server configs with literal secrets
- `import.meta.env.PUBLIC_*` reading server-only values
- `'use client'` files importing from `actions/`
- Edge runtime routes calling Node-only APIs
- Hallucinated imports that pass the type checker because the package never gets installed
Ubon's job is to catch those, fast, with high confidence and file:line context, and to expose them to the agent itself via JSON / NDJSON / MCP so the AI can fix what it broke.
v3.2.0 is an additive release for agentic development workflows: installable guardrails, richer machine-readable output, and a validation harness that proves Ubon catches planted AI-era bugs before a release ships.
- Agent harness installer: `ubon agent install --all --write` can generate Cursor, Claude Code, Codex, pre-commit, GitHub Actions, and `.gitignore` harness files from one dry-run-first workflow.
- Expanded Cursor hooks: templates now cover file edits, shell commands, MCP calls, prompt submission, stop gates, and pre-compaction context.
- Agent-specific rules (CC009–CC011): catches unknown Cursor hook events, broad agent autonomy, and dangerous reusable commands / skills.
- Agent-ready CLI: `ubon changed`, `ubon verify`, `ubon review`, `ubon rules list --json`, and presets for `agent`, `ci`, `release`, and `local` workflows.
- MCP upgrade: tools for changed-file scans, `baseSha`, verification, status, rule catalog access, and fix planning.
- Repair context: JSON / MCP output can include source context so agents have enough local evidence to patch findings.
- Validation harness: fixture benchmarks, CLI/MCP contract tests, deterministic fix/rescan checks, dogfood, and package dry-run verification are wired into `npm run verify:release`.
- Release discipline: `npm run dogfood` scans Ubon itself and must pass with 0 unsuppressed critical findings before publish.
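The pre-commit harness that the installer generates can also be wired up by hand. A minimal sketch of such a hook, assuming the `ubon changed` and `--fail-on` behavior described above (the file the installer actually writes may differ in detail):

```bash
#!/bin/sh
# .git/hooks/pre-commit — hypothetical minimal hook; the harness that
# `ubon agent install --all --write` generates may differ.
# Scan only changed files and block the commit on error-level findings.
npx ubon changed --fail-on error || {
  echo "ubon found blocking issues; fix or baseline them before committing."
  exit 1
}
```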
For the original v3 breaking changes (Node 20+, removed Python / Rails / Vue profiles), see MIGRATION-v3.md.
| Capability | Ubon | ESLint | npm audit | Lovable scanner |
|---|---|---|---|---|
| LLM / vector-DB hardcoded secrets | ✅ | ❌ | ❌ | |
| Prompt-injection sinks | ✅ | ❌ | ❌ | ❌ |
| Server Actions / Edge runtime checks | ✅ | ❌ | ❌ | ❌ |
| Supabase RLS validation | ✅ | ❌ | ❌ | |
| Insecure cookies / CORS / redirects | ✅ | ❌ | ❌ | ❌ |
| Client env-var leaks (Next/Vite) | ✅ | ❌ | ❌ | ❌ |
| Accessibility basics | ✅ | ❌ | ❌ | |
| Dependency advisories (OSV) | ✅ | ❌ | ✅ | ❌ |
| MCP server for AI agents | ✅ | ❌ | ❌ | ❌ |
| Code style / formatting | ❌ | ✅ | ❌ | ❌ |
Use them together. ESLint covers code style; npm audit covers CVEs in your dependency tree; Ubon covers the gap that AI assistants regularly leave behind.
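The SARIF output slots directly into GitHub code scanning. A sketch of a workflow using the `--sarif` flag shown above (job and step names are illustrative, not part of Ubon):

```yaml
name: ubon
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    permissions:
      security-events: write    # required to upload SARIF results
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20      # Ubon v3 requires Node 20+
      - run: npx ubon@latest check --sarif out.sarif
        continue-on-error: true # upload findings even when the scan fails the build
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: out.sarif
```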
```bash
ubon agent install --cursor --write   # writes Cursor hooks + rules
```

Then point Cursor at the MCP server:
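One common shape for that wiring is Cursor's `mcpServers` config file. The snippet below is an assumption about the file name and keys; docs/MCP.md has the authoritative version:

```jsonc
// .cursor/mcp.json — hypothetical wiring; see docs/MCP.md for the exact shape
{
  "mcpServers": {
    "ubon": {
      "command": "npx",
      "args": ["-y", "ubon", "mcp"]
    }
  }
}
```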
Full Cursor + Lovable + comparison details in docs/INTEGRATIONS.md.
The demo fixture in examples/ai-harness-demo contains the kinds of issues AI agents often leave behind: an LLM key in source, a server-side fetch to a user-controlled URL, a misspelled Cursor hook, and a reusable agent command that pipes network output into a shell.

```bash
ubon check -d examples/ai-harness-demo --preset local
```

The expected rule IDs are checked by the test suite so the demo stays honest.
```bash
ubon init                      # writes ubon.config.json
ubon check --update-baseline   # accept current findings as baseline
ubon check --baseline .ubon-baseline.json --focus-new --fail-on error
```

```jsonc
// ubon.config.json
{
  "profile": "next",
  "minConfidence": 0.85,
  "failOn": "error",
  "disabledRules": ["VIBE003"],
  "exclude": ["legacy/**"]
}
```

For the JS variant (executes user code), pass `--allow-config-js` or set `UBON_ALLOW_CONFIG_JS=1`.
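For reference, a JS config might look like the sketch below, mirroring the JSON keys above; the loader's exact contract is in docs/CONFIG.md:

```js
// ubon.config.js — hypothetical JS equivalent of the JSON config;
// it executes as code, which is why it is gated behind --allow-config-js.
module.exports = {
  profile: 'next',
  minConfidence: 0.85,
  failOn: 'error',
  disabledRules: ['VIBE003'],
  // A JS config can compute values, e.g. scan more paths locally than in CI:
  exclude: process.env.CI ? ['legacy/**'] : ['legacy/**', 'examples/**'],
};
```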
- docs/CLI.md – every command and flag
- docs/START-HERE.md – two-minute setup and daily commands
- docs/AGENT-HARNESS.md – Cursor / Claude Code / Codex / MCP / hooks
- docs/AGENT-SEMANTICS.md – how Ubon models agent events, rules, and gates
- docs/PROGRAMMATIC.md – Node API, JSON/NDJSON, and MCP contracts
- docs/RULES.md – full rule glossary
- docs/CONFIG.md – config file schema
- docs/INTEGRATIONS.md – Cursor / Lovable / comparison
- docs/MCP.md – Model Context Protocol server
- docs/ADVANCED.md – profiles, suppressions, baselines, output schemas, release policy
- docs/RELEASE.md – release verification and publish checklist
- docs/VALIDATION.md – dogfood, fixtures, contracts, and repair loops
- MIGRATION-v3.md – upgrading from v2.x
- CHANGELOG.md – release history
- Node.js 20 or newer (v3 dropped Node 16/18)
- Git (for `--git-changed-since` and the `git-history` scanner)
- Optional: `@modelcontextprotocol/sdk` for `ubon mcp` – installed automatically as an `optionalDependency` of `ubon`. If your install flags skipped it, see docs/MCP.md.

Run `ubon doctor` to verify.
I'm Luisfer Romero Calero. I built Ubon because the gap between "AI shipped this" and "this is safe to deploy" keeps widening. The tool's name comes from the lotus (อุบล) in Thai – clarity in the middle of vibe-coded chaos.
If Ubon helps you ship safer apps, the highest praise is to wire it into your CI and your AI assistant, and tell me what it caught.
MIT – see LICENSE.
