Merged
14 changes: 7 additions & 7 deletions manifest.json
Original file line number Diff line number Diff line change
@@ -1,12 +1,12 @@
{
"version": "2",
"updated_at": "2026-04-07T12:45:02Z",
"updated_at": "2026-04-07T13:18:03Z",
"skills": {
"databricks-apps": {
"version": "0.1.1",
"description": "Databricks Apps development and deployment",
"experimental": false,
"updated_at": "2026-04-07T12:41:53Z",
"updated_at": "2026-04-07T13:17:59Z",
"files": [
"SKILL.md",
"agents/openai.yaml",
@@ -29,7 +29,7 @@
"version": "0.1.0",
"description": "Core Databricks skill for CLI, auth, and data exploration",
"experimental": false,
"updated_at": "2026-04-07T12:41:53Z",
"updated_at": "2026-04-07T13:17:41Z",
"files": [
"SKILL.md",
"agents/openai.yaml",
@@ -44,7 +44,7 @@
"version": "0.0.0",
"description": "Databricks Asset Bundles (DABs) for deploying and managing Databricks resources",
"experimental": false,
"updated_at": "2026-04-07T12:44:49Z",
"updated_at": "2026-04-07T13:17:41Z",
"files": [
"SKILL.md",
"agents/openai.yaml",
@@ -62,7 +62,7 @@
"version": "0.1.0",
"description": "Databricks Jobs orchestration and scheduling",
"experimental": false,
"updated_at": "2026-04-07T12:41:53Z",
"updated_at": "2026-04-07T13:17:41Z",
"files": [
"SKILL.md",
"agents/openai.yaml",
@@ -74,7 +74,7 @@
"version": "0.1.0",
"description": "Databricks Lakebase database development",
"experimental": false,
"updated_at": "2026-04-07T12:41:53Z",
"updated_at": "2026-04-07T13:17:59Z",
"files": [
"SKILL.md",
"agents/openai.yaml",
@@ -86,7 +86,7 @@
"version": "0.1.0",
"description": "Databricks Pipelines (DLT) for ETL and streaming",
"experimental": false,
"updated_at": "2026-04-07T12:41:53Z",
"updated_at": "2026-04-07T13:17:41Z",
"files": [
"SKILL.md",
"agents/openai.yaml",
23 changes: 21 additions & 2 deletions skills/databricks-apps/references/appkit/lakebase.md
@@ -132,9 +132,11 @@ export const appRouter = t.router({
});
```

> **Deploy first (App + Lakebase only)!** When your Databricks App uses Lakebase, the Service Principal must create and own the schema. Run `databricks apps deploy` before any local development. See **`databricks-lakebase`** skill's **Schema Permissions for Deployed Apps** for details.

## Schema Initialization

**Always create a custom schema** — the Service Principal has `CONNECT_AND_CREATE` permission but **cannot access the `public` schema**. Initialize tables on server startup:
**Always create a custom schema** — the Service Principal cannot access any existing schemas (including `public`). It must create the schema itself to become its owner. See **`databricks-lakebase`** skill's **Schema Permissions for Deployed Apps** for the full permission model and deploy-first workflow. Initialize tables on server startup:

```typescript
// server/server.ts — run once at startup before handling requests
@@ -180,6 +182,21 @@ const prisma = new PrismaClient({ adapter });

## Local Development

### Prerequisites (MUST verify before local development)

**This applies when your Databricks App uses Lakebase.** Run this check before any local development:

```bash
databricks apps get <APP_NAME> --profile <PROFILE>
```

Check the response for the `active_deployment` field. If it exists with `status.state` of `SUCCEEDED`, the app has been deployed. If `active_deployment` is missing, the app has never been deployed:
1. **STOP** — do not proceed with local development
2. Deploy first: `databricks apps deploy <APP_NAME> --profile <PROFILE>`
3. Wait for deployment to complete, then continue

If you skip this step, the Service Principal won't own the database schema. You'll create schemas under your credentials that the SP **cannot access** after deployment. See **`databricks-lakebase`** skill's **Schema Permissions for Deployed Apps** for the full workflow and recovery steps.
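
The check above can be scripted. A minimal sketch, assuming JSON output from the CLI (e.g. `databricks apps get <APP_NAME> -o json`); the interface models only the `active_deployment` field described above, and the helper name is illustrative:

```typescript
// Shape of the relevant part of the `databricks apps get` JSON response.
// Only the fields consulted by the deploy-first check are modeled here.
interface AppStatus {
  active_deployment?: {
    status?: { state?: string };
  };
}

// Returns true only when the app has an active deployment whose state is
// SUCCEEDED; a missing `active_deployment` means the app was never deployed,
// so local development must not proceed.
function hasSuccessfulDeployment(app: AppStatus): boolean {
  return app.active_deployment?.status?.state === "SUCCEEDED";
}
```

Wire this into a predev script that parses the CLI output and exits non-zero when the check fails, so the deploy-first rule is enforced automatically rather than by convention.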

The Lakebase env vars (`PGHOST`, `PGDATABASE`, etc.) are auto-set only when deployed. For local development, get the connection details from your endpoint and set them manually:

```bash
@@ -206,7 +223,9 @@ Load `server/.env` in your dev server (e.g. via `dotenv` or `node --env-file=server/.env`).

| Error | Cause | Solution |
|-------|-------|---------|
| `permission denied for schema public` | Service Principal lacks access to `public` | Create custom schema: `CREATE SCHEMA IF NOT EXISTS app_data` |
| `permission denied for schema public` | SP cannot access `public` schema | Create custom schema: `CREATE SCHEMA IF NOT EXISTS app_data` and qualify all table names with `app_data.` |
| `permission denied for schema <name>` | Schema was created by another role (e.g. you ran locally before deploying) | **Ask the user before dropping** — `DROP SCHEMA` deletes all data. See **`databricks-lakebase`** skill's **Schema Permissions for Deployed Apps** for options |
| Works locally but `permission denied` after deploy | Local credentials created the schema; the SP can't access schemas it doesn't own | **Ask the user before dropping** — warn about data loss, then deploy first. See **`databricks-lakebase`** skill's **Schema Permissions for Deployed Apps** for options |
| `connection refused` | Pool not connected or wrong env vars | Check `PGHOST`, `PGPORT`, `LAKEBASE_ENDPOINT` are set |
| `relation "X" does not exist` | Tables not initialized | Run `CREATE TABLE IF NOT EXISTS` at startup |
| App builds but pool fails at runtime | Env vars not set locally | Set vars in `server/.env` — see Local Development above |
22 changes: 22 additions & 0 deletions skills/databricks-lakebase/SKILL.md
@@ -148,6 +148,27 @@ Where `<BRANCH_NAME>` is the full resource name (e.g. `projects/<PROJECT_ID>/bra

For the full app development workflow, use the **`databricks-apps`** skill.

### Schema Permissions for Deployed Apps

When a Lakebase database is used by a deployed Databricks App, the app's Service Principal has `CAN_CONNECT_AND_CREATE` permission, which means it can create new objects but **cannot access any existing schemas or tables** (including `public`). The SP must create the schema itself to become its owner.

**ALWAYS deploy the app before running it locally.** This is the #1 source of Lakebase permission errors.

When deployed, the app's Service Principal runs the schema initialization SQL (e.g. `CREATE SCHEMA IF NOT EXISTS app_data`), creating the schema and tables — and becoming their **owner**. Only the owner (or a superuser) can access those objects.

**If you run locally first**, your personal credentials create the schema and become the owner. The deployed Service Principal then **cannot access it** — even though it has `CAN_CONNECT_AND_CREATE` — because it didn't create it and cannot access existing schemas.

**Correct workflow:**
1. **Deploy first**: `databricks apps deploy <APP_NAME> --profile <PROFILE>` — verify with `databricks apps get <APP_NAME> --profile <PROFILE>` that the app is deployed before proceeding
2. **Grant local access** *(if needed)*: if you're not the project creator, assign `databricks_superuser` to your identity via the Lakebase UI. Project creators already have sufficient access.
3. **Develop locally**: your credentials get DML access (SELECT/INSERT/UPDATE/DELETE) to SP-owned schemas

> **Note:** Project creators already have access to SP-owned schemas. Other team members need `databricks_superuser` (grants full DML but **not DDL**). If you need to alter the schema during local development, redeploy the app to apply DDL changes.

**If you already ran locally first** and hit `permission denied` after deploying: the schema is owned by your personal credentials, not the SP. **⚠️ Do NOT drop the schema without asking the user first** — dropping it (`DROP SCHEMA <name> CASCADE`) **deletes all data** in that schema. Ask the user how they'd like to proceed:
- **Option A (destructive):** Drop the schema and redeploy so the SP recreates it. Only safe if the schema has no valuable data.
- **Option B (manual):** The user can reassign ownership or manually grant the SP access, preserving existing data.
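
Option B can be sketched as ownership transfers run by a role with sufficient privileges (e.g. the current schema owner). The schema name, table placeholder, and SP role name below are illustrative, not values from this document:

```sql
-- Hypothetical recovery (Option B): hand the locally-created schema to the
-- app's Service Principal so the deployed app can use the existing data.
-- "app_data" and "<SP_CLIENT_ID>" are placeholders for your schema and the
-- SP's Postgres role name.
ALTER SCHEMA app_data OWNER TO "<SP_CLIENT_ID>";

-- Tables created before the transfer keep their original owner; transfer
-- each one as well.
ALTER TABLE app_data.<TABLE> OWNER TO "<SP_CLIENT_ID>";
```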

### Other Workflows

**Connect a Postgres client**
@@ -178,5 +199,6 @@ databricks postgres create-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID> <
|-------|----------|
| `cannot configure default credentials` | Use `--profile` flag or authenticate first |
| `PERMISSION_DENIED` | Check workspace permissions |
| `permission denied for schema <name>` | Schema owned by another role. Deploy the app first so the SP creates and owns the schema. See **Schema Permissions for Deployed Apps** above |
| Protected branch cannot be deleted | `update-branch` to set `spec.is_protected` to `false` first |
| Long-running operation timeout | Use `--no-wait` and poll with `get-operation` |