21 changes: 21 additions & 0 deletions README.md
@@ -38,13 +38,34 @@ npm run dev

Or configure them in-app via the settings panel.

### Using Ollama

openwork can use local Ollama models without an API key.

```bash
# Install and start Ollama
ollama serve

# Pull at least one local model
ollama pull llama3.1:8b

# Optional: point openwork at a non-default Ollama server
export OLLAMA_BASE_URL=http://127.0.0.1:11434

# OLLAMA_HOST also works, including host:port values like 127.0.0.1:11434
export OLLAMA_HOST=127.0.0.1:11434
```

To persist the setting across app restarts, add `OLLAMA_BASE_URL=...` or `OLLAMA_HOST=...` to `~/.openwork/.env`.
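
How these two variables could interact can be sketched as a small resolver. This is a hypothetical sketch, not the app's actual lookup (which lives in `src/main/storage` as `getOllamaBaseUrl` and may differ); the precedence and default shown here are assumptions based on the comments above.

```typescript
// Hypothetical sketch of OLLAMA_BASE_URL / OLLAMA_HOST resolution;
// the app's real helper (getOllamaBaseUrl) may behave differently.
function resolveOllamaBaseUrl(
  env: Record<string, string | undefined>
): string {
  // A full URL takes precedence over a bare host.
  if (env.OLLAMA_BASE_URL) return env.OLLAMA_BASE_URL
  // OLLAMA_HOST may be a bare host:port; prepend a scheme if missing.
  const host = env.OLLAMA_HOST
  if (host) return host.startsWith("http") ? host : `http://${host}`
  // Fall back to Ollama's default local endpoint.
  return "http://127.0.0.1:11434"
}
```

Under this precedence, `OLLAMA_BASE_URL` wins when both variables are set, and a bare `OLLAMA_HOST=127.0.0.1:11434` resolves to `http://127.0.0.1:11434`.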

## Supported Models

| Provider | Models |
| --------- | -------------------------------------------------------------------------------------- |
| Anthropic | Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.1, Claude Sonnet 4 |
| OpenAI | GPT-5.2, GPT-5.1, o3, o3 Mini, o4 Mini, o1, GPT-4.1, GPT-4o |
| Google | Gemini 3 Pro Preview, Gemini 3 Flash Preview, Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash Lite |
| Ollama | Any chat model available on your local Ollama server, such as `llama3.1:8b` or `qwen3:14b` |

## Contributing

105 changes: 65 additions & 40 deletions package-lock.json

Some generated files are not rendered by default.

3 changes: 2 additions & 1 deletion package.json
@@ -52,6 +52,7 @@
"@langchain/langgraph": "^1.0.15",
"@langchain/langgraph-checkpoint": "^1.0.0",
"@langchain/langgraph-sdk": "^1.5.3",
"@langchain/ollama": "^1.1.0",
"@langchain/openai": "^1.2.3",
"@radix-ui/react-context-menu": "^2.2.16",
"@radix-ui/react-dialog": "^1.1.15",
@@ -116,4 +117,4 @@
"picomatch": ">=4.0.4"
}
}
}
}
13 changes: 9 additions & 4 deletions src/main/agent/runtime.ts
@@ -1,10 +1,11 @@
/* eslint-disable @typescript-eslint/no-unused-vars */
import { createDeepAgent } from "deepagents"
import { getDefaultModel } from "../ipc/models"
import { getApiKey, getThreadCheckpointPath } from "../storage"
import { getApiKey, getOllamaBaseUrl, getThreadCheckpointPath } from "../storage"
import { ChatAnthropic } from "@langchain/anthropic"
import { ChatOpenAI } from "@langchain/openai"
import { ChatGoogleGenerativeAI } from "@langchain/google-genai"
import { ChatOllama } from "@langchain/ollama"
import { SqlJsSaver } from "../checkpointer/sqljs-saver"
import { LocalSandbox } from "./local-sandbox"

@@ -61,7 +62,7 @@ export async function closeCheckpointer(threadId: string): Promise<void> {
// Get the appropriate model instance based on configuration
function getModelInstance(
modelId?: string
): ChatAnthropic | ChatOpenAI | ChatGoogleGenerativeAI | string {
): ChatAnthropic | ChatOpenAI | ChatGoogleGenerativeAI | ChatOllama {
const model = modelId || getDefaultModel()
console.log("[Runtime] Using model:", model)

@@ -103,8 +104,12 @@ function getModelInstance(
})
}

// Default to model string (let deepagents handle it)
return model
const baseUrl = getOllamaBaseUrl()
console.log("[Runtime] Using Ollama base URL:", baseUrl)
return new ChatOllama({
model,
baseUrl
})
}

export interface CreateAgentRuntimeOptions {
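
The effect of the `getModelInstance` change above — unrecognized model IDs now route to `ChatOllama` instead of falling through as a bare string — can be sketched as a pure routing function. The provider prefixes here are illustrative assumptions, not the app's exact matching logic:

```typescript
// Illustrative provider routing; the prefix checks are assumptions,
// not the app's exact model-matching rules.
type Provider = "anthropic" | "openai" | "google" | "ollama"

function routeModel(model: string): Provider {
  if (model.startsWith("claude")) return "anthropic"
  if (model.startsWith("gpt") || /^o\d/.test(model)) return "openai"
  if (model.startsWith("gemini")) return "google"
  // After this PR, anything unrecognized is treated as a local Ollama
  // model rather than being returned as a plain string.
  return "ollama"
}
```

This is why no explicit Ollama model list is needed: any ID the other branches don't claim is handed to the local Ollama server.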