Adding ollama as a service to docker compose file #3

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open · wants to merge 3 commits into main
37 changes: 28 additions & 9 deletions README.md
@@ -47,18 +47,24 @@ Choose models based on your system capabilities:
| **Chat** | `phi3:mini` | ~2.3GB | 4GB | Low-resource systems |


### Installation Options

Choose your preferred installation method:

### Option 1: Direct Installation

**Prerequisite: Ollama (for local AI models)**

Install Ollama:
```bash
# macOS
brew install ollama

# Or download from https://ollama.com
```
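To confirm the install succeeded, you can check the CLI version (a quick sanity check; output format may vary by release):

```bash
# Should print something like "ollama version is 0.x.x"
ollama --version
```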
Start Ollama and install the required models:
```bash
ollama serve

ollama pull nomic-embed-text
ollama pull qwen3:14b
```
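Once the pulls finish, you can verify from a second terminal (since `ollama serve` stays in the foreground) that both models are available locally:

```bash
# Lists downloaded models; expect nomic-embed-text and qwen3:14b in the output
ollama list
```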

**Additional Prerequisites:**
- Python 3.8+
@@ -106,7 +108,10 @@ Choose your preferred installation method:

### Option 2: Docker Installation

With this option you don't need to install Ollama separately; Docker Compose starts it automatically as a service.

**Prerequisites:**
- Docker and Docker Compose
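A quick way to check that both are available (Compose v2 syntax is assumed throughout this section):

```bash
docker --version
docker compose version  # for legacy Compose v1, use `docker-compose --version`
```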

**Installation Steps:**
@@ -119,10 +124,24 @@ Choose your preferred installation method:

2. **Start with Docker Compose**:
```bash
# if you don't have a GPU
docker compose up

# if you have a GPU
docker compose -f docker-compose.yml -f docker-compose.gpu.yml up
```
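To confirm both containers came up, you can inspect the stack (service names as defined in `docker-compose.yml`):

```bash
# Both the app and the ollama service should show as running
docker compose ps

# Follow the Ollama logs while the service warms up
docker compose logs -f ollama
```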

3. **Install models**:

```bash
# embedding model
docker exec -it ollama ollama pull nomic-embed-text

# chat model
docker exec -it ollama ollama pull qwen3:14b
```
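You can verify that the models landed in the `ollama` volume the same way:

```bash
# Lists the models stored inside the container
docker exec -it ollama ollama list
```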

4. **Open your browser** to `http://localhost:8501`
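If the page doesn't load, a first check is whether the app is answering on the mapped port at all:

```bash
# A 200 response means the app container is serving on 8501
curl -I http://localhost:8501
```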

## 📖 How to Use

10 changes: 10 additions & 0 deletions docker-compose.gpu.yml
@@ -0,0 +1,10 @@
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
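This override only adds a GPU reservation for the `ollama` service; it assumes an NVIDIA GPU and the NVIDIA Container Toolkit on the host. One way to confirm the container actually sees the device:

```bash
# Should print the usual GPU table if passthrough works
docker exec -it ollama nvidia-smi
```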
22 changes: 18 additions & 4 deletions docker-compose.yml
@@ -11,10 +11,24 @@ services:
- "8501:8501"
environment:
# Configure Ollama connection (both env vars for compatibility)
- OLLAMA_HOST=http://host.docker.internal:11434
- OLLAMA_BASE_URL=http://host.docker.internal:11434
- OLLAMA_HOST=ollama:11434
- OLLAMA_BASE_URL=http://ollama:11434
restart: unless-stopped
volumes:
- ./data:/app/data # For persistent data storage
extra_hosts:
- "host.docker.internal:host-gateway"

ollama:
image: docker.io/ollama/ollama:latest
container_name: ollama
pull_policy: always
tty: true
restart: always
environment:
- OLLAMA_KEEP_ALIVE=24h
- OLLAMA_HOST=0.0.0.0
- OLLAMA_PORT=11434
volumes:
- ollama:/root/.ollama

volumes:
ollama: { }
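Note that the `ollama` service publishes no host ports: the app reaches it over the Compose network at `http://ollama:11434`, and host-side interaction goes through `docker exec`, as in the README steps above. A quick smoke test that the server is up inside the container:

```bash
# Shows models currently loaded in memory (empty output is fine right after startup)
docker exec -it ollama ollama ps
```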
8 changes: 0 additions & 8 deletions docker.env.example

This file was deleted.