@@ -47,18 +47,24 @@ Choose models based on your system capabilities:
| **Chat** | `phi3:mini` | ~2.3GB | 4GB | Low-resource systems |

+### Installation Options
+
+Choose your preferred installation method:
+
+### Option 1: Direct Installation

-### Prerequisites (Required for Both Installation Methods)
+**Prerequisite: Ollama (for local AI models)**
+
+Install Ollama:

-**1. Install Ollama** (for local AI models):

```bash
# macOS
brew install ollama

# Or download from https://ollama.com
```
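If you want to confirm the install before moving on, a quick sanity check (assuming the `ollama` binary landed on your PATH):

```bash
# print the installed version; any output means the CLI is reachable
ollama --version
```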
+Start Ollama and install the required models:

-**2. Start Ollama and install required models**:

```bash
ollama serve
@@ -69,11 +75,7 @@ ollama pull nomic-embed-text
ollama pull qwen3:14b
```
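Once the pulls finish, it can be worth verifying that the models are actually available. A minimal check, assuming Ollama is serving on its default port 11434:

```bash
# list the locally installed models
ollama list

# or ask the running server directly over its REST API
curl http://localhost:11434/api/tags
```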

-### Installation Options
-
-Choose your preferred installation method:

-### Option 1: Direct Installation

**Additional Prerequisites:**
- Python 3.8+
@@ -106,7 +108,10 @@ Choose your preferred installation method:
### Option 2: Docker Installation
-**Additional Prerequisites:**
+With this option, you don't need to install Ollama separately; it is started
+automatically by Docker Compose.
+
+**Prerequisites:**
- Docker and Docker Compose
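If you're unsure whether both are installed, a quick check (assuming the standalone `docker-compose` binary; newer setups ship it as the `docker compose` plugin instead):

```bash
docker --version
docker-compose --version   # or: docker compose version
```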
**Installation Steps:**
@@ -122,7 +127,17 @@ Choose your preferred installation method:
docker-compose up
```
-3. **Open your browser** to `http://localhost:8501`
+3. **Install models**
+
+```bash
+# embedding model
+docker exec -it ollama ollama pull nomic-embed-text
+
+# chat model
+docker exec -it ollama ollama pull qwen3:14b
+```
+
+4. **Open your browser** to `http://localhost:8501`
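To verify that the models from step 3 were pulled successfully, you can list what the containerized Ollama instance has available (assuming the container is named `ollama`, as in the commands above):

```bash
# runs `ollama list` inside the running container
docker exec -it ollama ollama list
```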
## 📖 How to Use