Feature Request: Integrate Ollama for Local LLM Support #9

Open · lazarevtill opened this issue Apr 6, 2025 · 0 comments

lazarevtill (Contributor) commented on Apr 6, 2025:
Feature Request: Integrate Ollama for Local LLM Support

Overview

Add support for [Ollama](https://ollama.ai) to enable users to run local language models like Llama 3, Mistral, and others directly from Zola, enhancing privacy and reducing API costs.

Motivation

Currently, Zola supports cloud-hosted providers such as OpenAI and Mistral, which require API keys and internet connectivity. Adding Ollama support would:

  • Allow users to run models locally on their own hardware
  • Eliminate API costs for locally run models
  • Enhance privacy by keeping data local
  • Provide offline capability for certain use cases
  • Extend the range of models available to users

Implementation Details

1. Add Ollama SDK

Install the community Ollama provider for the Vercel AI SDK, ollama-ai-provider (https://sdk.vercel.ai/providers/community-providers/ollama).
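
For orientation, here is a minimal sketch of using the provider on its own, assuming the ollama-ai-provider community package and the AI SDK's generateText helper; the model name and prompt are just examples:

// Minimal sketch, assuming the ollama-ai-provider community package is installed
import { createOllama } from "ollama-ai-provider"
import { generateText } from "ai"

// Point the provider at a local Ollama daemon (default API root shown)
const ollama = createOllama({
  baseURL: "http://localhost:11434/api",
})

// Generate a completion with a locally pulled model, e.g. after `ollama pull llama3`
const { text } = await generateText({
  model: ollama("llama3"),
  prompt: "Say hello from a local model.",
})
console.log(text)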

2. Update Configuration

Modify app/lib/config.ts to add Ollama models and provider:

// Add to imports
import { createOllama } from "ollama-ai-provider"
import OllamaIcon from "@/components/icons/ollama"

// Shared Ollama provider instance; OLLAMA_BASE_URL can point at a remote daemon
const ollama = createOllama({
  baseURL: `${process.env.OLLAMA_BASE_URL || "http://localhost:11434"}/api`,
})

// Add to MODELS array
{
  id: "ollama/llama3",
  name: "Llama 3 (Local)",
  provider: "ollama",
  features: [
    {
      id: "file-upload",
      enabled: false,
    },
  ],
  api_sdk: ollama("llama3"),
},
{
  id: "ollama/mistral",
  name: "Mistral (Local)",
  provider: "ollama",
  features: [
    {
      id: "file-upload",
      enabled: false,
    },
  ],
  api_sdk: ollama("mistral"),
},

// Add to PROVIDERS array
{
  id: "ollama",
  name: "Ollama",
  icon: OllamaIcon,
},

3. Create Ollama Icon Component

Create a new file at components/icons/ollama.tsx:

import * as React from "react"
import type { SVGProps } from "react"

const Icon = (props: SVGProps<SVGSVGElement>) => (
  <svg
    xmlns="http://www.w3.org/2000/svg"
    width={64}
    height={64}
    viewBox="0 0 64 64"
    fill="none"
    {...props}
  >
    <g clipPath="url(#a)">
      <path
        fill="#3D59A1"
        d="M32 0C14.327 0 0 14.327 0 32c0 17.673 14.327 32 32 32 17.673 0 32-14.327 32-32C64 14.327 49.673 0 32 0Zm-5.333 47.133c-6.488 0-11.733-5.245-11.733-11.733 0-6.488 5.245-11.733 11.733-11.733 6.488 0 11.733 5.245 11.733 11.733 0 6.488-5.245 11.733-11.733 11.733Zm20.313.713c-.99 0-1.778-.883-1.778-1.982 0-1.099.789-1.982 1.778-1.982.99 0 1.778.883 1.778 1.982 0 1.099-.789 1.982-1.778 1.982Zm4.8-8.4c-.99 0-1.778-.883-1.778-1.982 0-1.099.789-1.982 1.778-1.982.99 0 1.778.883 1.778 1.982 0 1.099-.789 1.982-1.778 1.982Zm-1.92-14.4c0 1.767-1.408 3.2-3.14 3.2-1.732 0-3.14-1.433-3.14-3.2 0-1.767 1.408-3.2 3.14-3.2 1.732 0 3.14 1.433 3.14 3.2Zm-9.28-12.8c0 2.209-1.76 4-3.92 4-2.16 0-3.92-1.791-3.92-4 0-2.209 1.76-4 3.92-4 2.16 0 3.92 1.791 3.92 4Z"
      />
    </g>
    <defs>
      <clipPath id="a">
        <path fill="#fff" d="M0 0h64v64H0z" />
      </clipPath>
    </defs>
  </svg>
)
export default Icon

4. Environment Variable Configuration

Add the Ollama environment variable to .env.local and to the documentation:

# Ollama configuration
OLLAMA_BASE_URL=http://localhost:11434  # Default URL for local Ollama

5. Add Ollama Health Check Endpoint (Optional)

Create a new API endpoint at app/api/ollama-health/route.ts:

export async function GET() {
  const ollamaUrl = process.env.OLLAMA_BASE_URL || "http://localhost:11434"

  try {
    // /api/tags lists the models that have been pulled into the local Ollama instance
    const response = await fetch(`${ollamaUrl}/api/tags`)

    if (!response.ok) {
      return new Response(
        JSON.stringify({ error: "Ollama service is not responding correctly" }),
        { status: response.status }
      )
    }

    const data = await response.json()
    return new Response(
      JSON.stringify({ status: "available", models: data.models }),
      { status: 200 }
    )
  } catch (err) {
    return new Response(
      JSON.stringify({
        error: "Failed to connect to Ollama service",
        details: err instanceof Error ? err.message : String(err),
      }),
      { status: 500 }
    )
  }
}
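
On the client, this endpoint can back a small helper that the model selector consults before offering Ollama models; a rough sketch follows, where the helper name and file location are placeholders rather than existing Zola code:

// Hypothetical client-side helper: returns the names of locally pulled models,
// or an empty list when the Ollama daemon is unreachable, so callers can fall
// back gracefully instead of surfacing errors.
export async function getLocalOllamaModels(): Promise<string[]> {
  try {
    const res = await fetch("/api/ollama-health")
    if (!res.ok) return []

    const data = await res.json()
    // Ollama's /api/tags payload lists models as [{ name: "llama3:latest", ... }]
    return (data.models ?? []).map((m: { name: string }) => m.name)
  } catch {
    return []
  }
}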

6. Docker Compose Integration

Update docker-compose.yml to add Ollama service:

services:
  # Existing Zola service...
  # (when Zola runs inside this Compose network, set OLLAMA_BASE_URL=http://ollama:11434
  # so it reaches the Ollama container by service name rather than localhost)

  # Add Ollama service
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama_data:/root/.ollama
    ports:
      - "11434:11434"
    restart: unless-stopped

volumes:
  ollama_data:

Expected Behavior

  • Users will see Ollama models in the model selector dropdown
  • When Ollama is running locally, users can select and use any locally available model
  • Docker users can run both Zola and Ollama in containers for a complete local setup

Documentation Updates Needed

  • Update README.md with Ollama setup instructions
  • Add Ollama configuration to environment variable docs
  • Include Docker Compose instructions for running with Ollama

Considerations

  • Ollama needs to be running separately (or via Docker Compose)
  • Default fallbacks should be graceful when Ollama is not available
  • The UI should indicate when models are local vs. cloud-based (a sketch covering both of these points follows this list)
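
One possible shape for both considerations, building on the getLocalOllamaModels helper sketched under step 5; the import paths and the badge field are assumptions for illustration, not existing Zola APIs:

// Minimal sketch: hide Ollama entries when the local daemon reports no matching
// model, and tag every entry so the selector can label local vs. cloud models.
import { MODELS } from "@/lib/config" // path is an assumption
import { getLocalOllamaModels } from "@/lib/ollama" // hypothetical helper from step 5

export async function getSelectableModels() {
  const localModels = await getLocalOllamaModels()

  return MODELS.filter((model) => {
    if (model.provider !== "ollama") return true
    // "ollama/llama3" -> "llama3"; local tags look like "llama3:latest"
    const shortName = model.id.split("/")[1]
    return localModels.some((name) => name.startsWith(shortName))
  }).map((model) => ({
    ...model,
    badge: model.provider === "ollama" ? "Local" : "Cloud",
  }))
}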

Related Issues

  • None
