
Ollama models are not working with LocalLLM #1358

Open
anilmadishetty2498 opened this issue Sep 5, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@anilmadishetty2498

System Info

python=3.11.7

🐛 Describe the bug

from pandasai import Agent
from pandasai.llm.local_llm import LocalLLM

llm = LocalLLM(api_base="http://localhost:11434/v1",
               model="llama3")

db = Agent(scm_vc, config={"llm": llm})

This is not working.


dosubot bot commented Sep 5, 2024

The Ollama models are not working with LocalLLM because LocalLLM is specifically designed to interface with OpenAI's API. The LocalLLM class initializes an OpenAI client and uses it to generate chat completions. There is no implementation or support for Ollama models in this class.

class LocalLLM(LLM):
    def __init__(self, api_base: str, model: str = "", api_key: str = "", **kwargs):
        if not api_key:
            api_key = "dummy"

        self.model = model
        self.client = OpenAI(base_url=api_base, api_key=api_key).chat.completions
        self._invocation_params = kwargs

The self.client is set to OpenAI(base_url=api_base, api_key=api_key).chat.completions, which means it is configured to use OpenAI's chat completion endpoint. Therefore, it cannot work with Ollama models without modification to support Ollama's API [1].
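Since LocalLLM just wraps the OpenAI client with a configurable base_url, one way to check whether the problem is in pandasai or in the endpoint itself is to call Ollama's OpenAI-compatible /v1 API directly, which is essentially what LocalLLM does internally. A rough sketch, assuming Ollama is running locally on the default port and llama3 has been pulled:

```python
from openai import OpenAI

# Same client configuration LocalLLM builds internally; any non-empty
# api_key works, since Ollama does not validate it.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # must match a model pulled with `ollama pull llama3`
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```

If this call succeeds, the endpoint is fine and the issue is in how LocalLLM or the Agent is configured, as the comment below suggests.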

To continue talking to Dosu, mention @dosu.

@dosubot dosubot bot added the bug Something isn't working label Sep 5, 2024
@arquehi

arquehi commented Oct 8, 2024

import os

from pandasai import SmartDataframe
from pandasai.llm.local_llm import LocalLLM

# e.g. OLLAMA_HOST=http://localhost:11434 and OLLAMA_MODEL_NAME=llama3
ollama_host = os.getenv('OLLAMA_HOST') + "/v1"
ollama_model = os.getenv('OLLAMA_MODEL_NAME')
llm = LocalLLM(api_base=ollama_host, model=ollama_model, api_key="ollama")
config = {
    "enable_cache": False,
    "verbose": True,
    "open_charts": False,
    "save_charts": True,
    "llm": llm,
    "response_parser": CustomResponseParser,
}
sdf_agent = SmartDataframe(df, config=config)

This is my code and it works. Sharing it with you, good luck.

@CongxianYang

Is there an example of CustomResponseParser?
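Not from this thread, but a minimal sketch of what such a parser might look like, assuming pandasai's ResponseParser base class (the module path and hook names can differ between versions):

```python
from pandasai.responses.response_parser import ResponseParser


class CustomResponseParser(ResponseParser):
    """Return raw values instead of pandasai's default formatted output."""

    def __init__(self, context) -> None:
        super().__init__(context)

    def format_dataframe(self, result):
        # result is a dict like {"type": "dataframe", "value": <DataFrame>}
        return result["value"]

    def format_plot(self, result):
        # result["value"] is the path to the saved chart when save_charts=True
        return result["value"]

    def format_other(self, result):
        return result["value"]
```

Passing this class as "response_parser" in the config, as in the comment above, would then return the raw DataFrame, chart path, or string instead of the default formatted response.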
