
[Question]: Model Ollama cannot connect #333

Open
ginisksam opened this issue Apr 12, 2024 · 43 comments
Labels
🙋‍♀️ question Further information is requested

Comments

@ginisksam

Describe your problem

The LLM options are limited, though. I have an Ollama (Mistral) instance running at 127.0.0.1:11434 but cannot add Ollama as a model in RAGFlow. Please assist. This software is very good and flexible for document split-chunk-semantic embedding. Many thanks.

@ginisksam ginisksam added the 🙋‍♀️ question Further information is requested label Apr 12, 2024
@KevinHuSh
Collaborator

Do you mean on the demo website or a local deployment?
If on the demo website, 127.0.0.1 is not an accessible IP. Make sure the server deploying Ollama has an internet-accessible IP address.
If you deploy RAGFlow locally, make sure Ollama and RAGFlow are on the same LAN and can communicate with each other.
A correct Ollama IP and port is the key.
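
A quick way to test whether that IP and port are actually reachable is a plain curl call. This is only a sketch: the address placeholder and the ragflow-server container name are assumptions to substitute with your own setup.

    # from the host machine (replace <OLLAMA_IP> with the address Ollama listens on)
    curl http://<OLLAMA_IP>:11434
    # a healthy instance answers with the text "Ollama is running"

    # from inside the RAGFlow container, which is what matters for the model setup dialog
    docker exec -it ragflow-server curl http://<OLLAMA_IP>:11434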

@ginisksam
Author

ginisksam commented Apr 12, 2024

Locally deployed. The error is as follows:

hint : 102
Fail to access model(mistral).ERROR: [Errno 111] Connection refused

As you know, Ollama is really popular now for local machines.

OK, I got your message. An IP reachable on the same LAN is the key.
Will try restarting Ollama as root with OLLAMA_HOST=0.0.0.0:11434 ollama serve

Thanks

@shaoxinghua0623

shaoxinghua0623 commented Apr 12, 2024

Hello, I got the same problem.

hint : 102
Fail to access model(qwen:14b).ERROR: [Errno 111] Connection refused

I set Environment="OLLAMA_HOST=0.0.0.0" in the Ollama service file.
When I open 0.0.0.0:11434 in a browser, it shows that Ollama is running.

I still couldn't add the model on the web.
Could you help me, thx.

(screenshots attached)

@mjiulee

mjiulee commented Apr 12, 2024

Thanks a lot!

I gave it a try: connecting to Ollama fails, but after switching on "Does it support Vision?" the model can be added successfully.

However, the Ollama model is not listed among the chat options, so that "success" is probably an illusion.

In the chat configuration, the Ollama model I just added cannot be selected.
(screenshot attached)

Hope this can be improved.

@shaoxinghua0623

Yes, I have the same problem as you. @mjiulee

@mjiulee

mjiulee commented Apr 12, 2024

@shaoxinghua0623

I tried again and, magically, it works now.

Just change the URL to the IP address of the server where Ollama is installed, for example: http://192.168.0.100:11434

Then it's OK~

@shaoxinghua0623

@mjiulee

It really works, thank you, brother!

@matheospower

matheospower commented Apr 12, 2024

@shaoxinghua0623
I have the same issue on Ubuntu 22.04. Did the above resolve your issue?
If yes, can you please help me find the appropriate IP for the Ollama URL?

@shaoxinghua0623

shaoxinghua0623 commented Apr 12, 2024

@matheospower
You can use the ifconfig command in the terminal to find the IP of your Ubuntu machine.
The Ollama base URL is then http://<IP of your Ubuntu>:11434.
The IP of your Ubuntu machine is not 0.0.0.0 or 127.0.0.1.
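
For example, a minimal sketch of that lookup (the interface name eth0 and the 192.168.0.100 address are just examples, not values from this thread):

    # show the LAN address of the host
    ip -4 addr show eth0 | grep inet      # or: ifconfig eth0
    # suppose it prints 192.168.0.100 -> the Ollama base URL becomes
    #   http://192.168.0.100:11434
    curl http://192.168.0.100:11434       # should answer "Ollama is running"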

@matheospower

@matheospower you can use the command ifconfig on the terminal to find the IP of Ubuntu. And the Ollama base url is http://IP of your Ubuntu:11434. IP of your Ubuntu is not 0.0.0.0 or 127.0.0.1

Thank you for the answer! Unfortunately, this did not resolve my problem. Not sure if I need to open a new issue, but I will post it here.

My problem is that I get stuck in the pop-up for adding an Ollama model. I tested the Ollama service (it is running) with curl from both outside and inside the ragflow-server container, and it seems fine and reachable. After setting the URL in the pop-up and clicking OK, it loads for some time and then gives me a connection time-out. Also, I cannot see anything in docker logs -f ragflow-server or in the ragflow-logs directory.

If anyone has had a similar issue or can give a hint on how to troubleshoot, please let me know!

@fredrousseau

Hi, same issue here.... I tested using http://host.docker.internal:11434/ as the base URL (that's probably the way to go, especially in a Docker deployment model), but I got the error "Hint 102: Fail to access model(/mistral).ERROR: [Errno -2] Name or service not known"...

@fredrousseau

Found a way to solve that issue: I had to change my Ollama settings to Environment="OLLAMA_HOST=PRIVATEIP" to get it exposed... it looks like if Ollama listens only on the loopback (127.0.0.1), RAGFlow is not able to reach it.
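
For reference, a sketch of that change on a systemd-managed install (the 0.0.0.0 and 192.168.0.100 values are examples; use your own private IP):

    # open an override for the Ollama unit and add the environment variable
    sudo systemctl edit ollama.service
    #   [Service]
    #   Environment="OLLAMA_HOST=0.0.0.0"   # or a specific LAN IP such as 192.168.0.100
    sudo systemctl daemon-reload
    sudo systemctl restart ollama.service
    curl http://192.168.0.100:11434         # should answer "Ollama is running"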

@ShawnHoo7256

ShawnHoo7256 commented Apr 13, 2024

To solve this problem:

  1. Make sure Ollama itself is OK.
  2. Configure as follows:

(screenshot of the configuration attached)

@ginisksam
Author

ginisksam commented Apr 13, 2024

Found a way to solve that issue , I had to change my ollama settings to "Environment="OLLAMA_HOST=PRIVATEIP" to get it exposed ... it looks like if you listen on the loopback (127.0.0.1) ragflow is not able to reach it

Kool. Will give it a try.

FINDINGS:
Just discovered that my existing Ollama, which works well with LangChain, is not running at the root level.

If I edit my ollama.service file, set Environment="OLLAMA_HOST=PRIVATEIP" and run systemctl start ollama.service, then in the browser PRIVATEIP:11434 => "Ollama is running". Fine.

But in the terminal, ollama list shows all the models are missing!!!
Case in point: can Ollama reside under both root and a user, and serve at the root or the user level separately at any one time, without affecting each other?
OS: Linux Mint 21.3 (newbie)

@OmegAshEnr01n

Found a way to solve that issue , I had to change my ollama settings to "Environment="OLLAMA_HOST=PRIVATEIP" to get it exposed ... it looks like if you listen on the loopback (127.0.0.1) ragflow is not able to reach it

This didn't work for me either.

@hiwujie

hiwujie commented Apr 19, 2024

If you are in Docker and cannot connect to a service on your host machine that is bound to a local interface or loopback address:

  • localhost
  • 127.0.0.1
  • 0.0.0.0

Then, inside Docker, you need to replace that localhost part with host.docker.internal. For example, if Ollama is running on the host machine, bound to http://127.0.0.1:11434/, you should put http://host.docker.internal:11434 into the connection URL in the Ollama setup.

Important
On Linux, http://host.docker.internal:xxxx does not work.

@ganchun1130

I can solve this problem!
It's very simple!
In the base URL field, enter, for example: http://192.168.0.100:11434/v1
Note: you must append v1. My guess is that RAGFlow imitates the OpenAI calling convention, and Ollama's own OpenAI-compatible service is also exposed under v1!
With that, the model can be added!
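
As a quick sanity check of that v1 path, assuming Ollama's OpenAI-compatible endpoint is what you are targeting (the address and the mistral model name are examples):

    curl http://192.168.0.100:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "mistral", "messages": [{"role": "user", "content": "hello"}]}'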

@BooleanMind

BooleanMind commented Apr 28, 2024

If you are in docker and cannot connect to a service running on your host machine running on a local interface or loopback:

  • localhost
  • 127.0.0.1
  • 0.0.0.0

Then in docker you need to replace that localhost part with host.docker.internal. For example, if running Ollama on the host machine, bound to http://127.0.0.1:11434/ you should put http://host.docker.internal:11434 into the connection URL in OLLAMA setup.

Important On linux http://host.docker.internal:xxxx does not work.

This suggestion is so crucial that it should be added to /docs/ollama.md, imho.

@OmegAshEnr01n

How can I make it work on Linux lol.

@gaspardpetit

gaspardpetit commented May 2, 2024

If you are in docker and cannot connect to a service running on your host machine running on a local interface or loopback:

  • localhost
  • 127.0.0.1
  • 0.0.0.0

Then in docker you need to replace that localhost part with host.docker.internal. For example, if running Ollama on the host machine, bound to http://127.0.0.1:11434/ you should put http://host.docker.internal:11434 into the connection URL in OLLAMA setup.

Important On linux http://host.docker.internal:xxxx does not work.

host.docker.internal can work on Linux if you modify the docker-compose.yml by adding extra_hosts like this:

    extra_hosts:
      - "host.docker.internal:host-gateway"

Once host-gateway is mapped to host.docker.internal, you should be able to reach an Ollama instance running on the same host as RAGFlow (but not within the docker-compose) by referring to it as http://host.docker.internal:11434/
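
Put together, a sketch of where that snippet goes and how to apply it (the ragflow service name, the docker/docker-compose.yml path and the ragflow-server container name are assumptions about the checkout; adjust to yours):

    # docker/docker-compose.yml (excerpt)
    #   services:
    #     ragflow:
    #       ...
    #       extra_hosts:
    #         - "host.docker.internal:host-gateway"

    # recreate the container so the mapping takes effect, then verify from inside it
    docker compose -f docker/docker-compose.yml up -d
    docker exec -it ragflow-server curl http://host.docker.internal:11434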

@tslyellow

Do you mean on the demo website or a local deployment? If on the demo website, 127.0.0.1 is an IP that cannot be accessed. Make sure the server deploying Ollama has an internet-accessible IP address. If you deploy RAGFlow locally, make sure Ollama and RAGFlow are on the same LAN and can communicate with each other. A correct Ollama IP and port is the key.

Can't you use 'http://localhost:11434' to connect to Ollama on the demo? You can only use 'http://localhost:11434' to connect to Ollama after a local deployment, is that right? If I want to add ollama3 to the demo, what is the best way?

@tslyellow

http://host.docker.internal:11434

Hi, I run RAGFlow via Docker and Ollama locally on Windows. I set the URL to 'http://host.docker.internal:11434' and still get an error. Do you know what's going on? If you can, could you help me out?
(screenshot attached)

@gaspardpetit

@tslyellow On Windows, when running a Linux container under WSL, if you want to reach a port on the Windows host you need to add --add-host=host.docker.internal:host-gateway to your docker command line and target host.docker.internal (as you are doing above). If you are launching the container from docker compose, then see my post above about using extra_hosts.

If it still does not work, it may be that Ollama is bound to 127.0.0.1 by default, so the port may not be available outside of your loopback device. To instruct Ollama to listen on all network interfaces (including the Docker virtual network), you need to set the OLLAMA_HOST environment variable to 0.0.0.0. Note that this will also expose Ollama to incoming traffic from outside your PC, so you may want to ensure that you have proper firewall settings in place. Alternatively, you may choose to bind Ollama to your WSL IP, which can be found by running ipconfig.
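
A rough sketch of those two steps on the Windows side (PowerShell; the placeholders in angle brackets are mine, not commands from this thread):

    # make Ollama listen on all interfaces, then restart the Ollama app/service
    setx OLLAMA_HOST "0.0.0.0"

    # when starting the container by hand instead of via docker compose
    docker run --add-host=host.docker.internal:host-gateway <other options> <ragflow image>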

@zzlTim

zzlTim commented Jul 24, 2024

ifconfig

Have you solved it?

@zzlTim

zzlTim commented Jul 24, 2024

How can I make it work on Linux, haha.

Have you solved it?

@Stella12121

@shaoxinghua0623

I tried again and, magically, it works now.

Just change the URL to the IP address of the server where Ollama is installed, for example: http://192.168.0.100:11434

Then it's OK~

Hello, I have already changed the URL to the IP address of the server where Ollama is installed, but I get the problem shown in the screenshot. Do you have any ideas? Thanks a lot!
(screenshot attached)

@zzlTim

zzlTim commented Jul 25, 2024

http://host.docker.internal:11434

Hi, I run RAGFlow via Docker and Ollama locally on Windows. I set the URL to "http://host.docker.internal:11434" but still get an error. Do you know what's going on? If you can, could you help me? (screenshot attached)

Have you solved it?

@zzlTim

zzlTim commented Jul 25, 2024

Exactly the same steps, even on a fresh system and a new machine. It feels like the problem is in their framework.

@yangboz
Contributor

yangboz commented Aug 5, 2024

@shaoxinghua0623

I tried again and, magically, it works now.

Just change the URL to the IP address of the server where Ollama is installed, for example: http://192.168.0.100:11434

Then it's OK~

A few months later, it magically stopped working again :(

@cidxb

cidxb commented Aug 20, 2024

Same problem here...

@yangboz
Contributor

yangboz commented Aug 27, 2024

Exactly the same steps, even on a fresh system and a new machine. It feels like the problem is in their framework.

To summarize the advice so far: make sure the Ollama service is started on the server, open port 11434 in the firewall, and allow external connections to that port. Good luck.
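
A sketch of that firewall step on Ubuntu with ufw (whether this is needed, and which tool applies, depends on your distribution):

    # allow inbound connections to Ollama's port
    sudo ufw allow 11434/tcp

    # on firewalld-based systems the equivalent would be
    #   sudo firewall-cmd --add-port=11434/tcp --permanent && sudo firewall-cmd --reload

    sudo systemctl restart ollama   # assuming Ollama runs as a systemd service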

@yi0n

yi0n commented Sep 5, 2024

You need to check how your Docker network, and therefore the container's network, is set up.
Check with: docker inspect ollama
You will see a Gateway IP address that Docker chose for that container, unless you did some custom networking configuration.
It is probably something like "172.17.0.1".
That is the address to use for the base URL: http://172.17.0.1:11434
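
To pull just that address out of the inspect output, something like this should work (the container name ollama is an assumption; the format string is standard docker inspect Go templating):

    # print only the gateway of the container's network(s)
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' ollama
    # e.g. 172.17.0.1  ->  base URL http://172.17.0.1:11434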

@npnpatidar

npnpatidar commented Sep 7, 2024

I made it work like this, using the comments above:

Add this to docker-compose.yml, then use docker inspect to find the IP, which can be entered in the RAGFlow UI:

    ollama:
      image: ollama/ollama
      ports:
        - 11434:11434
      environment:
        - OLLAMA_HOST=0.0.0.0
      networks:
        - ragflow
      restart: always
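
After docker compose up -d, a short sketch of the follow-up steps implied above (the container name ollama and the mistral model are examples; compose may prefix the container name with your project name):

    # find the container's IP on the compose network
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ollama

    # pull a model inside the container so RAGFlow has something to use
    docker exec -it ollama ollama pull mistral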

@ppoccian

(screenshot attached)
Have you tried this?

@yangboz
Contributor

yangboz commented Oct 23, 2024

Yes, on Ubuntu, I configured it in the systemd service and restarted it to make it work.

@LIUBINfighter

So everyone is stuck here... Can't we just put Ollama directly inside RAGFlow's Docker setup?

@LIUBINfighter

I got it working, guys. First, learn from this person's approach:

#I made it work like this using above comments:

#add this to docker-compose and then use docker inspect to find IP which can be added in ragflow ui

ollama: image: ollama/ollama ports: - 11434:11434 environment: - OLLAMA_HOST=0.0.0.0 networks: - ragflow restart: always

This is the simplest way to run Ollama inside this Docker setup.
Then run

sudo docker ps

Check the running containers, note the name and the IP of the Ollama container, and enter 172.xx.xx.xx:11434 on the web page.

If you see this, you're halfway there.

(screenshot attached)

Enter the container:

sudo docker exec -it <docker-ollama-name> /bin/bash

Then use it as usual and pull the LLM you want.

@GuokaiLiu

I got it working, guys. First, learn from this person's approach:

#I made it work like this using above comments:
#add this to docker-compose and then use docker inspect to find IP which can be added in ragflow ui
ollama: image: ollama/ollama ports: - 11434:11434 environment: - OLLAMA_HOST=0.0.0.0 networks: - ragflow restart: always

This is the simplest way to run Ollama inside this Docker setup. Then run

sudo docker ps

Check the running containers, note the name and the IP of the Ollama container, and enter 172.xx.xx.xx:11434 on the web page.

If you see this, you're halfway there.

(screenshot attached)

Enter the container:

sudo docker exec -it <docker-ollama-name> /bin/bash

Then use it as usual and pull the LLM you want.

Thanks, but what should I do if I want to use the Ollama running on the host machine instead?

@GuokaiLiu

#I made it work like this using above comments:

#add this to docker-compose and then use docker inspect to find IP which can be added in ragflow ui

ollama: image: ollama/ollama ports: - 11434:11434 environment: - OLLAMA_HOST=0.0.0.0 networks: - ragflow restart: always

Thanks. Will this require pulling the models inside the Docker container?

@GuokaiLiu

You need to make sure, how your dockers network and therefore the containers network is set up. Check by: docker inspect ollama You will see an Gateway IP address that docker chose for that container, if you didn't some custom networking config. Maybe its something like "172.17.0.1" That one you will use for the base url: http://172.17.0.1:11434

Thanks. It returns: Error: No such object ollama

@GuokaiLiu

put http://host.docker.internal:11434 into the connection URL in OLLAMA setup

Could you kindly share the exact steps for 'put http://host.docker.internal:11434/ into the connection URL in OLLAMA setup'?

@GuokaiLiu

PRIVATEIP

What does PRIVATEIP mean here?

@liang996-tech

@shaoxinghua0623

I tried again and, magically, it works now.

Just change the URL to the IP address of the server where Ollama is installed, for example: http://192.168.0.100:11434

Then it's OK~

Thank you, this method works well!
