
Followed the instructions step by step, but couldn't run my local LLM successfully. #819

Closed
GregChiang0201 opened this issue Jul 18, 2024 · 1 comment

@GregChiang0201

I think the program itself has no problem, and everything seems to work properly (I already used print statements to check every crucial part). When I run python3 run_localGPT.py, it shows Enter a query: as expected, but when I enter anything (except exit), the program gets stuck with no response, as shown below, and I always have to use Ctrl+C to shut it down.

2024-07-18 18:32:05,980 - INFO - run_localGPT.py:145 - LLM pipeline loaded.
2024-07-18 18:32:05,985 - INFO - run_localGPT.py:167 - Retrieval-based QA pipeline created.
2024-07-18 18:32:05,985 - INFO - run_localGPT.py:257 - Model is ready for queries. Enter 'exit' to quit the program.

Enter a query: explain pdf to me
2024-07-18 18:32:12,415 - INFO - run_localGPT.py:268 - Processing query: explain pdf to me
^C
Aborted!

@GregChiang0201
Author

The program has no problem; it was a device issue. I was only giving the program access to 1 CPU core. If I raise the number of CPU cores, it works fine.
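For anyone hitting the same symptom: a quick way to rule out this cause is to check how many cores the process can actually use before blaming the code. This is a minimal diagnostic sketch, not part of run_localGPT.py; it only uses the Python standard library, and the threshold of 2 cores is an assumption based on this report:

```python
import os

# Diagnostic sketch: how many CPU cores can this process actually use?
# On Linux, sched_getaffinity(0) reports the cores available to *this*
# process (which may be fewer than the machine has, e.g. under taskset
# or a container CPU limit). Fall back to os.cpu_count() elsewhere.
try:
    usable = len(os.sched_getaffinity(0))
except AttributeError:
    usable = os.cpu_count() or 1

print(f"Usable CPU cores: {usable}")
if usable < 2:
    print("Warning: only one core available; CPU-bound LLM inference "
          "may be so slow that it looks like a hang.")
```

If this prints 1 and the model runs on CPU, a query that seems stuck may in fact just be computing extremely slowly, which matches the behavior in this issue.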
