Offline LLMs #813
Unanswered
Hrishikesh007788 asked this question in Q&A

@gventuri Is there a way to connect pandasai with an LLM on our local machine, completely offline? And is there a way to serve the local LLM on a server so that anyone can access it?

Replies: 3 comments 1 reply
- Same question; I can't find a way to do it.
- Are you trying with LM Studio? If so, see the sketch below.
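  A minimal sketch of that route, assuming a recent pandasai release that ships the LocalLLM wrapper for OpenAI-compatible servers and LM Studio serving on its default local endpoint (http://localhost:1234/v1); the model name and example data below are made up for illustration:

  ```python
  import pandas as pd

  from pandasai import SmartDataframe
  from pandasai.llm.local_llm import LocalLLM

  # LM Studio exposes an OpenAI-compatible HTTP server; point pandasai's
  # LocalLLM wrapper at it (LM Studio's default address assumed here).
  llm = LocalLLM(
      api_base="http://localhost:1234/v1",
      model="local-model",  # hypothetical name; use whichever model LM Studio has loaded
  )

  # Hypothetical example data; any pandas DataFrame works.
  sales = pd.DataFrame({
      "country": ["United States", "United Kingdom", "France"],
      "revenue": [5000, 3200, 2900],
  })

  df = SmartDataframe(sales, config={"llm": llm})
  ```

  This also speaks to the second half of the question: if the LM Studio server is reachable on the network rather than only on localhost, other machines can use the same snippet with api_base pointed at the host's address.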
- @Hrishikesh007788 Just import the local LLM you want to use via LangChain and pass it to a SmartDataframe. Example:

  ```python
  from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
  from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

  from pandasai import SmartDataframe

  # Load a local Hugging Face model (gpt2 is just a small placeholder)
  model_id = "gpt2"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id)

  # Wrap the model in a text-generation pipeline and expose it as a LangChain LLM
  pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=100)
  llm = HuggingFacePipeline(pipeline=pipe)

  # Pass the local LLM to pandasai via the config
  df = SmartDataframe(..., config={"llm": llm})
  ```

  Hope this helps.
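  A short usage sketch continuing the snippet above, assuming pandasai's SmartDataframe.chat interface and made-up data standing in for the ...:

  ```python
  import pandas as pd

  from pandasai import SmartDataframe

  # Hypothetical example data in place of the ... above
  sales = pd.DataFrame({
      "country": ["United States", "United Kingdom", "France"],
      "revenue": [5000, 3200, 2900],
  })

  # `llm` is the HuggingFacePipeline instance built in the previous snippet
  df = SmartDataframe(sales, config={"llm": llm})

  # The query is answered by the locally loaded model
  print(df.chat("Which country has the highest revenue?"))
  ```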