
Conversation

@slekkala1 slekkala1 commented Sep 2, 2025

What does this PR do?

The notebook was reverted (#3259) because it still contained some local paths that I had missed correcting. Trying again now with the corrections.

Test Plan

Ran the Jupyter notebook

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Meta Open Source bot. label Sep 2, 2025
Review comment on the notebook's install cell:

!pip install fastapi uvicorn "langchain>=0.2" langchain-openai \

use uv pip install?
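
A minimal sketch of the suggested change, assuming the dependency list stays the same (only the first line of the original install cell is visible in the diff, so any remaining packages are omitted here):

```python
# Hypothetical single install cell using uv instead of pip; the rest of the
# notebook's dependencies would be added to this one line so installation
# happens in a single step.
!uv pip install fastapi uvicorn "langchain>=0.2" langchain-openai
```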

"source": [
"# Insert chunks into FAISS vector store\n",
"\n",
"response = client.vector_io.insert(vector_db_id=\"acme_docs\", chunks=chunks)"

yea let's use the new vector_stores APIs instead
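
A hedged sketch of what a vector_stores-based flow could look like, assuming the client mirrors the OpenAI SDK's vector_stores and files surface (method names and parameters may differ across versions, and the file name is hypothetical):

```python
# Sketch only: create a vector store and attach a source document; the
# vector_stores flow chunks and embeds the file server-side, so the manual
# chunking step above is no longer needed.
vector_store = client.vector_stores.create(name="acme_docs")

uploaded = client.files.create(
    file=open("acme_docs.txt", "rb"),  # hypothetical file name
    purpose="assistants",
)
client.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id=uploaded.id,
)
```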

@mattf mattf left a comment


ways to make this simpler -

  • !pip install everything in one step
  • use pkill -f llama_stack.core.server.server -e
  • use uv run --with llama-stack llama stack build ... --run for one step install & startup
  • avoid !pip install and subprocess(... "pip"...)
  • the vector_store api will do this chunking for you
  • pass base_url and api_key to ChatOpenAI instead of setting env vars (see the sketch after the questions below)
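
A hedged sketch of the server-lifecycle suggestions above, written as notebook cells; the build options are left elided ("...") exactly as in the bullet, since they depend on the chosen distribution:

```python
# Stop any previously started server; -e echoes what was killed.
!pkill -f llama_stack.core.server.server -e

# One-step install and startup via uv; replace "..." with the build options
# for the distribution the notebook uses.
!uv run --with llama-stack llama stack build ... --run
```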

questions -

  • why do you need to adjust UV_SYSTEM_PYTHON?
  • why are so many models listed?
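
For the last bullet in the list above, a minimal sketch of passing connection details directly to ChatOpenAI instead of setting environment variables; the base URL, API key, and model name are placeholders, not values taken from the notebook:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="<model-id-served-by-llama-stack>",       # placeholder model id
    base_url="http://localhost:8321/v1/openai/v1",  # assumed OpenAI-compatible endpoint
    api_key="none",                                 # local servers often accept a dummy key
)
```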

@reluctantfuturist reluctantfuturist mentioned this pull request Sep 3, 2025
41 tasks
@slekkala1 slekkala1 force-pushed the add-langchain-llamastack-nb branch from 0019616 to 7ae7d6a Compare September 9, 2025 18:15

@ehhuang ehhuang left a comment


LGTM!


slekkala1 commented Sep 9, 2025

ways to make this simpler -

  • !pip install everything in one step
  • use pkill -f llama_stack.core.server.server -e
  • use uv run --with llama-stack llama stack build ... --run for one step install & startup
  • avoid !pip install and subprocess(... "pip"...)
  • the vector_store api will do this chunking for you
  • pass base_url and api_key to ChatOpenAI instead of setting env vars

questions -

  • why do you need to adjust UV_SYSTEM_PYTHON?

For this, I have added a comment above, in case the user already has UV_SYSTEM_PYTHON set.
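
A minimal sketch of the kind of guard described here, assuming the notebook simply clears the variable before invoking uv (the actual notebook may only warn about it in a comment):

```python
import os

# If UV_SYSTEM_PYTHON is set, `uv pip install` targets the system interpreter
# rather than the active environment; clear it so installs land where the
# notebook's kernel expects them.
os.environ.pop("UV_SYSTEM_PYTHON", None)
```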

  • why are so many models listed?

Removed the long list and kept only the models for the provider used in the tutorial.

@mattf thanks for the helpful comments! I have made it simpler with the suggestions above.

@slekkala1

@mattf I will be landing this if there are no other comments to address.

@slekkala1

@mattf I will be landing this if there are no other comments to address.

OK, landing for now; feel free to raise a task if anything is still pending.

@slekkala1 slekkala1 merged commit c7ef1f1 into main Sep 11, 2025
6 checks passed
@slekkala1 slekkala1 deleted the add-langchain-llamastack-nb branch September 11, 2025 18:10
iamemilio pushed a commit to iamemilio/llama-stack that referenced this pull request Sep 24, 2025
…ck#3314)

# What does this PR do?
The notebook was reverted (llamastack#3259) because it still contained some local paths that I had missed correcting. Trying again now with the corrections.


## Test Plan
Ran the Jupyter notebook