
Conversation

@gitbuda (Member) commented Sep 24, 2025

  • Make it work with a fast local LLM
  • Profile the execution -> per page/small doc it is slow: ~10-12 s/page, and local gpt-oss-20b on a legacy RTX 3060 takes ~85 s; the multiple LLM calls per page account for most of that time (see the timing sketch below)
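
For context, here is a minimal sketch of how that per-page timing can be reproduced, assuming the local model (e.g. gpt-oss-20b) is served behind an OpenAI-compatible endpoint. The URL, model name, and prompts below are placeholders, not code from this PR:

```python
# Minimal timing sketch: measure how long each LLM call takes for one page.
# Assumes a local OpenAI-compatible server at http://localhost:8000/v1 serving
# gpt-oss-20b -- the endpoint, model name, and prompts are placeholders.
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")


def time_llm_call(prompt: str, model: str = "gpt-oss-20b") -> float:
    """Send one chat completion and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return time.perf_counter() - start


# A few sequential calls per page add up quickly, which is why a single page
# can take ~10 s+ even when each individual call only takes a few seconds.
page_text = "..."  # contents of one page/small document
prompts = [
    f"Extract entities from:\n{page_text}",
    f"Extract relationships from:\n{page_text}",
    f"Summarize:\n{page_text}",
]
latencies = [time_llm_call(p) for p in prompts]
print(f"per-call: {[f'{t:.1f}s' for t in latencies]}, total: {sum(latencies):.1f}s")
```

The only point of the sketch is that a handful of sequential LLM calls per page dominates the ~10-12 s/page figure, so reducing the number of calls (or batching them) is where the speedup has to come from.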

@gitbuda self-assigned this Sep 24, 2025
@gitbuda marked this pull request as ready for review October 4, 2025 14:35
@gitbuda requested a review from @antejavor as a code owner October 4, 2025 14:35
@antejavor (Contributor) left a comment


Are you planning to release this on PyPI?

@gitbuda (Member, Author) commented Oct 19, 2025

Merging this.
Future work is tracked under #88 (once the LightRAG integration is polished, I think we should publish it on PyPI).

@gitbuda merged commit fe70838 into main Oct 19, 2025
2 checks passed
@gitbuda deleted the add-lightrag-integration branch October 19, 2025 18:07
