Releases: superlinear-ai/raglite

v1.0.0

11 Jun 15:55
376fe0d

🎉 RAGLite v1.0: DuckDB, Qwen3, parallel insertion, benchmarking, better retrieval quality

Release Highlights

  • 🐤 support for DuckDB (#137; see the usage sketch after this list)
  • 🐻 support for Qwen3 (#124)
  • ⚡️ parallel document insertion (#150)
  • 🏁 benchmarking with raglite bench (#150)
  • 🎯 better retrieval quality with improved multi-vector search, chunk quality, and chunk front matter (#123, #126, #132)
  • 💎 new and improved query adapter algorithm (#146, #147, #149)
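
To get started with the new DuckDB backend, a minimal sketch is shown below. The duckdb:/// URL scheme and the insert_document call are assumptions based on RAGLite's pre-1.0 usage; check the README for the authoritative v1.0 interface.

```python
# Minimal sketch: configure RAGLite with a local DuckDB database and insert a document.
# NOTE: the "duckdb:///" URL scheme and the insert_document signature are assumptions
# based on RAGLite's pre-1.0 usage; see the README for the current v1.0 API.
from pathlib import Path

from raglite import RAGLiteConfig, insert_document

config = RAGLiteConfig(
    db_url="duckdb:///raglite.db",        # local DuckDB file (assumed URL scheme)
    llm="gpt-4o-mini",                    # any LiteLLM-compatible LLM
    embedder="text-embedding-3-large",    # any LiteLLM-compatible embedder
)

insert_document(Path("report.pdf"), config=config)  # converts, chunks, and embeds the document
```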

What's Changed

New Contributors

Full Changelog: v0.7.0...v1.0.0

v0.7.0

17 Mar 13:22
f6495ae

What's Changed

New Contributors

Full Changelog: v0.6.2...v0.7.0

v0.6.2

06 Jan 22:27
290e2c0

What's Changed

  • fix: remove unnecessary stop sequence by @lsorber in #84

Full Changelog: v0.6.1...v0.6.2

v0.6.1

06 Jan 14:18
d1e1f39

What's Changed

  • fix: conditionally enable LlamaRAMCache by @lsorber in #83
  • fix(deps): exclude litellm versions that break get_model_info by @lsorber in #78
  • fix: improve (re)insertion speed by @lsorber in #80
  • fix: fix Markdown heading boundary probas by @lsorber in #81

Full Changelog: v0.6.0...v0.6.1

v0.6.0

05 Jan 15:39
b19963d

What's Changed

New Contributors

Full Changelog: v0.5.1...v0.6.0

v0.5.1

18 Dec 15:15
bf598dc

What's Changed

  • fix: improve output for empty databases by @lsorber in #68

Full Changelog: v0.5.0...v0.5.1

v0.5.0

17 Dec 09:49
2e9bfaf

What's Changed

New Contributors

Full Changelog: v0.4.1...v0.5.0

v0.4.1

05 Dec 20:50
0c5b7b5

What's Changed

  • fix: support embedding with LiteLLM for Ragas by @undo76 in #56
  • fix: add and enable OpenAI strict mode by @undo76 in #55

Full Changelog: v0.4.0...v0.4.1

v0.4.0

04 Dec 16:31
abb4d1b

What's Changed

  • feat: improve late chunking and optimize pgvector settings by @lsorber in #51
    • Add a workaround for #24 to increase the embedder's context size from 512 to a user-definable size.
    • Increase the default embedder context size to 1024 tokens (larger context sizes degrade bge-m3's performance).
    • Upgrade llama-cpp-python to the latest version.
    • More robust testing of rerankers with Kendall's rank correlation coefficient.
    • Optimize pgvector's settings.
    • Offer better control of oversampling in hybrid and vector search (see the sketch after this list).
    • Upgrade to PostgreSQL 17.
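
To illustrate the oversampling control, a hedged sketch follows. The hybrid_search call exists in RAGLite, but the oversample keyword below is a hypothetical parameter name used for illustration only; consult the function signature for the actual name.

```python
# Sketch: query-time control over oversampling in hybrid search.
# NOTE: `oversample` is a hypothetical keyword argument used for illustration;
# consult raglite's hybrid_search signature for the actual parameter name.
from raglite import RAGLiteConfig, hybrid_search

config = RAGLiteConfig(db_url="postgresql://user:pass@localhost:5432/raglite")

chunk_ids, scores = hybrid_search(
    "How is intelligence measured?",
    num_results=10,   # number of chunks returned after fusing keyword and vector results
    oversample=4,     # hypothetical: fetch 4x candidates per index before fusion
    config=config,
)
```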

Full Changelog: v0.3.0...v0.4.0

v0.3.0

03 Dec 18:26
0fd1970

What's Changed

  • feat: support prompt caching and apply Anthropic's long-context prompt format by @undo76 in #52

Full Changelog: v0.2.1...v0.3.0