LMDeploy Release V0.0.10

Released by @lvhan028 on 26 Sep 12:52 · 913 commits to main since this release · b58a9df

What's Changed

🐞 Bug fixes

  • Fix a side effect of codellama support: sequence_start is always true when calling model.get_prompt by @lvhan028 in #466
  • Fix the missing meta instruction of the internlm-chat model by @lvhan028 in #470
  • Fix a race condition by @akhoroshev in #460
  • Fix compatibility issues with Pydantic 2 by @aisensiy in #465
  • Fix benchmark serving being unable to use the Qwen tokenizer by @AllentDan in #443
  • Fix a memory leak by @lvhan028 in #488
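The sequence_start fix above (#466) concerns whether a chat template prepends its meta instruction (system prompt) only on the first turn of a conversation. A minimal sketch of that pattern, with hypothetical names rather than LMDeploy's actual template class:

```python
class ChatTemplate:
    """Minimal chat-template sketch (hypothetical, not LMDeploy's class).

    The meta instruction should be prepended only when sequence_start is
    True, i.e. on the first turn of a conversation; repeating it on every
    turn (the side effect fixed in #466) corrupts the prompt.
    """

    def __init__(self, meta_instruction="You are a helpful assistant.\n"):
        self.meta_instruction = meta_instruction

    def get_prompt(self, prompt, sequence_start=True):
        # Prepend the meta instruction only at the start of a sequence;
        # follow-up turns must not repeat it.
        if sequence_start:
            return f"{self.meta_instruction}<user>{prompt}</user>"
        return f"<user>{prompt}</user>"


template = ChatTemplate()
first = template.get_prompt("hi", sequence_start=True)
follow_up = template.get_prompt("and then?", sequence_start=False)
```

The bug fixed here was the inverse of this sketch: sequence_start was forced to true on every call, so the meta instruction was re-emitted on follow-up turns.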

Full Changelog: v0.0.9...v0.0.10