Open
Description
OS
Windows
GPU Library
CUDA 12.x
Python version
3.12
Pytorch version
TabbyAPI latest
Model
No response
Describe the bug
[BUG] I just updated TabbyAPI so I could use the latest ExLlama 0.29 with Gemma 3 27B EXL2, but asking it to tell a kids' story produces looping nonsense after 2-3 correct paragraphs. I tried the turboderp 4, 5, and 6 bpw models, limited the context length in TabbyAPI to 20000 and then 10000, and tried cache modes Q4, Q6, and FP16. Same behaviour every time.
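For reference, these are the settings I varied, shown as a config excerpt. This is a hypothetical sketch of TabbyAPI's `config.yml`; the key names follow the sample config but may differ between versions, and the model directory name is a placeholder:

```yaml
model:
  model_name: gemma-3-27b-exl2   # placeholder; tried 4, 5, and 6 bpw quants
  max_seq_len: 10000             # also tried 20000
  cache_mode: Q4                 # also tried Q6 and FP16
```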
Reproduction steps
Run the latest TabbyAPI with ExLlama 0.29 and load any of the Gemma 3 27B 4, 5, or 6 bpw models with cache mode Q4, Q6, or FP16. Ask the model to tell a kids' story about a wizard. The output degenerates into a nonsense loop after a couple of good paragraphs.
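A minimal request sketch for the reproduction, assuming TabbyAPI is serving its OpenAI-compatible API on the default `localhost:5000` (the port, endpoint path, model name, and API key are assumptions and depend on the local config):

```python
import json
import urllib.request

# Build the chat-completion payload that triggers the looping output.
payload = {
    "model": "gemma-3-27b-exl2",  # placeholder for the loaded EXL2 model
    "messages": [
        {"role": "user", "content": "Tell a kids' story about a wizard."}
    ],
    "max_tokens": 1024,
}

req = urllib.request.Request(
    "http://localhost:5000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <api-key>",  # placeholder key
    },
)

# Uncomment against a running server; the reply starts coherently and
# then repeats nonsense after the first few paragraphs.
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```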
Expected behavior
Coherent output for the full story, with no nonsense loops.
Logs
No response
Additional context
No response
Acknowledgements
- I have looked for similar issues before submitting this one.
- I understand that the developers have lives and my issue will be answered when possible.
- I understand the developers of this program are human, and I will ask my questions politely.