feat(translator): remove LLM <think>xxx</think>
#609
Comments
There is currently an issue, which is that the original text may itself actually contain `<think>` tags.
An easy way is, after getting the response, to add a filter that deletes the 'think' part and then proceeds with the normal output.
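A minimal sketch of such a filter, assuming the model wraps its reasoning in `<think>...</think>` blocks; the name `strip_think` and the exact pattern are assumptions, not the project's code:

```python
import re

# Non-greedy match so each <think>...</think> block is removed on its own;
# DOTALL lets the reasoning span multiple lines.
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_think(response: str) -> str:
    """Delete the 'think' part, then hand the rest to the normal output path."""
    return THINK_RE.sub("", response).lstrip()
```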
The current issue is how to distinguish between the model's own thinking output and a `<think>xxx</think>` that appears in the original text. If we are unable to distinguish them, the filter may accidentally remove a `<think>xxx</think>` that belongs to the original content.
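One way to hedge against that false positive: reasoning models generally emit their thinking before the answer, so a pattern anchored to the start of the response would leave any `<think>` that occurs later, i.e. inside the translated text itself, untouched. A sketch of that heuristic, which is my assumption and not the project's implementation:

```python
import re

# Only strip a <think> block that opens the response; later occurrences are
# presumed to come from the original text and are kept.
LEADING_THINK_RE = re.compile(r"\A\s*<think>.*?</think>\s*", re.DOTALL)

def strip_leading_think(response: str) -> str:
    return LEADING_THINK_RE.sub("", response, count=1)

# A <think> inside the translation itself survives:
assert strip_leading_think(
    "<think>pondering...</think>The literal tag <think> stays."
) == "The literal tag <think> stays."
```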
Using a regex …
How about simply adding a checkbox to let users choose whether the model being used is a deep-thinking model? 😎
Since the false positive rate of using a regex is low, …
After the rewrite in #586, adding parameters has become much more convenient.
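A sketch of how a per-model option like that checkbox could gate the filter; every name here (`reasoning_model`, `call_llm`) is hypothetical and not taken from #586:

```python
import re

THINK_AT_START = re.compile(r"\A\s*<think>.*?</think>\s*", re.DOTALL)

def call_llm(text: str) -> str:
    """Stand-in for the real model call (hypothetical)."""
    return f"<think>how to phrase this...</think>{text} [translated]"

def translate(text: str, *, reasoning_model: bool = False) -> str:
    # When the user flags the model as a deep-thinking one (the proposed
    # checkbox), strip the leading <think> block before normal output.
    response = call_llm(text)
    if reasoning_model:
        response = THINK_AT_START.sub("", response, count=1)
    return response

print(translate("hello", reasoning_model=True))  # -> "hello [translated]"
```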
ollama has no switch to turn off the think output, so the only option is to try to cut the Gordian knot and deal with it via crude regex handling.
Problem description
When translating with a local ollama model, using a thinking model such as deepseek r1 causes the thinking part to be written into the translation result as well, which also produces layout errors.
PDF that can be used for testing:
test.pdf
Test document
Important
Please provide a PDF document for reproducing the issue.