bugfix(Azure): fix index out of range error due to Azure Openai responses an empty chunk at first (#820)
Co-authored-by: 一帆 <[email protected]>
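A hypothetical sketch (not the repository's actual diff) of the guard this commit describes: Azure OpenAI can send a first streaming chunk whose `choices` list is empty, so indexing `choices[0]` unconditionally raises an index-out-of-range error. Skipping chunks with no choices avoids it. The `stream_text` helper name and the dict-shaped chunks are assumptions for illustration.

```python
def stream_text(chunks):
    """Yield only the text deltas, skipping chunks with no choices.

    Hypothetical helper: Azure OpenAI may send an empty first chunk,
    so chunk["choices"][0] must not be accessed unconditionally.
    """
    for chunk in chunks:
        if not chunk.get("choices"):  # empty first chunk from Azure
            continue
        content = chunk["choices"][0].get("delta", {}).get("content")
        if content:
            yield content
```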
Showing 1 changed file with 11 additions and 0 deletions.
0b02451
This update causes an error. The error message is as follows:
INFO [pilot.model.proxy.llms.chatgpt] Send request to real model gpt-3.5-turbo-1106
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
ERROR [pilot.model.cluster.worker.default_worker] Model inference error, detail: Traceback (most recent call last):
  File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/model/cluster/worker/default_worker.py", line 154, in generate_stream
    for output in generate_stream_func(
  File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/model/llm_out/proxy_llm.py", line 38, in proxyllm_generate_stream
    yield from generator_function(model, tokenizer, params, device, context_len)
  File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/model/proxy/llms/chatgpt.py", line 178, in chatgpt_generate_stream
    if not r.get("choices"):
AttributeError: 'ChatCompletionChunk' object has no attribute 'get'
Traceback (most recent call last):
  File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/scene/base_chat.py", line 255, in nostream_call
    self.prompt_template.output_parser.parse_model_nostream_resp(
  File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/out_parser/base.py", line 122, in parse_model_nostream_resp
    raise ValueError(
ValueError: Model server error!code=1, errmsg is LLMServer Generate Error, Please CheckErrorInfo.: 'ChatCompletionChunk' object has no attribute 'get'
ERROR [pilot.scene.base_chat] model response parase faild!Model server error!code=1, errmsg is LLMServer Generate Error, Please CheckErrorInfo.: 'ChatCompletionChunk' object has no attribute 'get'
(base) jinzhiliang@jinzhiliangdeMacBook-Pro DB-GPT % conda activate py310_dbgpt
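The traceback points at dict-style access on a streaming chunk: with openai>=1.0, streaming responses yield `ChatCompletionChunk` objects rather than dicts, so `r.get("choices")` raises `AttributeError`. A hypothetical sketch of an accessor that tolerates both shapes (the `extract_delta_content` name is an assumption, not the repository's API) and also skips the empty first chunk from Azure:

```python
def extract_delta_content(chunk) -> str:
    """Return the text delta from a streaming chunk, or "" if absent.

    Hypothetical sketch: openai>=1.0 yields ChatCompletionChunk objects,
    while older versions returned dicts, so both shapes are handled.
    """
    if isinstance(chunk, dict):  # legacy dict-style chunk
        choices = chunk.get("choices") or []
        if not choices:
            return ""  # Azure may send an empty first chunk
        delta = choices[0].get("delta") or {}
        return delta.get("content") or ""
    # object-style chunk (e.g. ChatCompletionChunk)
    choices = getattr(chunk, "choices", None) or []
    if not choices:
        return ""
    delta = getattr(choices[0], "delta", None)
    return getattr(delta, "content", None) or ""
```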
I use the OpenAI API directly, not Azure, and I see the same error as above.