bugfix(Azure): fix index out of range error due to Azure OpenAI responding with an empty chunk at first (#820)

Co-authored-by: 一帆 <[email protected]>
zfanswer and 一帆 authored Nov 28, 2023
1 parent d9cc102 commit 0b02451
Showing 1 changed file with 11 additions and 0 deletions.
11 changes: 11 additions & 0 deletions pilot/model/proxy/llms/chatgpt.py
@@ -172,6 +172,11 @@ def chatgpt_generate_stream(
res = client.chat.completions.create(messages=history, **payloads)
text = ""
for r in res:
# logger.info(str(r))
# Azure OpenAI response may have an empty choices body in the first chunk
# to avoid index out of range error
if not r.get("choices"):
continue
if r.choices[0].delta.content is not None:
content = r.choices[0].delta.content
text += content
@@ -186,6 +191,8 @@ def chatgpt_generate_stream(

text = ""
for r in res:
if not r.get("choices"):
continue
if r["choices"][0]["delta"].get("content") is not None:
content = r["choices"][0]["delta"]["content"]
text += content
@@ -220,6 +227,8 @@ async def async_chatgpt_generate_stream(
res = await client.chat.completions.create(messages=history, **payloads)
text = ""
for r in res:
if not r.get("choices"):
continue
if r.choices[0].delta.content is not None:
content = r.choices[0].delta.content
text += content
@@ -233,6 +242,8 @@ async def async_chatgpt_generate_stream(

text = ""
async for r in res:
if not r.get("choices"):
continue
if r["choices"][0]["delta"].get("content") is not None:
content = r["choices"][0]["delta"]["content"]
text += content
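For reference, a minimal sketch of the guard this commit adds, shown for the dict-style branch (openai < 1.0): Azure OpenAI streams can begin with a chunk whose `choices` list is empty, so the loop skips such chunks before indexing `choices[0]`. The `stream_chat` helper and the synthetic chunks are illustrative only and not code from the repository.

```python
# Illustrative sketch of the guard added by this commit; not repository code.
# Assumes dict-style streaming chunks (openai < 1.0), where each chunk is a dict.
def stream_chat(res):
    text = ""
    for r in res:
        # Azure OpenAI may send a first chunk with an empty "choices" list,
        # which would otherwise raise an index out of range error below.
        if not r.get("choices"):
            continue
        delta = r["choices"][0]["delta"]
        if delta.get("content") is not None:
            text += delta["content"]
            yield text

# Example with a synthetic stream: the first chunk has no choices and is skipped.
chunks = [
    {"choices": []},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": " world"}}]},
]
print(list(stream_chat(chunks))[-1])  # -> "Hello world"
```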

2 comments on commit 0b02451

@nuaabuaa07
This update causes an error.
The error message is as follows:

INFO [pilot.model.proxy.llms.chatgpt] Send request to real model gpt-3.5-turbo-1106
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
ERROR [pilot.model.cluster.worker.default_worker] Model inference error, detail: Traceback (most recent call last):
File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/model/cluster/worker/default_worker.py", line 154, in generate_stream
for output in generate_stream_func(
File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/model/llm_out/proxy_llm.py", line 38, in proxyllm_generate_stream
yield from generator_function(model, tokenizer, params, device, context_len)
File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/model/proxy/llms/chatgpt.py", line 178, in chatgpt_generate_stream
if not r.get("choices"):
AttributeError: 'ChatCompletionChunk' object has no attribute 'get'

Traceback (most recent call last):
File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/scene/base_chat.py", line 255, in nostream_call
self.prompt_template.output_parser.parse_model_nostream_resp(
File "/Users/jinzhiliang/githubRepo/DB-GPT/pilot/out_parser/base.py", line 122, in parse_model_nostream_resp
raise ValueError(
ValueError: Model server error!code=1, errmsg is LLMServer Generate Error, Please CheckErrorInfo.: 'ChatCompletionChunk' object has no attribute 'get'

ERROR [pilot.scene.base_chat] model response parase faild!Model server error!code=1, errmsg is LLMServer Generate Error, Please CheckErrorInfo.: 'ChatCompletionChunk' object has no attribute 'get'

@nuaabuaa07 nuaabuaa07 commented on 0b02451 Nov 29, 2023

I use the OpenAI API directly, not Azure, and the error above appears.
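The traceback in the comment above points at the cause: with the openai>=1.0 client, streamed chunks are `ChatCompletionChunk` objects rather than dicts, so the dict-style `r.get("choices")` guard raises AttributeError. A minimal sketch of an attribute-based guard for that branch (illustrative only, assuming the >=1.0 object API; not a patch from the repository):

```python
# Illustrative sketch only; not a patch from the repository.
# Assumes openai>=1.0 streaming, where each chunk is a ChatCompletionChunk
# object exposing a `.choices` list attribute instead of dict access.
def stream_chat_v1(res):
    text = ""
    for r in res:
        # Skip empty-choices chunks (e.g. the first Azure OpenAI chunk)
        # by checking the attribute rather than calling dict .get().
        if not r.choices:
            continue
        content = r.choices[0].delta.content
        if content is not None:
            text += content
            yield text
```

Checking `r.choices` for truthiness also covers the case where the attribute is an empty list, which is the situation the commit targets.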
