
Evaluation Pipeline Issue. #38

@SummerFall1819

Description

While working on my own implementation, I found a small bug in your code. Take qwen3vl as an example:

```python
while True:
    prediction = self.openai_chat_completions_create(
        model=self.model_name,
        messages=messages,
        retry_times=3,
        **self.runtime_conf,
    )
    if prediction is None:
        raise Exception("Error when fetching response from clients")
    try:
        parsed_response = parse_action_to_structure_output(
            prediction,
        )
        logger.info(f"Parsed response: \n{parsed_response}")
        break
    except Exception:
        if try_times > 0:
            logger.error("Error when parsing response from clients")
            logger.error(traceback.format_exc())
            prediction = None
            try_times -= 1
```

This code has a retry mechanism, but after three failures it never exits the loop, so it keeps calling the OpenAI API and logging errors indefinitely. A similar issue is reported in #32. Would you like to make a quick fix for that?
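As a sketch of one possible fix (not necessarily how you'd want to structure it): break out of the loop once the retry budget is exhausted instead of looping forever. The helpers `fetch` and `parse` below are hypothetical stand-ins for the client call and `parse_action_to_structure_output`:

```python
def get_parsed_response(fetch, parse, max_retries=3):
    """Retry fetch-then-parse, raising once the retry budget is exhausted.

    fetch and parse are hypothetical stand-ins for
    self.openai_chat_completions_create and parse_action_to_structure_output.
    """
    last_error = None
    for attempt in range(1, max_retries + 1):
        prediction = fetch()
        if prediction is None:
            raise RuntimeError("Error when fetching response from clients")
        try:
            # Success: return immediately instead of looping again
            return parse(prediction)
        except Exception as exc:
            # Parsing failed: remember the error and retry with a fresh prediction
            last_error = exc
    # Crucially, give up after max_retries instead of spinning (and billing) forever
    raise RuntimeError(
        f"Failed to parse response after {max_retries} attempts"
    ) from last_error
```

A bounded `for` loop makes it impossible to forget the exit condition, and chaining the last parse error via `raise ... from` preserves the traceback for debugging.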
