Issue encountered
When evaluating a reasoning model via litellm, it seems like `reasoning_content` is ignored by lighteval as of now:

```python
result: list[str] = [choice.message.content for choice in response.choices]
```

It would be helpful to keep the `reasoning_content` of responses as details even if it is not used for computing metrics. Inspecting the reasonings can be helpful for development.
Solution/Feature
litellm can return `reasoning_content` separately from `content` (see https://github.com/BerriAI/litellm/blob/aac7cdf2c8b58f71042c661bc732d936fd4b09ce/litellm/types/utils.py#L575). However, lighteval's `ModelResponse` does not have an attribute for reasoning texts:

```python
class ModelResponse:
```
I guess adding an attribute to it and setting its value in `LiteLLMClient` would be sufficient for the litellm entry point. I can make a pull request for the litellm entry point if this sounds OK. Other entry points would require similar modifications, but I haven't checked.