Conversation

github-actions bot (Contributor) commented Nov 14, 2025

This PR adds benchmark results for the openai/gpt-5.1 model.

The following files have been updated:

  • src/benchmark/results.json - Raw benchmark results (a hypothetical entry shape is sketched after this list)
  • src/benchmark/validation-results.json - Validation results against the human baseline
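To make the file contents easier to picture, here is a minimal TypeScript sketch of what one results.json entry might look like. This is an assumption, not the actual schema: every field name below is inferred only from the Bugbot summary's mention of SQL, outputs, metrics, and attempts per pipe_XX query.

```typescript
// Hypothetical shape only — inferred from the PR summary, not read from the file.
interface BenchmarkResult {
  model: string;                    // e.g. "openai/gpt-5.1"
  query: string;                    // e.g. "pipe_01" (the summary mentions pipe_XX queries)
  sql: string;                      // SQL the model generated for the query
  output: string;                   // raw output produced by running that SQL
  metrics: Record<string, number>;  // per-run metrics (actual keys unknown)
  attempts: number;                 // how many attempts the run took
}
```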

This PR was automatically generated by the benchmark workflow.

Note: If you don't want to merge this PR, close it; the model will then be added to the untested list to prevent re-processing.

@alrocar


Note

Adds openai/gpt-5.1 to the benchmark suite, including raw run outputs and validation comparisons against the human baseline.

  • Config:
    • Add openai model gpt-5.1 to src/benchmark-config.json.
  • Benchmarks:
    • Append extensive gpt-5.1 run data to src/benchmark/results.json (SQL, outputs, metrics, attempts for many pipe_XX queries).
  • Validation:
    • Update src/benchmark/validation-results.json with per-query results for openai/gpt-5.1, including match statuses, distances, and aggregate summary metrics (a hypothetical shape is sketched after this list).
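As above, a hedged TypeScript sketch of how the per-query validation records might be shaped, assuming field names derived only from the summary's wording (match statuses, distances, aggregate summary metrics); the actual validation-results.json schema may differ.

```typescript
// Hypothetical shape only — field names are guesses from the summary, not the real schema.
interface ValidationEntry {
  query: string;        // e.g. "pipe_01"
  matchStatus: string;  // e.g. "match" or "mismatch" vs. the human baseline (exact values unknown)
  distance: number;     // distance between model output and the baseline output
}

interface ModelValidation {
  model: string;                   // "openai/gpt-5.1"
  results: ValidationEntry[];      // one entry per benchmark query
  summary: Record<string, number>; // aggregate summary metrics (actual keys unknown)
}
```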

Written by Cursor Bugbot for commit 206a345. This will update automatically on new commits.

vercel bot commented Nov 14, 2025

The latest updates on your projects.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| llm-benchmark | Ready | Preview | Comment | Nov 14, 2025 0:21am |
