Highlights
- Pro
Pinned
- Strong-AI-Lab/Logical-and-abstract-reasoning (Public): Evaluation on Logical Reasoning and Abstract Reasoning Challenges
- Strong-AI-Lab/counterfactual-llm-inference (Public)
- openai/evals (Public): Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.