_id (string, len 24) | id (string, len 5–121) | author (string, len 2–42) | cardData (string, len 2–1.07M, nullable) | disabled (bool, 2 classes) | gated (null) | lastModified (timestamp[ns]) | likes (int64, 0–6.8k) | trendingScore (float64, 0–97) | private (bool, 1 class) | sha (string, len 40) | description (string, len 0–6.67k, nullable) | downloads (int64, 0–2.2M) | tags (sequence, len 1–7.92k) | createdAt (timestamp[ns]) | key (string, 1 class) | citation (string, len 0–10.7k, nullable) | paperswithcode_id (string, 645 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63990f21cc50af73d29ecfa3 | fka/awesome-chatgpt-prompts | fka | {"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]} | false | null | 2025-01-06T00:02:53 | 6,798 | 97 | false | 68ba7694e23014788dcc8ab5afe613824f45a05c | 🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0
| 5,377 | [
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
] | 2022-12-13T23:47:45 | null | null |
|
66cbf7ef92e9f5b19fcd65aa | cfahlgren1/react-code-instructions | cfahlgren1 | {"license": "mit", "pretty_name": "React Code Instructions"} | false | null | 2025-01-11T00:23:09 | 93 | 53 | false | 92e5efb16b9457c0eb5b862c6c2c61f4074dc17c |
React Code Instructions
Popular Queries
Number of instructions by Model
Unnested Messages
Instructions Added Per Day
Dataset of Claude Artifact esque React Apps generated by Llama 3.1 70B, Llama 3.1 405B, and Deepseek Chat V3.
Examples
Virtual Fitness Trainer Website
LinkedIn Clone
iPhone Calculator
Chipotle Waitlist
Apple Store
| 451 | [
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2024-08-26T03:35:11 | null | null |
|
67750882633d421965733171 | DAMO-NLP-SG/multimodal_textbook | DAMO-NLP-SG | {"license": "apache-2.0", "task_categories": ["text-generation", "summarization"], "language": ["en"], "tags": ["Pretraining", "Interleaved", "Reasoning"], "size_categories": ["1M<n<10M"]} | false | null | 2025-01-11T00:09:47 | 52 | 45 | false | 9b51910ec52c10ab5d02d4a67981e1291620188d |
Multimodal-Textbook-6.5M
Overview
This dataset is for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining", containing 6.5M images interleaved with 0.8B text tokens from instructional videos.
It provides a pre-training corpus in an interleaved image-text format. Specifically, our multimodal-textbook includes 6.5M keyframes extracted from instructional videos, interleaved with 0.8B ASR texts.
All the images and text are extracted from… See the full description on the dataset page: https://huggingface.co/datasets/DAMO-NLP-SG/multimodal_textbook. | 1,233 | [
"task_categories:text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2501.00958",
"region:us",
"Pretraining",
"Interleaved",
"Reasoning"
] | 2025-01-01T09:18:58 | null | null |
|
6758176e04e2f15d7bfacd54 | PowerInfer/QWQ-LONGCOT-500K | PowerInfer | {"license": "apache-2.0", "language": ["en"]} | false | null | 2024-12-26T10:19:19 | 89 | 36 | false | 10a787d967281599e9be6761717147817c018424 | This repository contains approximately 500,000 instances of responses generated using QwQ-32B-Preview language model. The dataset combines prompts from multiple high-quality sources to create diverse and comprehensive training data.
The dataset is available under the Apache 2.0 license.
Over 75% of the responses exceed 8,000 tokens in length. The majority of prompts were carefully crafted using persona-based methods to produce challenging instructions.
Bias, Risks, and Limitations… See the full description on the dataset page: https://huggingface.co/datasets/PowerInfer/QWQ-LONGCOT-500K.
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-10T10:26:54 | null | null |
|
6763e94724dee5a47c7c77f7 | agibot-world/AgiBotWorld-Alpha | agibot-world | {"pretty_name": "AgiBot World", "size_categories": ["n>1T"], "task_categories": ["other"], "language": ["en"], "tags": ["real-world", "dual-arm", "Robotics manipulation"], "extra_gated_prompt": "### AgiBot World COMMUNITY LICENSE AGREEMENT\nAgiBot World Alpha Release Date: December 30, 2024 All the data and code within this repo are under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Email": "text", "Country": "country", "Affiliation": "text", "Phone": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "Research interest": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the AgiBot Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the AgiBot Privacy Policy.", "extra_gated_button_content": "Submit"} | false | null | 2025-01-09T02:59:03 | 156 | 33 | false | 53f3739cc041164023f988d7c7b98f6af3f0d2c0 |
Key Features
1 million+ trajectories from 100 robots.
100+ real-world scenarios across 5 target domains.
Cutting-edge hardware: visual tactile sensors / 6-DoF dexterous hand / mobile dual-arm robots
Tasks involving:
Contact-rich manipulation
Long-horizon planning
Multi-robot collaboration
… See the full description on the dataset page: https://huggingface.co/datasets/agibot-world/AgiBotWorld-Alpha. | 8,715 | [
"task_categories:other",
"language:en",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us",
"real-world",
"dual-arm",
"Robotics manipulation"
] | 2024-12-19T09:37:11 | null | null |
|
66a6da71f0dc7c8df2e0f979 | OpenLeecher/lmsys_chat_1m_clean | OpenLeecher | {"language": ["en"], "size_categories": ["100K<n<1M"], "pretty_name": "Cleaned LMSYS dataset", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "category", "dtype": "string"}, {"name": "grounded", "dtype": "bool"}, {"name": "deepseek_response", "struct": [{"name": "moralization", "dtype": "int64"}, {"name": "reward", "dtype": "float64"}, {"name": "value", "dtype": "string"}]}, {"name": "phi-3-mini_response", "struct": [{"name": "moralization", "dtype": "int64"}, {"name": "reward", "dtype": "float64"}, {"name": "value", "dtype": "string"}]}, {"name": "flaw", "dtype": "string"}, {"name": "agreement", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1673196622, "num_examples": 273402}], "download_size": 906472159, "dataset_size": 1673196622}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2024-12-31T22:35:13 | 53 | 23 | false | e9f2f6838a2dbba87c216bb6bc406e8d7ce0f389 |
Cleaning and Categorizing
A few weeks ago, I had the itch to do some data crunching, so I began this project: to clean and classify lmsys-chat-1m. The process was somewhat long and tedious, but here is the quick overview:
1. Removing Pure Duplicate Instructions
The first step was to eliminate pure duplicate instructions. This involved:
Removing whitespace and punctuation.
Ensuring that if two instructions matched after that, only one was retained.
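The normalization described above (strip whitespace and punctuation, then compare) can be sketched roughly as follows. The card does not spell out the exact rules — in particular, the lowercasing here is an added assumption — so treat this as an illustrative approximation rather than the author's exact code:

```python
import string

def normalize(instruction: str) -> str:
    """Collapse an instruction to a comparison key: drop whitespace and
    punctuation. (Lowercasing is an assumption, not stated on the card.)"""
    remove = set(string.punctuation + string.whitespace)
    return "".join(ch for ch in instruction.lower() if ch not in remove)

def dedupe(instructions):
    """Keep only the first instruction for each normalized key."""
    seen = set()
    kept = []
    for inst in instructions:
        key = normalize(inst)
        if key not in seen:
            seen.add(key)
            kept.append(inst)
    return kept

# The second prompt differs only in punctuation/case, so it is dropped.
prompts = ["Write a poem about the sea.", "write a poem about the sea", "Write a haiku."]
print(dedupe(prompts))  # ['Write a poem about the sea.', 'Write a haiku.']
```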
This step… See the full description on the dataset page: https://huggingface.co/datasets/OpenLeecher/lmsys_chat_1m_clean. | 461 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-07-28T23:55:29 | null | null |
|
67449661149efb6edaa63b98 | HuggingFaceTB/finemath | HuggingFaceTB | {"license": "odc-by", "dataset_info": [{"config_name": "finemath-3plus", "features": [{"name": "url", "dtype": "string"}, {"name": "fetch_time", "dtype": "int64"}, {"name": "content_mime_type", "dtype": "string"}, {"name": "warc_filename", "dtype": "string"}, {"name": "warc_record_offset", "dtype": "int32"}, {"name": "warc_record_length", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "token_count", "dtype": "int32"}, {"name": "char_count", "dtype": "int32"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "crawl", "dtype": "string"}, {"name": "snapshot_type", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 137764105388.93857, "num_examples": 21405610}], "download_size": 65039196945, "dataset_size": 137764105388.93857}, {"config_name": "finemath-4plus", "features": [{"name": "url", "dtype": "string"}, {"name": "fetch_time", "dtype": "int64"}, {"name": "content_mime_type", "dtype": "string"}, {"name": "warc_filename", "dtype": "string"}, {"name": "warc_record_offset", "dtype": "int32"}, {"name": "warc_record_length", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "token_count", "dtype": "int32"}, {"name": "char_count", "dtype": "int32"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "crawl", "dtype": "string"}, {"name": "snapshot_type", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 39101488149.09091, "num_examples": 6699493}], "download_size": 18365184633, "dataset_size": 39101488149.09091}, {"config_name": "infiwebmath-3plus", "features": [{"name": "url", "dtype": "string"}, {"name": 
"metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "token_count", "dtype": "int64"}, {"name": "char_count", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96485696853.10182, "num_examples": 13882669}], "download_size": 46808660851, "dataset_size": 96485696853.10182}, {"config_name": "infiwebmath-4plus", "features": [{"name": "url", "dtype": "string"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "token_count", "dtype": "int64"}, {"name": "char_count", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40002719500.1551, "num_examples": 6296212}], "download_size": 19234328998, "dataset_size": 40002719500.1551}], "configs": [{"config_name": "finemath-3plus", "data_files": [{"split": "train", "path": "finemath-3plus/train-*"}]}, {"config_name": "finemath-4plus", "data_files": [{"split": "train", "path": "finemath-4plus/train-*"}]}, {"config_name": "infiwebmath-3plus", "data_files": [{"split": "train", "path": "infiwebmath-3plus/train-*"}]}, {"config_name": "infiwebmath-4plus", "data_files": [{"split": "train", "path": "infiwebmath-4plus/train-*"}]}]} | false | null | 2024-12-23T11:19:16 | 240 | 22 | false | 8f233cf84cff0b817b3ffb26d5be7370990dd557 |
📐 FineMath
What is it?
📐 FineMath consists of 34B tokens (FineMath-3+) and 54B tokens (FineMath-3+ with InfiMM-WebMath-3+) of mathematical educational content filtered from CommonCrawl. To curate this dataset, we trained a mathematical content classifier using annotations generated by LLama-3.1-70B-Instruct. We used the classifier to retain only the most educational mathematics content, focusing on clear explanations and step-by-step problem solving rather than… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/finemath. | 34,245 | [
"license:odc-by",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3847",
"region:us"
] | 2024-11-25T15:23:13 | null | null |
|
676f70846bf205795346d2be | FreedomIntelligence/medical-o1-reasoning-SFT | FreedomIntelligence | {"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en"], "tags": ["medical", "biology"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "medical_o1_sft.json"}]}]} | false | null | 2025-01-04T13:01:37 | 43 | 17 | false | 06ac0b8d4960fa84ef55198ea8086266f1e3da81 |
Introduction
This dataset is used to fine-tune HuatuoGPT-o1, a medical LLM designed for advanced medical reasoning. This dataset is constructed using GPT-4o, which searches for solutions to verifiable medical problems and validates them through a medical verifier.
For details, see our paper and GitHub repository.
Citation
If you find our data useful, please consider citing our work!
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT.
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | 2024-12-28T03:29:08 | null | null |
|
67734d5c7ec2413faa8d3c85 | PowerInfer/LONGCOT-Refine-500K | PowerInfer | {"language": ["en"], "license": "apache-2.0"} | false | null | 2025-01-02T06:10:43 | 34 | 17 | false | 88bf8410db01197006e572a46c88311720a23577 | This repository contains approximately 500,000 instances of responses generated using Qwen2.5-72B-Instruct. The dataset combines prompts from multiple high-quality sources to create diverse and comprehensive training data.
The dataset is available under the Apache 2.0 license.
Bias, Risks, and Limitations
This dataset is mainly in English.
The dataset inherits the biases, errors, and omissions known to exist in data used for seed sources and models used for data generation.… See the full description on the dataset page: https://huggingface.co/datasets/PowerInfer/LONGCOT-Refine-500K.
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-31T01:48:12 | null | null |
|
677c1f196b1653e3955dbce7 | Rapidata/text-2-image-Rich-Human-Feedback | Rapidata | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "word_scores", "dtype": "string"}, {"name": "alignment_score_norm", "dtype": "float32"}, {"name": "coherence_score_norm", "dtype": "float32"}, {"name": "style_score_norm", "dtype": "float32"}, {"name": "alignment_heatmap", "sequence": {"sequence": "float16"}}, {"name": "coherence_heatmap", "sequence": {"sequence": "float16"}}, {"name": "alignment_score", "dtype": "float32"}, {"name": "coherence_score", "dtype": "float32"}, {"name": "style_score", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 25257389633.104, "num_examples": 13024}], "download_size": 17856619960, "dataset_size": 25257389633.104}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "task_categories": ["text-to-image", "text-classification", "image-classification", "image-to-text", "image-segmentation"], "language": ["en"], "tags": ["t2i", "preferences", "human", "flux", "midjourney", "imagen", "dalle", "heatmap", "coherence", "alignment", "style", "plausiblity"], "pretty_name": "Rich Human Feedback for Text to Image Models", "size_categories": ["1M<n<10M"]} | false | null | 2025-01-10T22:02:22 | 15 | 15 | false | 7ecb576d232b4bf63deaf8f0128e9ecc6d3f7b7d |
Building upon Google's research Rich Human Feedback for Text-to-Image Generation, we have collected over 1.5 million responses from 152'684 individual humans using Rapidata via the Python API. Collection took roughly 5 days.
If you get value from this dataset and would like to see more in the future, please consider liking it.
Overview
We asked humans to evaluate AI-generated images in style, coherence and prompt alignment. For images that contained flaws, participants were… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/text-2-image-Rich-Human-Feedback. | 82 | [
"task_categories:text-to-image",
"task_categories:text-classification",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:image-segmentation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2312.10240",
"region:us",
"t2i",
"preferences",
"human",
"flux",
"midjourney",
"imagen",
"dalle",
"heatmap",
"coherence",
"alignment",
"style",
"plausiblity"
] | 2025-01-06T18:21:13 | null | null |
|
66212f29fb07c3e05ad0432e | HuggingFaceFW/fineweb | HuggingFaceFW | {"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": 
"CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, 
{"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": 
"train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]} | false | null | 2025-01-03T11:58:46 | 1,813 | 13 | false | e31fdfd3918d4b48e837d69d274e624a067d7091 |
🍷 FineWeb
15 trillion tokens of the finest data the 🌐 web has to offer
What is it?
The 🍷 FineWeb dataset consists of more than 15T tokens of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large scale data processing library.
🍷 FineWeb was originally meant to be a fully open replication of 🦅
RefinedWeb, with a release of the full… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb. | 153,058 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:10B<n<100B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.01116",
"arxiv:2109.07445",
"arxiv:2406.17557",
"doi:10.57967/hf/2493",
"region:us"
] | 2024-04-18T14:33:13 | null | null |
|
6695831f2d25bd04e969b0a2 | AI-MO/NuminaMath-CoT | AI-MO | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2495457595.0398345, "num_examples": 859494}, {"name": "test", "num_bytes": 290340.31593470514, "num_examples": 100}], "download_size": 1234351634, "dataset_size": 2495747935.355769}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["aimo", "math"], "pretty_name": "NuminaMath CoT"} | false | null | 2024-11-25T05:31:43 | 302 | 13 | false | 9d8d210c9f6a36c8f3cd84045668c9b7800ef517 |
Dataset Card for NuminaMath CoT
Dataset Summary
Approximately 860k math problems, where each solution is formatted in a Chain of Thought (CoT) manner. The sources of the dataset range from Chinese high school math exercises to US and international mathematics olympiad competition problems. The data were primarily collected from online exam paper PDFs and mathematics discussion forums. The processing steps include (a) OCR from the original PDFs, (b) segmentation… See the full description on the dataset page: https://huggingface.co/datasets/AI-MO/NuminaMath-CoT. | 3,345 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"aimo",
"math"
] | 2024-07-15T20:14:23 | null | null |
|
66a1d16a27fd84b81d732482 | TEAMREBOOTT-AI/SciCap-MLBCAP | TEAMREBOOTT-AI | {"license": "cc-by-nc-sa-4.0", "task_categories": ["text-generation", "image-to-text"], "language": ["en"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "figure_type", "dtype": "string"}, {"name": "ocr", "dtype": "string"}, {"name": "paragraph", "dtype": "string"}, {"name": "mention", "dtype": "string"}, {"name": "figure_description", "dtype": "string"}, {"name": "mlbcap_long", "dtype": "string"}, {"name": "mlbcap_short", "dtype": "string"}, {"name": "categories", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2444177418.129, "num_examples": 47639}], "download_size": 2487129056, "dataset_size": 2444177418.129}, "size_categories": ["10K<n<100K"]} | false | null | 2025-01-07T13:56:33 | 13 | 13 | false | 44f062ec4e5ec42898326cbea2f80f147a1ba861 |
MLBCAP: Multi-LLM Collaborative Caption Generation in Scientific Documents
Paper: MLBCAP has been accepted for presentation at AI4Research @ AAAI 2025.
Introduction
Scientific figure captioning is a challenging task that demands contextually accurate descriptions of visual content. Existing approaches often oversimplify the task by treating it as either an image-to-text conversion or text summarization problem, leading to suboptimal results. Furthermore… See the full description on the dataset page: https://huggingface.co/datasets/TEAMREBOOTT-AI/SciCap-MLBCAP. | 43 | [
"task_categories:text-generation",
"task_categories:image-to-text",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2501.02552",
"region:us"
] | 2024-07-25T04:15:38 | null | null |
|
673a1149a7a311f5bed5c624 | HuggingFaceTB/smoltalk | HuggingFaceTB | {"language": ["en"], "tags": ["synthetic"], "pretty_name": "SmolTalk", "size_categories": ["1M<n<10M"], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": "data/all/train-*"}, {"split": "test", "path": "data/all/test-*"}]}, {"config_name": "smol-magpie-ultra", "data_files": [{"split": "train", "path": "data/smol-magpie-ultra/train-*"}, {"split": "test", "path": "data/smol-magpie-ultra/test-*"}]}, {"config_name": "smol-constraints", "data_files": [{"split": "train", "path": "data/smol-constraints/train-*"}, {"split": "test", "path": "data/smol-constraints/test-*"}]}, {"config_name": "smol-rewrite", "data_files": [{"split": "train", "path": "data/smol-rewrite/train-*"}, {"split": "test", "path": "data/smol-rewrite/test-*"}]}, {"config_name": "smol-summarize", "data_files": [{"split": "train", "path": "data/smol-summarize/train-*"}, {"split": "test", "path": "data/smol-summarize/test-*"}]}, {"config_name": "apigen-80k", "data_files": [{"split": "train", "path": "data/apigen-80k/train-*"}, {"split": "test", "path": "data/apigen-80k/test-*"}]}, {"config_name": "everyday-conversations", "data_files": [{"split": "train", "path": "data/everyday-conversations/train-*"}, {"split": "test", "path": "data/everyday-conversations/test-*"}]}, {"config_name": "explore-instruct-rewriting", "data_files": [{"split": "train", "path": "data/explore-instruct-rewriting/train-*"}, {"split": "test", "path": "data/explore-instruct-rewriting/test-*"}]}, {"config_name": "longalign", "data_files": [{"split": "train", "path": "data/longalign/train-*"}, {"split": "test", "path": "data/longalign/test-*"}]}, {"config_name": "metamathqa-50k", "data_files": [{"split": "train", "path": "data/metamathqa-50k/train-*"}, {"split": "test", "path": "data/metamathqa-50k/test-*"}]}, {"config_name": "numina-cot-100k", "data_files": [{"split": "train", "path": "data/numina-cot-100k/train-*"}, {"split": "test", "path": 
"data/numina-cot-100k/test-*"}]}, {"config_name": "openhermes-100k", "data_files": [{"split": "train", "path": "data/openhermes-100k/train-*"}, {"split": "test", "path": "data/openhermes-100k/test-*"}]}, {"config_name": "self-oss-instruct", "data_files": [{"split": "train", "path": "data/self-oss-instruct/train-*"}, {"split": "test", "path": "data/self-oss-instruct/test-*"}]}, {"config_name": "systemchats-30k", "data_files": [{"split": "train", "path": "data/systemchats-30k/train-*"}, {"split": "test", "path": "data/systemchats-30k/test-*"}]}]} | false | null | 2024-11-26T11:02:25 | 275 | 13 | false | 5a40ecb185e55dd30edf3c24b77e67f6ea0d659b |
SmolTalk
Dataset description
This is a synthetic dataset designed for supervised finetuning (SFT) of LLMs. It was used to build the SmolLM2-Instruct family of models and contains 1M samples.
During the development of SmolLM2, we observed that models finetuned on public SFT datasets underperformed compared to other models with proprietary instruction datasets. To address this gap, we created new synthetic datasets that improve instruction following while covering… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/smoltalk. | 6,397 | [
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"synthetic"
] | 2024-11-17T15:52:41 | null | null |
|
6763bd205297513b0f262714 | unitreerobotics/LAFAN1_Retargeting_Dataset | unitreerobotics | {"task_categories": ["robotics"]} | false | null | 2024-12-24T04:15:25 | 29 | 13 | false | d4da0161e39e42859148a51bcdf6d74273d2bc01 |
LAFAN1 Retargeting Dataset
To make the motion of humanoid robots more natural, we retargeted LAFAN1 motion capture data to Unitree's humanoid robots, supporting three models: H1, H1_2, and G1. This retargeting was achieved through numerical optimization based on Interaction Mesh and IK, considering end-effector pose constraints, as well as joint position and velocity constraints, to prevent foot slippage. It is important to note that the retargeting only accounted for kinematic… See the full description on the dataset page: https://huggingface.co/datasets/unitreerobotics/LAFAN1_Retargeting_Dataset. | 369 | [
"task_categories:robotics",
"modality:3d",
"region:us"
] | 2024-12-19T06:28:48 | null | null |
|
673e9e53cdad8a9744b0bf1b | O1-OPEN/OpenO1-SFT | O1-OPEN | {"license": "apache-2.0", "task_categories": ["question-answering"], "language": ["en", "zh"], "size_categories": ["10K<n<100K"]} | false | null | 2024-12-17T02:30:09 | 319 | 12 | false | 63112de109aa755e9cdfad63a13f08a92dd7df36 |
SFT Data for CoT Activation
This repository contains the dataset used for fine-tuning a language model using SFT for Chain-of-Thought Activation.
The dataset is designed to enhance the model's ability to generate coherent and logical reasoning sequences.
By using this dataset, the model can learn to produce detailed and structured reasoning steps, enhancing its performance on complex reasoning tasks.
Statistics
Total Records: 77,685… See the full description on the dataset page: https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT. | 2,100 | [
"task_categories:question-answering",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-11-21T02:43:31 | null | null |
|
676593a303cc6dbb6e857610 | Rapidata/text-2-video-human-preferences | Rapidata | {"license": "apache-2.0", "task_categories": ["text-to-video", "video-classification"], "tags": ["human", "preferences", "coherence", "plausibilty", "style", "alignment"], "language": ["en"], "pretty_name": "Human Preferences for Text to Video Models", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "video1", "dtype": "string"}, {"name": "video2", "dtype": "string"}, {"name": "weighted_results1_Alignment", "dtype": "float64"}, {"name": "weighted_results2_Alignment", "dtype": "float64"}, {"name": "detailedResults_Alignment", "list": [{"name": "userDetails", "struct": [{"name": "country", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "userScore", "dtype": "float64"}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Coherence", "dtype": "float64"}, {"name": "weighted_results2_Coherence", "dtype": "float64"}, {"name": "detailedResults_Coherence", "list": [{"name": "userDetails", "struct": [{"name": "country", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "userScore", "dtype": "float64"}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Preference", "dtype": "float64"}, {"name": "weighted_results2_Preference", "dtype": "float64"}, {"name": "detailedResults_Preference", "list": [{"name": "userDetails", "struct": [{"name": "country", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "userScore", "dtype": "float64"}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "file_name1", "dtype": "string"}, {"name": "file_name2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 478042, "num_examples": 316}], "download_size": 121718, "dataset_size": 478042}, "configs": [{"config_name": "default", "data_files": [{"split": "train", 
"path": "data/train-*"}]}]} | false | null | 2025-01-10T21:59:03 | 12 | 12 | false | 09cea3e8bab0791b3ea101535af6017ca3edd8de |
Rapidata Video Generation Preference Dataset
This dataset was collected in ~12 hours using the Rapidata Python API, accessible to anyone and ideal for large scale data annotation.
The data collected in this dataset informs our text-2-video model benchmark. We just started, so currently only two models are represented in this set:
Sora
Hunyuan
Pika 2.0 is currently being evaluated and will be added next.
Explore our latest model rankings on our website.
If you get value… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences. | 59 | [
"task_categories:text-to-video",
"task_categories:video-classification",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"human",
"preferences",
"coherence",
"plausibilty",
"style",
"alignment"
] | 2024-12-20T15:56:19 | null | null |
|
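Each comparison row described in the text-2-video-human-preferences card above carries weighted vote totals for the two models on each criterion (e.g. `weighted_results1_Preference` vs `weighted_results2_Preference`). A minimal sketch of turning those pairwise totals into per-model win counts; the field names are simplified and the sample rows below are made up, not real data.

```python
from collections import defaultdict

def tally_pairwise_wins(rows):
    """Count how many pairwise comparisons each model won on one criterion.

    Each row holds the two model names and their weighted vote totals;
    ties award a win to neither side.
    """
    wins = defaultdict(int)
    for r in rows:
        if r["weighted_results1"] > r["weighted_results2"]:
            wins[r["model1"]] += 1
        elif r["weighted_results2"] > r["weighted_results1"]:
            wins[r["model2"]] += 1
    return dict(wins)

# Made-up rows: field names are simplified from the card's columns.
rows = [
    {"model1": "sora", "model2": "hunyuan",
     "weighted_results1": 12.5, "weighted_results2": 30.1},
    {"model1": "hunyuan", "model2": "sora",
     "weighted_results1": 8.0, "weighted_results2": 7.2},
]
print(tally_pairwise_wins(rows))  # {'hunyuan': 2}
```

The same tally run once per criterion gives the per-criterion leaderboard the benchmark reports.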
677396c13cd7faf7e8f9dc8c | PRIME-RL/Eurus-2-RL-Data | PRIME-RL | {"license": "mit"} | false | null | 2025-01-06T11:21:52 | 19 | 12 | false | 5cbc5bc54c9c8417afd3539fb267422c33b525e6 |
Eurus-2-RL-Data
Links
Blog
🤗 PRIME Collection
Introduction
Eurus-2-RL-Data is a high-quality RL training dataset of mathematics and coding problems with outcome verifiers (LaTeX answers for math and test cases for coding).
For math, we source from NuminaMath-CoT. The problems span from Chinese high school mathematics to International Mathematical Olympiad competition questions.
For coding, we source from APPS, CodeContests, TACO, and… See the full description on the dataset page: https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data. | 149 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.01981",
"region:us"
] | 2024-12-31T07:01:21 | null | null |
|
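The Eurus-2-RL-Data card above says coding problems ship with test cases as outcome verifiers. A hedged sketch of what such a check can look like: run a candidate function against input/output pairs and return a binary reward. The reward shape and helper names are illustrative assumptions, not the dataset's actual harness.

```python
def outcome_reward(candidate_fn, test_cases):
    """Binary outcome verifier: 1.0 if every test case passes, else 0.0.

    `test_cases` is a list of (args, expected_output) pairs; any crash
    counts as a failure, as an RL outcome check would treat it.
    """
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0.0
        except Exception:
            return 0.0
    return 1.0

# Toy problem: "return the sum of a list", with two test cases.
cases = [(([1, 2, 3],), 6), (([],), 0)]
print(outcome_reward(lambda xs: sum(xs), cases))             # 1.0
print(outcome_reward(lambda xs: max(xs, default=0), cases))  # 0.0
```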
6775e1c326815bf20d874413 | fal/cosmos-openvid-1m | fal | {"size_categories": ["100K<n<1M"], "viewer": true, "license": "apache-2.0"} | false | null | 2025-01-09T02:12:51 | 17 | 12 | false | 10b41fc29006eff62ff64b8795b8ae8ef7ff9cde |
Cosmos-Tokenized OpenVid-1M
How to use
Shards are stored in parquet format.
It has 4 columns: serialized_latent, caption, fps, video.
serialized_latent is the latent vector of the video, serialized using torch.save().
Please use the following function to deserialize it:
def deserialize_tensor(
    serialized_tensor: bytes, device: Optional[str] = None
) -> torch.Tensor:
    return torch.load(
        io.BytesIO(serialized_tensor)… See the full description on the dataset page: https://huggingface.co/datasets/fal/cosmos-openvid-1m. | 671 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-02T00:45:55 | null | null |
|
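The deserialization snippet in the cosmos-openvid-1m card above is cut off; it can be completed into a runnable roundtrip. This sketch assumes only what the card states: each `serialized_latent` is a plain tensor written with `torch.save()` (the stand-in latent below is random, not a real row).

```python
import io
from typing import Optional

import torch

def deserialize_tensor(
    serialized_tensor: bytes, device: Optional[str] = None
) -> torch.Tensor:
    # torch.save wrote the tensor into a byte buffer; torch.load reads it back.
    return torch.load(io.BytesIO(serialized_tensor), map_location=device)

# Roundtrip with a stand-in latent (real rows hold video latents).
latent = torch.randn(4, 8, 8)
buf = io.BytesIO()
torch.save(latent, buf)
restored = deserialize_tensor(buf.getvalue(), device="cpu")
assert torch.equal(latent, restored)
print(tuple(restored.shape))  # (4, 8, 8)
```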
67324e20809e988d76c9e982 | eltorio/ROCOv2-radiology | eltorio | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "cui", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 13464639396.75, "num_examples": 59962}, {"name": "validation", "num_bytes": 2577450447, "num_examples": 9904}, {"name": "test", "num_bytes": 2584850128.125, "num_examples": 9927}], "download_size": 18621371902, "dataset_size": 18626939971.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "language": ["en"], "license": "cc-by-nc-sa-4.0", "pretty_name": "ROCOv2", "tags": ["medical"]} | false | null | 2024-11-13T08:49:36 | 38 | 11 | false | 80ffeef4eb8d34d27cb5c2815305f1d8aee8a83c |
ROCOv2: Radiology Object in COntext version 2
Introduction
ROCOv2 is a multimodal dataset consisting of radiological images and associated medical concepts and captions extracted from the PMC Open Access Subset. It is an updated version of the ROCO dataset, adding 35,705 new images and improving concept extraction and filtering.
Dataset Overview
The ROCOv2 dataset contains 79,789 radiological images, each with a corresponding caption and medical… See the full description on the dataset page: https://huggingface.co/datasets/eltorio/ROCOv2-radiology. | 1,409 | [
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2405.10004",
"doi:10.57967/hf/3506",
"region:us",
"medical"
] | 2024-11-11T18:34:08 | null | null |
|
67744720363e2be467b7c2b5 | qingy2024/FineQwQ-142k | qingy2024 | {"language": ["en"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "10k", "num_bytes": 87273156.45129532, "num_examples": 10000}, {"name": "25k", "num_bytes": 218182891.12823832, "num_examples": 25000}, {"name": "50k", "num_bytes": 436365782.25647664, "num_examples": 50000}, {"name": "100k", "num_bytes": 872731564.5129533, "num_examples": 100000}, {"name": "142k", "num_bytes": 1239278821.6083937, "num_examples": 142000}], "download_size": 1265768860, "dataset_size": 2853832215.9573574}, "configs": [{"config_name": "default", "data_files": [{"split": "10k", "path": "data/10k-*"}, {"split": "25k", "path": "data/25k-*"}, {"split": "50k", "path": "data/50k-*"}, {"split": "100k", "path": "data/100k-*"}, {"split": "142k", "path": "data/142k-*"}]}]} | false | null | 2025-01-07T18:00:44 | 12 | 11 | false | f7443bb54d207f590a5d13924c80c9eacfd66fe1 |
FineQwQ-142k
Original Sources: qingy2024/QwQ-LongCoT-Verified-130K (amphora/QwQ-LongCoT-130K), amphora/QwQ-LongCoT-130K-2, PowerInfer/QWQ-LONGCOT-500K.
Source breakdown (rows, share of total):
powerinfer/qwq-500k: only coding problems kept to avoid overlap (50,899 rows, 35.84%)
qwq-longcot-verified: verified math problems (64,096 rows, 45.14%)
amphora-magpie: diverse general purpose reasoning (27,015 rows, 19.02%)
| 116 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-31T19:33:52 | null | null |
|
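The percentage column in the FineQwQ-142k source table above is each source's share of the 142k split. A quick sketch reproducing it from the listed row counts; note the counts sum to 142,010 while the cardData reports 142,000 examples in the "142k" split, so the total below uses the split size.

```python
counts = {
    "powerinfer/qwq-500k": 50_899,
    "qwq-longcot-verified": 64_096,
    "amphora-magpie": 27_015,
}
total = 142_000  # size of the "142k" split; the listed counts sum to 142,010

shares = {name: round(100 * n / total, 2) for name, n in counts.items()}
print(shares)
# {'powerinfer/qwq-500k': 35.84, 'qwq-longcot-verified': 45.14, 'amphora-magpie': 19.02}
```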
677e5956e84a20259e43d869 | Rapidata/Translation-gpt4o_mini-v-gpt4o-v-deepl | Rapidata | {"dataset_info": {"features": [{"name": "original_text", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "total_responses", "dtype": "int64"}, {"name": "weighted_votes_1", "dtype": "float64"}, {"name": "weighted_votes_2", "dtype": "float64"}, {"name": "translation_model_1", "dtype": "string"}, {"name": "translation_model_2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}, {"name": "detailed_results", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10792019, "num_examples": 746}], "download_size": 1059070, "dataset_size": 10792019}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2025-01-10T22:02:52 | 11 | 11 | false | d8c9ca7441cb2d5a374264713bf3b70c7c31b34f |
If you get value from this dataset and would like to see more in the future, please consider liking it.
Overview
This dataset compares the translation capabilities of GPT-4o and GPT-4o-mini against DeepL across different languages. The comparison involved 100 distinct texts in 4 languages, with each translation being rated by 100 native speakers. Texts that were translated identically across platforms were excluded from the analysis.
Results
The comparative… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/Translation-gpt4o_mini-v-gpt4o-v-deepl. | 20 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-08T10:54:14 | null | null |
|
676f70968756741d47c691df | FreedomIntelligence/medical-o1-verifiable-problem | FreedomIntelligence | {"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en"], "tags": ["medical", "biology"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "medical_o1_verifiable_problem.json"}]}]} | false | null | 2024-12-30T02:56:46 | 18 | 10 | false | 46d5175eb74fdef3516d51d52e8c40db04bbdf35 |
Introduction
This dataset features open-ended medical problems designed to improve LLMs' medical reasoning. Each entry includes an open-ended question and a ground-truth answer based on challenging medical exams. The verifiable answers enable checking LLM outputs, refining their reasoning processes.
For details, see our paper and GitHub repository.
Citation
If you find our data useful, please consider citing our work!… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-verifiable-problem. | 152 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | 2024-12-28T03:29:26 | null | null |
|
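"Verifiable" in the medical-o1-verifiable-problem card above means a model's final answer can be checked against the ground-truth answer. A minimal sketch of such a check; real graders are usually fuzzier, and the normalization here is an illustrative choice, not the authors' method.

```python
import re

def normalize(answer: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for comparison."""
    answer = answer.strip().lower()
    answer = re.sub(r"[^\w\s]", "", answer)
    return re.sub(r"\s+", " ", answer)

def is_correct(model_output: str, ground_truth: str) -> bool:
    return normalize(model_output) == normalize(ground_truth)

print(is_correct("  Aortic stenosis. ", "aortic stenosis"))   # True
print(is_correct("Mitral regurgitation", "aortic stenosis"))  # False
```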
6655eb19d17e141dcb546ed5 | HuggingFaceFW/fineweb-edu | HuggingFaceFW | {"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb-Edu", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}], "features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "dump", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}, {"name": "token_count", "dtype": "int64"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, 
{"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": 
"data/CC-MAIN-2021-17/*"}]}, {"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": 
"train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", 
"data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": 
"CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, {"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, 
{"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]} | false | null | 2025-01-06T14:45:40 | 591 | 9 | false | 81fd597c805179172da5d94ac803cde08d95683d |
📚 FineWeb-Edu
1.3 trillion tokens of the finest educational data the 🌐 web has to offer
Paper: https://arxiv.org/abs/2406.17557
What is it?
📚 FineWeb-Edu dataset consists of 1.3T tokens and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.
To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by Llama3-70B-Instruct. We… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu. | 162,536 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.17557",
"arxiv:2404.14219",
"arxiv:2401.10020",
"arxiv:2109.07445",
"doi:10.57967/hf/2497",
"region:us"
] | 2024-05-28T14:32:57 | null | null |
|
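The FineWeb-Edu card above exposes one config per CommonCrawl dump, each pointing its `train` split at a `data/CC-MAIN-<year>-<week>/*` glob. As a minimal sketch of working with that `configs` list (the excerpt and the `configs_for_year` helper below are illustrative, not part of the card):

```python
import json

# Assumption: a small excerpt of the "configs" list from the card above;
# each entry maps a CommonCrawl dump to a loadable config name.
card_configs = json.loads("""
[{"config_name": "CC-MAIN-2015-48",
  "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]},
 {"config_name": "CC-MAIN-2014-52",
  "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]},
 {"config_name": "CC-MAIN-2013-20",
  "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]
""")

def configs_for_year(configs, year):
    # Config names follow the CC-MAIN-<year>-<week> naming convention,
    # so selecting a crawl year is a simple prefix match.
    return [c["config_name"]
            for c in configs
            if c["config_name"].startswith(f"CC-MAIN-{year}-")]

print(configs_for_year(card_configs, 2015))  # -> ['CC-MAIN-2015-48']
```

A selected config name could then be passed as the second argument to `datasets.load_dataset` to fetch just that dump.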
66bffb77453a7ef6c587560c | edinburgh-dawg/mmlu-redux-2.0 | edinburgh-dawg | {"dataset_info": [{"config_name": "abstract_algebra", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "anatomy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "astronomy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "business_ethics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "clinical_knowledge", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": 
"potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_medicine", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", 
"dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "computer_security", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "conceptual_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "econometrics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "electrical_engineering", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": 
"correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "elementary_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "formal_logic", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "global_facts", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": 
"string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_european_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_geography", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_government_and_politics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_macroeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, 
{"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_microeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_statistics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": 
"string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_us_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_world_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "human_aging", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "human_sexuality", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "international_law", "features": [{"name": "question", "dtype": "string"}, {"name": 
"choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "jurisprudence", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "logical_fallacies", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "machine_learning", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "management", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "marketing", "features": [{"name": "question", "dtype": "string"}, {"name": 
"choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "medical_genetics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "miscellaneous", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "moral_disputes", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "moral_scenarios", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "nutrition", "features": [{"name": "question", "dtype": "string"}, {"name": 
"choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "philosophy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "prehistory", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_accounting", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_law", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_medicine", "features": [{"name": "question", "dtype": 
"string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "public_relations", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "security_studies", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "sociology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "us_foreign_policy", "features": [{"name": 
"question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "virology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "world_religions", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}], "configs": [{"config_name": "abstract_algebra", "data_files": [{"split": "test", "path": "abstract_algebra/data-*"}]}, {"config_name": "anatomy", "data_files": [{"split": "test", "path": "anatomy/data-*"}]}, {"config_name": "astronomy", "data_files": [{"split": "test", "path": "astronomy/data-*"}]}, {"config_name": "business_ethics", "data_files": [{"split": "test", "path": "business_ethics/data-*"}]}, {"config_name": "clinical_knowledge", "data_files": [{"split": "test", "path": "clinical_knowledge/data-*"}]}, {"config_name": "college_biology", "data_files": [{"split": "test", "path": "college_biology/data-*"}]}, {"config_name": "college_chemistry", "data_files": [{"split": "test", "path": "college_chemistry/data-*"}]}, {"config_name": "college_computer_science", "data_files": [{"split": "test", "path": "college_computer_science/data-*"}]}, 
{"config_name": "college_mathematics", "data_files": [{"split": "test", "path": "college_mathematics/data-*"}]}, {"config_name": "college_medicine", "data_files": [{"split": "test", "path": "college_medicine/data-*"}]}, {"config_name": "college_physics", "data_files": [{"split": "test", "path": "college_physics/data-*"}]}, {"config_name": "computer_security", "data_files": [{"split": "test", "path": "computer_security/data-*"}]}, {"config_name": "conceptual_physics", "data_files": [{"split": "test", "path": "conceptual_physics/data-*"}]}, {"config_name": "econometrics", "data_files": [{"split": "test", "path": "econometrics/data-*"}]}, {"config_name": "electrical_engineering", "data_files": [{"split": "test", "path": "electrical_engineering/data-*"}]}, {"config_name": "elementary_mathematics", "data_files": [{"split": "test", "path": "elementary_mathematics/data-*"}]}, {"config_name": "formal_logic", "data_files": [{"split": "test", "path": "formal_logic/data-*"}]}, {"config_name": "global_facts", "data_files": [{"split": "test", "path": "global_facts/data-*"}]}, {"config_name": "high_school_biology", "data_files": [{"split": "test", "path": "high_school_biology/data-*"}]}, {"config_name": "high_school_chemistry", "data_files": [{"split": "test", "path": "high_school_chemistry/data-*"}]}, {"config_name": "high_school_computer_science", "data_files": [{"split": "test", "path": "high_school_computer_science/data-*"}]}, {"config_name": "high_school_european_history", "data_files": [{"split": "test", "path": "high_school_european_history/data-*"}]}, {"config_name": "high_school_geography", "data_files": [{"split": "test", "path": "high_school_geography/data-*"}]}, {"config_name": "high_school_government_and_politics", "data_files": [{"split": "test", "path": "high_school_government_and_politics/data-*"}]}, {"config_name": "high_school_macroeconomics", "data_files": [{"split": "test", "path": "high_school_macroeconomics/data-*"}]}, {"config_name": 
"high_school_mathematics", "data_files": [{"split": "test", "path": "high_school_mathematics/data-*"}]}, {"config_name": "high_school_microeconomics", "data_files": [{"split": "test", "path": "high_school_microeconomics/data-*"}]}, {"config_name": "high_school_physics", "data_files": [{"split": "test", "path": "high_school_physics/data-*"}]}, {"config_name": "high_school_psychology", "data_files": [{"split": "test", "path": "high_school_psychology/data-*"}]}, {"config_name": "high_school_statistics", "data_files": [{"split": "test", "path": "high_school_statistics/data-*"}]}, {"config_name": "high_school_us_history", "data_files": [{"split": "test", "path": "high_school_us_history/data-*"}]}, {"config_name": "high_school_world_history", "data_files": [{"split": "test", "path": "high_school_world_history/data-*"}]}, {"config_name": "human_aging", "data_files": [{"split": "test", "path": "human_aging/data-*"}]}, {"config_name": "human_sexuality", "data_files": [{"split": "test", "path": "human_sexuality/data-*"}]}, {"config_name": "international_law", "data_files": [{"split": "test", "path": "international_law/data-*"}]}, {"config_name": "jurisprudence", "data_files": [{"split": "test", "path": "jurisprudence/data-*"}]}, {"config_name": "logical_fallacies", "data_files": [{"split": "test", "path": "logical_fallacies/data-*"}]}, {"config_name": "machine_learning", "data_files": [{"split": "test", "path": "machine_learning/data-*"}]}, {"config_name": "management", "data_files": [{"split": "test", "path": "management/data-*"}]}, {"config_name": "marketing", "data_files": [{"split": "test", "path": "marketing/data-*"}]}, {"config_name": "medical_genetics", "data_files": [{"split": "test", "path": "medical_genetics/data-*"}]}, {"config_name": "miscellaneous", "data_files": [{"split": "test", "path": "miscellaneous/data-*"}]}, {"config_name": "moral_disputes", "data_files": [{"split": "test", "path": "moral_disputes/data-*"}]}, {"config_name": "moral_scenarios", 
"data_files": [{"split": "test", "path": "moral_scenarios/data-*"}]}, {"config_name": "nutrition", "data_files": [{"split": "test", "path": "nutrition/data-*"}]}, {"config_name": "philosophy", "data_files": [{"split": "test", "path": "philosophy/data-*"}]}, {"config_name": "prehistory", "data_files": [{"split": "test", "path": "prehistory/data-*"}]}, {"config_name": "professional_accounting", "data_files": [{"split": "test", "path": "professional_accounting/data-*"}]}, {"config_name": "professional_law", "data_files": [{"split": "test", "path": "professional_law/data-*"}]}, {"config_name": "professional_medicine", "data_files": [{"split": "test", "path": "professional_medicine/data-*"}]}, {"config_name": "professional_psychology", "data_files": [{"split": "test", "path": "professional_psychology/data-*"}]}, {"config_name": "public_relations", "data_files": [{"split": "test", "path": "public_relations/data-*"}]}, {"config_name": "security_studies", "data_files": [{"split": "test", "path": "security_studies/data-*"}]}, {"config_name": "sociology", "data_files": [{"split": "test", "path": "sociology/data-*"}]}, {"config_name": "us_foreign_policy", "data_files": [{"split": "test", "path": "us_foreign_policy/data-*"}]}, {"config_name": "virology", "data_files": [{"split": "test", "path": "virology/data-*"}]}, {"config_name": "world_religions", "data_files": [{"split": "test", "path": "world_religions/data-*"}]}], "license": "cc-by-4.0", "task_categories": ["question-answering"], "language": ["en"], "pretty_name": "MMLU-Redux-2.0", "size_categories": ["1K<n<10K"]} | false | null | 2024-11-07T15:38:08 | 9 | 9 | false | 63f54ebd32c36485c679f53b8e2f576d689b9b34 |
Dataset Card for MMLU-Redux-2.0
MMLU-Redux is a subset of 5,700 manually re-annotated questions across 57 MMLU subjects.
Dataset Details
Dataset Description
Each data point in MMLU-Redux contains seven columns:
question (str): The original MMLU question.
choices (List[str]): The original list of four choices associated with the question from the MMLU dataset.
answer (int): The MMLU ground truth label in the form of an array index between 0 and… See the full description on the dataset page: https://huggingface.co/datasets/edinburgh-dawg/mmlu-redux-2.0. | 192 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2406.04127",
"doi:10.57967/hf/3469",
"region:us"
] | 2024-08-17T01:23:03 | null | null |
|
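The MMLU-Redux schema above pairs the original integer `answer` label with the annotators' `error_type` and `correct_answer` columns. A minimal sketch of resolving a row's answer under that schema (the example row and the `resolve_answer` helper are hypothetical; `"wrong_groundtruth"` is assumed to be one of the dataset's error labels):

```python
# Hypothetical row following the seven-column schema described above;
# the question text and values are made up for illustration.
row = {
    "question": "Which gas makes up most of Earth's atmosphere?",
    "choices": ["Oxygen", "Nitrogen", "Carbon dioxide", "Argon"],
    "answer": 1,          # index into `choices`
    "error_type": "ok",   # "ok" means annotators found no error
    "source": "n/a",
    "correct_answer": None,
    "potential_reason": None,
}

def resolve_answer(row):
    # Map the integer label to choice text, preferring the annotators'
    # correction when the original ground truth was flagged as wrong.
    if row["error_type"] == "wrong_groundtruth" and row["correct_answer"]:
        return row["correct_answer"]
    return row["choices"][row["answer"]]

print(resolve_answer(row))  # -> Nitrogen
```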
674dc01bf413e32210acb235 | Rapidata/human-style-preferences-images | Rapidata | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "votes_image1", "dtype": "int64"}, {"name": "votes_image2", "dtype": "int64"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}, {"name": "detailed_results", "dtype": "string"}, {"name": "image1_path", "dtype": "string"}, {"name": "image2_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26229461236, "num_examples": 63752}], "download_size": 17935847407, "dataset_size": 26229461236}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "cdla-permissive-2.0", "task_categories": ["text-to-image", "image-to-text", "image-classification", "reinforcement-learning"], "language": ["en"], "tags": ["Human", "Preference", "country", "language", "flux", "midjourney", "dalle3", "stabeldiffusion", "alignment", "flux1.1", "flux1", "imagen3"], "size_categories": ["100K<n<1M"], "pretty_name": "imagen-3 vs. Flux-1.1-pro vs. Flux-1-pro vs. Dalle-3 vs. Midjourney-5.2 vs. Stabel-Diffusion-3 - Human Preference Dataset"} | false | null | 2025-01-10T21:59:31 | 12 | 9 | false | 79acd5ebcc535309c08d996ab1f88c01077a7b12 |
Rapidata Image Generation Preference Dataset
This dataset was collected in ~4 days using the Rapidata Python API, accessible to anyone and ideal for large-scale data annotation.
Explore our latest model rankings on our website.
If you get value from this dataset and would like to see more in the future, please consider liking it.
Overview
One of the largest human preference datasets for text-to-image models, this release contains over 1,200,000 human… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/human-style-preferences-images. | 92 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-classification",
"task_categories:reinforcement-learning",
"language:en",
"license:cdla-permissive-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"Human",
"Preference",
"country",
"language",
"flux",
"midjourney",
"dalle3",
"stabeldiffusion",
"alignment",
"flux1.1",
"flux1",
"imagen3"
] | 2024-12-02T14:11:39 | null | null |
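Each Rapidata row above records a pairwise comparison: two generated images (`model1`/`model2`) and the human preference counts for each side (`votes_image1`/`votes_image2`). A minimal sketch of aggregating those pairwise votes into per-model totals (the rows and the `vote_totals` helper are illustrative):

```python
from collections import defaultdict

# Hypothetical rows following the schema above; votes_image1/votes_image2
# are human preference counts for each side of a pairwise comparison.
rows = [
    {"model1": "flux-1.1-pro", "model2": "dalle-3",
     "votes_image1": 30, "votes_image2": 10},
    {"model1": "dalle-3", "model2": "midjourney-5.2",
     "votes_image1": 25, "votes_image2": 15},
]

def vote_totals(rows):
    # Sum raw preference votes per model across all comparisons.
    totals = defaultdict(int)
    for r in rows:
        totals[r["model1"]] += r["votes_image1"]
        totals[r["model2"]] += r["votes_image2"]
    return dict(totals)

print(vote_totals(rows))
# -> {'flux-1.1-pro': 30, 'dalle-3': 35, 'midjourney-5.2': 15}
```

In practice a ranking would normalize by the number of comparisons each model appears in, rather than using raw vote counts.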