This dataset is for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining".
Then we extract 6.5M keyframes and 0.75B text (ASR+OCR) tokens from these videos.
Our code can be found in [Multimodal-Textbook](https://github.com/DAMO-NLP-SG/multimodal_textbook/tree/master).
Note: We have uploaded the annotation file (`./multimodal_textbook.json`) and the image folder (`./dataset_images_interval_7.tar.gz`), which contains keyframes and processed ASR and OCR texts. For more details, please refer to [Using Multimodal Textbook](#using-multimodal-textbook).
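
The two files above can be used together: the JSON file holds the annotations and the tarball holds the keyframes they reference. The snippet below is a minimal loading sketch, not part of the released code — the `load_textbook` helper name and the assumption that the top-level JSON is a list of samples are ours.

```python
import json
import tarfile
from pathlib import Path

def load_textbook(json_path, images_tar=None, extract_dir="."):
    """Load the annotation samples from the JSON file.

    Optionally extract the keyframe image tarball alongside it.
    Assumes the top-level JSON object is a list of samples.
    """
    samples = json.loads(Path(json_path).read_text(encoding="utf-8"))
    if images_tar is not None:
        # Unpack e.g. dataset_images_interval_7.tar.gz next to the JSON.
        with tarfile.open(images_tar, "r:gz") as tar:
            tar.extractall(extract_dir)
    return samples
```

For example, `load_textbook("multimodal_textbook.json", "dataset_images_interval_7.tar.gz")` would return the sample list after unpacking the images into the current directory.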
<img src="./src/page_fig.png" alt="Image" style="width: 900px;">
<img src="./src/table.png" alt="Image" style="width: 900px;">
## Using Multimodal Textbook
### Description of Dataset
We provide the annotation file (a JSON file) and the corresponding image folder for the textbook:
- Dataset JSON file: `./multimodal_textbook.json` (600k samples, ~11 GB)