Column schema (dtype and observed value range or number of distinct values per column):

| Column | Dtype | Values |
|---|---|---|
| arxiv_id | string | lengths 10–10 |
| reached_out_success | float64 | 0–0 |
| reached_out_note | string | 0 classes |
| num_models | float64 | 0–0 |
| num_datasets | float64 | 0–0 |
| num_spaces | float64 | 0–0 |
| title | string | lengths 39–91 |
| github | string | 5 classes |
| github_stars | float64 | 3.2k–4.72k (nulls present) |
| conference_name | string | 3 classes |
| upvotes | int64 | 1–10 |
| num_comments | int64 | 0–5 |
| github_mention_hf | float64 | 0–1 |
| has_artifact | bool | 1 class |
| submitted_by | string | 1 class |
| github_issue_url | string | 0 classes |
| hf_pr_urls | string | 0 classes |
| date | string | 1 class |
| gemini_results | dict | nested parsing result |
| gemini_github_issue_url | string | 4 classes |
| gemini_github_url | string | 7 classes |
| gemini_model_name | string | 1 class |
| gemini_new_datasets | string | 2 classes |
| gemini_new_model_checkpoints | string | 3 classes |
| gemini_note | string | 4 classes |
| gemini_project_page_url | string | 4 classes |
| gemini_reaching_out | string | 6 classes |
| gemini_reasoning | string | lengths 516–1.65k |
| gemini_huggingface_pull_request_urls | null | – |
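Assuming the dataset is hosted on the Hugging Face Hub, a minimal sketch of loading and inspecting it with the `datasets` library might look as follows; the repository id and split name are placeholders, since neither is named in this dump.

```python
from datasets import load_dataset

# Placeholder repository id and split: this dump does not name the dataset
# repo, so substitute the real "owner/name" (and split) before running.
ds = load_dataset("your-hf-org-or-username/daily-papers-2023-05-19", split="train")

print(ds.features)  # column names and dtypes, matching the schema above
print(ds[0]["arxiv_id"], "|", ds[0]["title"], "|", ds[0]["gemini_note"])
```

The raw rows, one per paper, follow.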
2305.10688 | 0 | null | 0 | 0 | 0 | MolXPT: Wrapping Molecules with Text for Generative Pre-training | null | null | ACL2023 | 1 | 0 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "",
"github_url": "",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_CODE",
"project_page_url": "",
"reaching_out": "Gemini decided to not reach out due to no Github URL.",
"reasoning": "The MolXPT paper introduces a novel molecule-text language model (MolXPT) pre-trained on SMILES wrapped by text. The authors mention that the model is pre-trained on 8M wrapped sequences and 30M SMILES from PubChem. However, no Github or project page URL is provided in the abstract, paper page, or the first two pages of the PDF. This indicates that no code or artifacts are currently available. Hence, the note should be `NO_CODE`. It does not make sense to reach out because no code or artifacts are yet released."
} | gemini-2.0-flash-exp | [] | [] | NO_CODE | Gemini decided to not reach out due to no Github URL. | The MolXPT paper introduces a novel molecule-text language model (MolXPT) pre-trained on SMILES wrapped by text. The authors mention that the model is pre-trained on 8M wrapped sequences and 30M SMILES from PubChem. However, no Github or project page URL is provided in the abstract, paper page, or the first two pages of the PDF. This indicates that no code or artifacts are currently available. Hence, the note should be `NO_CODE`. It does not make sense to reach out because no code or artifacts are yet released. | null |
|||
2305.10874 | 0 | null | 0 | 0 | 0 | VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation | null | null | null | 1 | 0 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "https://github.com/daooshee/HD-VG-130M/issues/6",
"github_url": "https://github.com/daooshee/hd-vg-130m",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[{'dataset_name': 'HD-VG-130M', 'hosting_url': 'https://drive.google.com/drive/folders/154S6raNg9NpDGQRlRhhAaYcAx5xq1Ok8', 'task_category': 'text-to-video'}]",
"new_model_checkpoints": "[]",
"note": "NEW_ARTIFACTS",
"project_page_url": "",
"reaching_out": "{'title': 'Release HD-VG-130M dataset on Hugging Face', 'message': 'Hello @daooshee 🤗 \\n\\nI\\'m Niels and work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face\\'s daily papers as yours got featured: https://huggingface.co/papers/2305.10874.\\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your dataset for instance),\\nyou can also claim the paper as yours which will show up on your public profile at HF.\\n\\nWould you like to host the datasets you\\'ve released on https://huggingface.co/datasets?\\nI see you\\'re using Google Drive for it. Hosting on Hugging Face will give you more visibility/enable better discoverability, and will also allow people to do:\\n\\n```python\\nfrom datasets import load_dataset\\n\\ndataset = load_dataset(\"your-hf-org-or-username/your-dataset\")\\n```\\n\\nIf you\\'re down, leaving a guide here: https://huggingface.co/docs/datasets/loading.\\nWe also support Webdataset, useful for image/video datasets: https://huggingface.co/docs/datasets/en/loading#webdataset.\\n\\nBesides that, there\\'s the [dataset viewer](https://huggingface.co/docs/hub/en/datasets-viewer) which allows people to quickly explore the first few rows of the data in the browser.\\n\\nAfter uploaded, we can also link the datasets to the paper page (read [here](https://huggingface.co/docs/hub/en/model-cards#linking-a-paper)) so people can discover your work.\\n\\nLet me know if you\\'re interested/need any guidance.\\n\\nKind regards,\\n\\nNiels'}",
"reasoning": "The paper introduces \"VideoFactory\", a text-to-video generation approach, and also mentions a new large-scale video dataset called HD-VG-130M. The abstract mentions that this dataset is open-source and contains 130 million text-video pairs.\nA Github repository is available, and its README contains the same information, specifying a Google Drive link for the dataset.\nThe README also mentions a higher quality subset, which will be released soon, but not yet.\n\nBased on the above information, I can see that a new dataset (HD-VG-130M) is being introduced and is already available for download via a Google Drive link mentioned in the Github README. There are no model checkpoints mentioned in the paper or Github README. Therefore, the note will be `NEW_ARTIFACTS`.\nThe dataset is a text-to-video dataset so the relevant task category is \"text-to-video\".\nIt makes sense to reach out to the authors to ask them to host this dataset on the Hugging Face hub, as it is not yet available there."
} | https://github.com/daooshee/HD-VG-130M/issues/6 | https://github.com/daooshee/hd-vg-130m | gemini-2.0-flash-exp | [{'dataset_name': 'HD-VG-130M', 'hosting_url': 'https://drive.google.com/drive/folders/154S6raNg9NpDGQRlRhhAaYcAx5xq1Ok8', 'task_category': 'text-to-video'}] | [] | NEW_ARTIFACTS | {'title': 'Release HD-VG-130M dataset on Hugging Face', 'message': 'Hello @daooshee 🤗 \n\nI\'m Niels and work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face\'s daily papers as yours got featured: https://huggingface.co/papers/2305.10874.\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your dataset for instance),\nyou can also claim the paper as yours which will show up on your public profile at HF.\n\nWould you like to host the datasets you\'ve released on https://huggingface.co/datasets?\nI see you\'re using Google Drive for it. Hosting on Hugging Face will give you more visibility/enable better discoverability, and will also allow people to do:\n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset("your-hf-org-or-username/your-dataset")\n```\n\nIf you\'re down, leaving a guide here: https://huggingface.co/docs/datasets/loading.\nWe also support Webdataset, useful for image/video datasets: https://huggingface.co/docs/datasets/en/loading#webdataset.\n\nBesides that, there\'s the [dataset viewer](https://huggingface.co/docs/hub/en/datasets-viewer) which allows people to quickly explore the first few rows of the data in the browser.\n\nAfter uploaded, we can also link the datasets to the paper page (read [here](https://huggingface.co/docs/hub/en/model-cards#linking-a-paper)) so people can discover your work.\n\nLet me know if you\'re interested/need any guidance.\n\nKind regards,\n\nNiels'} | The paper introduces "VideoFactory", a text-to-video generation approach, and also mentions a new large-scale video dataset called HD-VG-130M. The abstract mentions that this dataset is open-source and contains 130 million text-video pairs.
A Github repository is available, and its README contains the same information, specifying a Google Drive link for the dataset.
The README also mentions a higher quality subset, which will be released soon, but not yet.
Based on the above information, I can see that a new dataset (HD-VG-130M) is being introduced and is already available for download via a Google Drive link mentioned in the Github README. There are no model checkpoints mentioned in the paper or Github README. Therefore, the note will be `NEW_ARTIFACTS`.
The dataset is a text-to-video dataset so the relevant task category is "text-to-video".
It makes sense to reach out to the authors to ask them to host this dataset on the Hugging Face hub, as it is not yet available there. | null |
|
2305.10474 | 0 | null | 0 | 0 | 0 | Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models | null | null | null | 1 | 0 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "",
"github_url": "",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_CODE_YET",
"project_page_url": "https://research.nvidia.com/labs/dir/pyoco/",
"reaching_out": "Gemini decided to not reach out due to no Github URL.",
"reasoning": "The PYoCo paper introduces a novel video noise prior for finetuning text-to-image diffusion models for text-to-video synthesis. The authors achieve SOTA results on several benchmarks. A project page is available which showcases the results, methodology, and model architecture.\n\nSeveral colleagues marked the paper as `NO_CODE` or `NO_ARTIFACTS` (colleague 1, 2, and 3) which is not the correct option, as there is a clear evidence in the project page: \"code and weights will be released soon\". Colleague 4 and 5 chose `NO_CODE_YET`, which is the correct choice.\nSince there is no actual code and artifacts available yet, `NO_CODE_YET` is the most appropriate note. It also means we need to reach out to the authors."
} | gemini-2.0-flash-exp | [] | [] | NO_CODE_YET | https://research.nvidia.com/labs/dir/pyoco/ | Gemini decided to not reach out due to no Github URL. | The PYoCo paper introduces a novel video noise prior for finetuning text-to-image diffusion models for text-to-video synthesis. The authors achieve SOTA results on several benchmarks. A project page is available which showcases the results, methodology, and model architecture.
Several colleagues marked the paper as `NO_CODE` or `NO_ARTIFACTS` (colleague 1, 2, and 3) which is not the correct option, as there is a clear evidence in the project page: "code and weights will be released soon". Colleague 4 and 5 chose `NO_CODE_YET`, which is the correct choice.
Since there is no actual code and artifacts available yet, `NO_CODE_YET` is the most appropriate note. It also means we need to reach out to the authors. | null |
||
2305.10841 | 0 | null | 0 | 0 | 0 | GETMusic: Generating Any Music Tracks with a Unified Representation and Diffusion Framework | https://github.com/microsoft/muzic | null | null | 2 | 1 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "",
"github_url": "https://github.com/microsoft/muzic",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_ARTIFACTS",
"project_page_url": "",
"reaching_out": "{'title': '', 'message': 'No need to reach out since no artifacts are being released.'}",
"reasoning": "The paper introduces GETMusic, a framework for symbolic music generation with a novel music representation GETScore and a diffusion model GETDiff. The goal is to generate any target tracks based on the source tracks. A Github repository is linked in the paper. The Github README provides an overview of the Muzic project and lists several related works, including GETMusic. However, it does not explicitly mention pre-trained models or new datasets for GETMusic. The README describes a collection of research works, and it mentions the release of `GETMusic`, but without any specific information about a new dataset or hosted model checkpoints. Hence, all colleagues chose `NO_ARTIFACTS`, which makes sense. There's no need to reach out since no new artifacts are being released."
} | https://github.com/microsoft/muzic | gemini-2.0-flash-exp | [] | [] | NO_ARTIFACTS | {'title': '', 'message': 'No need to reach out since no artifacts are being released.'} | The paper introduces GETMusic, a framework for symbolic music generation with a novel music representation GETScore and a diffusion model GETDiff. The goal is to generate any target tracks based on the source tracks. A Github repository is linked in the paper. The Github README provides an overview of the Muzic project and lists several related works, including GETMusic. However, it does not explicitly mention pre-trained models or new datasets for GETMusic. The README describes a collection of research works, and it mentions the release of `GETMusic`, but without any specific information about a new dataset or hosted model checkpoints. Hence, all colleagues chose `NO_ARTIFACTS`, which makes sense. There's no need to reach out since no new artifacts are being released. | null |
||
2305.10722 | 0 | null | 0 | 0 | 0 | Discriminative Diffusion Models as Few-shot Vision and Language Learners | https://github.com/eric-ai-lab/dsd | null | null | 3 | 0 | 1 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "https://github.com/eric-ai-lab/Discffusion/issues/2",
"github_url": "https://github.com/eric-ai-lab/dsd",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[{'model_name': 'DSD - ComVG', 'hosting_url': 'https://drive.google.com/drive/folders/13-v3zShNMVpURBceqJu5T6iGXDBJqL6p', 'pipeline_tag': 'image-text-to-text'}, {'model_name': 'DSD - Refcocog', 'hosting_url': 'https://drive.google.com/drive/folders/13-v3zShNMVpURBceqJu5T6iGXDBJqL6p', 'pipeline_tag': 'image-text-to-text'}, {'model_name': 'DSD - VQA', 'hosting_url': 'https://drive.google.com/drive/folders/13-v3zShNMVpURBceqJu5T6iGXDBJqL6p', 'pipeline_tag': 'image-text-to-text'}]",
"note": "NEW_ARTIFACTS",
"project_page_url": "",
"reaching_out": "{'title': 'Release artifacts (models, dataset) on Hugging Face', 'message': 'Hi @jkooy,\\n\\nNiels here from the open-source team at Hugging Face. I discovered your work through Hugging Face\\'s daily papers as yours got featured: https://huggingface.co/papers/2305.10722.\\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your models, datasets or demo for instance), you can also claim\\nthe paper as yours which will show up on your public profile at HF.\\n\\nIt\\'d be great to make the checkpoints and dataset available on the 🤗 hub, to improve their discoverability/visibility.\\nWe can add tags so that people find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.\\n\\n## Uploading models\\n\\nSee here for a guide: https://huggingface.co/docs/hub/models-uploading. \\n\\nIn this case, we could leverage the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) class which adds `from_pretrained` and `push_to_hub` to any custom `nn.Module`. Alternatively, one can leverages the [hf_hub_download](https://huggingface.co/docs/huggingface_hub/en/guides/download#download-a-single-file) one-liner to download a checkpoint from the hub. \\n\\nWe encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats also work. We can then also link the checkpoints to the paper page.\\n\\n## Uploading dataset\\n\\nWould be awesome to make the dataset available on 🤗 , so that people can do:\\n\\n```python\\nfrom datasets import load_dataset\\n\\ndataset = load_dataset(\"your-hf-org-or-username/your-dataset\")\\n```\\nSee here for a guide: https://huggingface.co/docs/datasets/loading.\\n\\nBesides that, there\\'s the [dataset viewer](https://huggingface.co/docs/hub/en/datasets-viewer) which allows people to quickly explore the first few rows of the data in the browser.\\n\\nLet me know if you\\'re interested/need any help regarding this!\\n\\nCheers,\\n\\nNiels\\nML Engineer @ HF 🤗'}",
"reasoning": "The paper introduces Discffusion, a method that leverages pre-trained text-to-image diffusion models for discriminative tasks such as image-text matching. The Github repository contains the code for replicating the experiments.\n\nThe Github README indicates that pre-trained checkpoints are available through a Google Drive link. Additionally, the README details the datasets used (ComVG, RefCOCO, VQA) and includes a Google Drive link to download some of the dataset components. It is important to note that these datasets are not introduced by the paper, but rather used for evaluation.\nThe authors do not introduce new datasets for this task.\nThe task at hand is image-text matching, which is related to an image-to-text task.\nBased on the analysis, I conclude that the paper introduces new pre-trained model checkpoints (DSD for ComVG, DSD for RefCOCO and DSD for VQA). Since this model is used for image-text matching, the relevant pipeline tag should be `image-text-to-text`.\nHence, the most appropriate note is `NEW_ARTIFACTS`. It makes sense to reach out to the authors."
} | https://github.com/eric-ai-lab/Discffusion/issues/2 | https://github.com/eric-ai-lab/dsd | gemini-2.0-flash-exp | [] | [{'model_name': 'DSD - ComVG', 'hosting_url': 'https://drive.google.com/drive/folders/13-v3zShNMVpURBceqJu5T6iGXDBJqL6p', 'pipeline_tag': 'image-text-to-text'}, {'model_name': 'DSD - Refcocog', 'hosting_url': 'https://drive.google.com/drive/folders/13-v3zShNMVpURBceqJu5T6iGXDBJqL6p', 'pipeline_tag': 'image-text-to-text'}, {'model_name': 'DSD - VQA', 'hosting_url': 'https://drive.google.com/drive/folders/13-v3zShNMVpURBceqJu5T6iGXDBJqL6p', 'pipeline_tag': 'image-text-to-text'}] | NEW_ARTIFACTS | {'title': 'Release artifacts (models, dataset) on Hugging Face', 'message': 'Hi @jkooy,\n\nNiels here from the open-source team at Hugging Face. I discovered your work through Hugging Face\'s daily papers as yours got featured: https://huggingface.co/papers/2305.10722.\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your models, datasets or demo for instance), you can also claim\nthe paper as yours which will show up on your public profile at HF.\n\nIt\'d be great to make the checkpoints and dataset available on the 🤗 hub, to improve their discoverability/visibility.\nWe can add tags so that people find them when filtering https://huggingface.co/models and https://huggingface.co/datasets.\n\n## Uploading models\n\nSee here for a guide: https://huggingface.co/docs/hub/models-uploading. \n\nIn this case, we could leverage the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) class which adds `from_pretrained` and `push_to_hub` to any custom `nn.Module`. Alternatively, one can leverages the [hf_hub_download](https://huggingface.co/docs/huggingface_hub/en/guides/download#download-a-single-file) one-liner to download a checkpoint from the hub. \n\nWe encourage researchers to push each model checkpoint to a separate model repository, so that things like download stats also work. We can then also link the checkpoints to the paper page.\n\n## Uploading dataset\n\nWould be awesome to make the dataset available on 🤗 , so that people can do:\n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset("your-hf-org-or-username/your-dataset")\n```\nSee here for a guide: https://huggingface.co/docs/datasets/loading.\n\nBesides that, there\'s the [dataset viewer](https://huggingface.co/docs/hub/en/datasets-viewer) which allows people to quickly explore the first few rows of the data in the browser.\n\nLet me know if you\'re interested/need any help regarding this!\n\nCheers,\n\nNiels\nML Engineer @ HF 🤗'} | The paper introduces Discffusion, a method that leverages pre-trained text-to-image diffusion models for discriminative tasks such as image-text matching. The Github repository contains the code for replicating the experiments.
The Github README indicates that pre-trained checkpoints are available through a Google Drive link. Additionally, the README details the datasets used (ComVG, RefCOCO, VQA) and includes a Google Drive link to download some of the dataset components. It is important to note that these datasets are not introduced by the paper, but rather used for evaluation.
The authors do not introduce new datasets for this task.
The task at hand is image-text matching, which is related to an image-to-text task.
Based on the analysis, I conclude that the paper introduces new pre-trained model checkpoints (DSD for ComVG, DSD for RefCOCO and DSD for VQA). Since this model is used for image-text matching, the relevant pipeline tag should be `image-text-to-text`.
Hence, the most appropriate note is `NEW_ARTIFACTS`. It makes sense to reach out to the authors. | null |
|
2305.11175 | 0 | null | 0 | 0 | 0 | VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks | https://github.com/opengvlab/interngpt | 3,198 | NeurIPS2023 | 3 | 5 | 1 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "https://github.com/OpenGVLab/InternGPT/issues/71",
"github_url": "https://github.com/opengvlab/interngpt",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_CODE_YET",
"project_page_url": "",
"reaching_out": "{'title': 'Release VisionLLM on Hugging Face', 'message': \"Hello @liu-zhy 🤗\\n\\nI'm Niels and work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers as yours got featured: https://huggingface.co/papers/2305.11175.\\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your models for instance),\\nyou can also claim the paper as yours which will show up on your public profile at HF.\\n\\nLooking forward to the code and model release. Would you like to host the model you've pre-trained on https://huggingface.co/models?\\nHosting on Hugging Face will give you more visibility/enable better discoverability. We can add tags in the model cards so that people find the models easier,\\nlink it to the paper page, etc.\\n\\nIf you're down, leaving a guide [here](https://huggingface.co/docs/hub/models-uploading). If it's a custom PyTorch model, you can use the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)\\nclass which adds `from_pretrained` and `push_to_hub` to the model which lets you to upload the model and people to download and use models right away.\\nIf you do not want this and directly want to upload model through UI or however you want, people can also use [hf_hub_download](https://huggingface.co/docs/huggingface_hub/en/guides/download#download-a-single-file).\\n\\nAfter uploaded, we can also link the models to the paper page (read [here](https://huggingface.co/docs/hub/en/model-cards#linking-a-paper)) so people can discover your model.\\n\\nYou can also build a demo for your model on [Spaces](https://huggingface.co/spaces), we can provide you a ZeroGPU [grant](https://huggingface.co/docs/hub/en/spaces-gpus#community-gpu-grants),\\nwhich gives you A100 GPUs for free.\\n\\nLet me know if you're interested/need any guidance :)\\n\\nKind regards,\\n\\nNiels\"}",
"reasoning": "The VisionLLM paper introduces a new framework that aligns vision-centric tasks with language tasks using an LLM-based approach. The abstract clearly states that \"The code and demo shall be released\", with links provided to the respective Github repositories: https://github.com/OpenGVLab/VisionLLM for the code and https://github.com/OpenGVLab/InternGPT for the demo.\n\nThe Github README of the InternGPT repository does not contain any new artifacts, but it does mention that \"we will continue to update it\". The abstract of the VisionLLM paper explicitly states that the code and demo will be released. Therefore, this indicates that code and potentially model checkpoints will be released in the future, but they are not currently available. Therefore, the correct note is `NO_CODE_YET`. Since there's no artifact released yet, it would be useful to reach out to the authors.\n\nAll the colleagues also reached the conclusion that the note should be `NO_CODE_YET`."
} | https://github.com/OpenGVLab/InternGPT/issues/71 | https://github.com/opengvlab/interngpt | gemini-2.0-flash-exp | [] | [] | NO_CODE_YET | {'title': 'Release VisionLLM on Hugging Face', 'message': "Hello @liu-zhy 🤗\n\nI'm Niels and work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers as yours got featured: https://huggingface.co/papers/2305.11175.\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your models for instance),\nyou can also claim the paper as yours which will show up on your public profile at HF.\n\nLooking forward to the code and model release. Would you like to host the model you've pre-trained on https://huggingface.co/models?\nHosting on Hugging Face will give you more visibility/enable better discoverability. We can add tags in the model cards so that people find the models easier,\nlink it to the paper page, etc.\n\nIf you're down, leaving a guide [here](https://huggingface.co/docs/hub/models-uploading). If it's a custom PyTorch model, you can use the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)\nclass which adds `from_pretrained` and `push_to_hub` to the model which lets you to upload the model and people to download and use models right away.\nIf you do not want this and directly want to upload model through UI or however you want, people can also use [hf_hub_download](https://huggingface.co/docs/huggingface_hub/en/guides/download#download-a-single-file).\n\nAfter uploaded, we can also link the models to the paper page (read [here](https://huggingface.co/docs/hub/en/model-cards#linking-a-paper)) so people can discover your model.\n\nYou can also build a demo for your model on [Spaces](https://huggingface.co/spaces), we can provide you a ZeroGPU [grant](https://huggingface.co/docs/hub/en/spaces-gpus#community-gpu-grants),\nwhich gives you A100 GPUs for free.\n\nLet me know if you're interested/need any guidance :)\n\nKind regards,\n\nNiels"} | The VisionLLM paper introduces a new framework that aligns vision-centric tasks with language tasks using an LLM-based approach. The abstract clearly states that "The code and demo shall be released", with links provided to the respective Github repositories: https://github.com/OpenGVLab/VisionLLM for the code and https://github.com/OpenGVLab/InternGPT for the demo.
The Github README of the InternGPT repository does not contain any new artifacts, but it does mention that "we will continue to update it". The abstract of the VisionLLM paper explicitly states that the code and demo will be released. Therefore, this indicates that code and potentially model checkpoints will be released in the future, but they are not currently available. Therefore, the correct note is `NO_CODE_YET`. Since there's no artifact released yet, it would be useful to reach out to the authors.
All the colleagues also reached the conclusion that the note should be `NO_CODE_YET`. | null |
|
2305.10912 | 0 | null | 0 | 0 | 0 | A Generalist Dynamics Model for Control | null | null | null | 1 | 0 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "",
"github_url": "",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_CODE",
"project_page_url": "",
"reaching_out": "Gemini decided to not reach out due to no Github URL.",
"reasoning": "The paper \"A Generalist Dynamics Model for Control\" investigates the use of transformer sequence models as dynamics models (TDMs) for control. The authors demonstrate the strong generalization capabilities of TDMs to unseen environments.\n\nThe provided information includes the Arxiv abstract, the paper page comments, and the first two pages of the Arxiv PDF, but no GitHub README or project page. There is no mention of specific model checkpoints or datasets being released in any of this material. The core focus is on demonstrating a methodology and its effectiveness, not releasing specific models or data.\n\nBased on the lack of any evidence for released code and/or artifacts, the most appropriate note is \"NO_CODE\". There is no indication that code or models/datasets are planned to be released either.\n\nIt therefore makes little sense to reach out to the authors at this stage, as there's no indication they are planning to release anything."
} | gemini-2.0-flash-exp | [] | [] | NO_CODE | Gemini decided to not reach out due to no Github URL. | The paper "A Generalist Dynamics Model for Control" investigates the use of transformer sequence models as dynamics models (TDMs) for control. The authors demonstrate the strong generalization capabilities of TDMs to unseen environments.
The provided information includes the Arxiv abstract, the paper page comments, and the first two pages of the Arxiv PDF, but no GitHub README or project page. There is no mention of specific model checkpoints or datasets being released in any of this material. The core focus is on demonstrating a methodology and its effectiveness, not releasing specific models or data.
Based on the lack of any evidence for released code and/or artifacts, the most appropriate note is "NO_CODE". There is no indication that code or models/datasets are planned to be released either.
It therefore makes little sense to reach out to the authors at this stage, as there's no indication they are planning to release anything. | null |
|||
2305.10434 | 0 | null | 0 | 0 | 0 | Learning the Visualness of Text Using Large Vision-Language Models | null | null | EMNLP2023 | 2 | 0 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "",
"github_url": "",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_CODE_YET",
"project_page_url": "https://gaurav22verma.github.io/text-visualness/",
"reaching_out": "Gemini decided to not reach out due to no Github URL.",
"reasoning": "The paper introduces a new task of identifying the visualness of text and curates a dataset of 3,620 English sentences with visualness scores, named TImeD. The authors also propose a fine-tuning strategy for vision-language models like CLIP. There is no mention of releasing model checkpoints, but the project page does mention \"resources coming soon\". The abstract also suggests that the code and dataset will be available. There is no Github URL provided, or a direct link to a dataset in the project page, which is not a hosting URL anyway.\n\nGiven the project page mentions \"resources coming soon\" and a new dataset is introduced, the correct note is `NO_CODE_YET`. It's also the most frequent answer given by my colleagues (3 out of 5), hence it makes sense to choose `NO_CODE_YET`.\n\nIt makes sense to reach out to the authors given that they intend to release code and datasets."
} | gemini-2.0-flash-exp | [] | [] | NO_CODE_YET | https://gaurav22verma.github.io/text-visualness/ | Gemini decided to not reach out due to no Github URL. | The paper introduces a new task of identifying the visualness of text and curates a dataset of 3,620 English sentences with visualness scores, named TImeD. The authors also propose a fine-tuning strategy for vision-language models like CLIP. There is no mention of releasing model checkpoints, but the project page does mention "resources coming soon". The abstract also suggests that the code and dataset will be available. There is no Github URL provided, or a direct link to a dataset in the project page, which is not a hosting URL anyway.
Given the project page mentions "resources coming soon" and a new dataset is introduced, the correct note is `NO_CODE_YET`. It's also the most frequent answer given by my colleagues (3 out of 5), hence it makes sense to choose `NO_CODE_YET`.
It makes sense to reach out to the authors given that they intend to release code and datasets. | null |
||
2305.10763 | 0 | null | 0 | 0 | 0 | CLAPSpeech: Learning Prosody from Text Context with Contrastive Language-Audio Pre-training | null | null | ACL2023 | 3 | 4 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "",
"github_url": "",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_CODE_YET",
"project_page_url": "https://clapspeech.github.io/",
"reaching_out": "Gemini decided to not reach out due to no Github URL.",
"reasoning": "The paper introduces CLAPSpeech, a cross-modal contrastive pre-training framework for improved prosody learning in text-to-speech (TTS) models. The abstract and project page both mention that source code and audio samples are available at the provided project page URL. However, there is no link to a Github repository.\n\nThe project page contains audio samples and text snippets, showcasing the CLAPSpeech method's effects on various datasets (LJSpeech, Biaobei and LibriTTS) and with different TTS baselines (FastSpeech 2 and PortaSpeech). It also includes a case study on fine-grained prosody transfer. Despite mentioning that source code is available, the page only provides audio samples and demos. No links to downloadable code or pre-trained model weights are available.\n\nThe paper page comments suggest that the community also noticed that there is no released code and are therefore asking for it to be released on Hugging Face, further confirming the absence of a publicly accessible code repository or artifacts.\n\nBased on this information, it's clear that code and models are intended to be released, as the authors mentioned source code to be available, but it is not yet available. Therefore, the `NO_CODE_YET` note is appropriate.\n\nGiven that there is a high degree of consistency across multiple colleagues' results (3 out of 5 colleagues picked `NO_CODE_YET` or equivalent), it makes sense to reach out to the authors to ask them to publish their pre-trained model checkpoints and/or datasets."
} | gemini-2.0-flash-exp | [] | [] | NO_CODE_YET | https://clapspeech.github.io/ | Gemini decided to not reach out due to no Github URL. | The paper introduces CLAPSpeech, a cross-modal contrastive pre-training framework for improved prosody learning in text-to-speech (TTS) models. The abstract and project page both mention that source code and audio samples are available at the provided project page URL. However, there is no link to a Github repository.
The project page contains audio samples and text snippets, showcasing the CLAPSpeech method's effects on various datasets (LJSpeech, Biaobei and LibriTTS) and with different TTS baselines (FastSpeech 2 and PortaSpeech). It also includes a case study on fine-grained prosody transfer. Despite mentioning that source code is available, the page only provides audio samples and demos. No links to downloadable code or pre-trained model weights are available.
The paper page comments suggest that the community also noticed that there is no released code and are therefore asking for it to be released on Hugging Face, further confirming the absence of a publicly accessible code repository or artifacts.
Based on this information, it's clear that code and models are intended to be released, as the authors mentioned source code to be available, but it is not yet available. Therefore, the `NO_CODE_YET` note is appropriate.
Given that there is a high degree of consistency across multiple colleagues' results (3 out of 5 colleagues picked `NO_CODE_YET` or equivalent), it makes sense to reach out to the authors to ask them to publish their pre-trained model checkpoints and/or datasets. | null |
||
2305.10601 | 0 | null | 0 | 0 | 0 | Tree of Thoughts: Deliberate Problem Solving with Large Language Models | https://github.com/ysymyth/tree-of-thought-llm | 4,721 | NeurIPS2023 | 10 | 1 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": "",
"github_url": "https://github.com/ysymyth/tree-of-thought-llm",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[]",
"note": "NO_ARTIFACTS",
"project_page_url": "",
"reaching_out": "{'title': '', 'message': 'No need to reach out since no artifacts are being released.'}",
"reasoning": "The provided paper introduces the Tree of Thoughts (ToT) framework for language model inference, which enables exploration over coherent units of text (thoughts). It aims to improve problem-solving abilities of language models on tasks that require planning and search. The authors provide a link to their Github repository, where the code and prompts are available. There is no mention of a project page URL in the Arxiv abstract or paper page comments.\nThe Github README details how to set up the environment, install the package, and run the code. It also includes a section \"Paper Trajectories\" which specifies that the log files of the paper's experiments are available in the `logs/` folder. However, the README does not mention any new datasets or pre-trained model checkpoints being released. Therefore, the paper introduces a novel methodology and the associated code, but no new artifacts.\nSince all of my colleagues also reached the conclusion that the note should be `NO_ARTIFACTS`, I will also choose `NO_ARTIFACTS`."
} | https://github.com/ysymyth/tree-of-thought-llm | gemini-2.0-flash-exp | [] | [] | NO_ARTIFACTS | {'title': '', 'message': 'No need to reach out since no artifacts are being released.'} | The provided paper introduces the Tree of Thoughts (ToT) framework for language model inference, which enables exploration over coherent units of text (thoughts). It aims to improve problem-solving abilities of language models on tasks that require planning and search. The authors provide a link to their Github repository, where the code and prompts are available. There is no mention of a project page URL in the Arxiv abstract or paper page comments.
The Github README details how to set up the environment, install the package, and run the code. It also includes a section "Paper Trajectories" which specifies that the log files of the paper's experiments are available in the `logs/` folder. However, the README does not mention any new datasets or pre-trained model checkpoints being released. Therefore, the paper introduces a novel methodology and the associated code, but no new artifacts.
Since all of my colleagues also reached the conclusion that the note should be `NO_ARTIFACTS`, I will also choose `NO_ARTIFACTS`. | null |
||
2305.11173 | 0 | null | 0 | 0 | 0 | Going Denser with Open-Vocabulary Part Segmentation | https://github.com/facebookresearch/vlpart | null | null | 2 | 1 | 0 | false | akhaliq | null | null | 2023-05-19 | {
"github_issue_url": null,
"github_url": "https://github.com/facebookresearch/vlpart",
"model_name": "gemini-2.0-flash-exp",
"new_datasets": "[]",
"new_model_checkpoints": "[{'model_name': 'VLPart', 'hosting_url': '', 'pipeline_tag': 'image-segmentation'}]",
"note": "NEW_ARTIFACTS",
"project_page_url": "",
"reaching_out": "{'title': 'Release VLPart on Hugging Face', 'message': \"Hello @PeizeSun 🤗 \\n\\nI'm Niels and work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers as yours got featured: https://huggingface.co/papers/2305.11173.\\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your models for instance),\\nyou can also claim the paper as yours which will show up on your public profile at HF.\\n\\nWould you like to host the model you've pre-trained on https://huggingface.co/models?\\nHosting on Hugging Face will give you more visibility/enable better discoverability. We can add tags in the model cards so that people find the models easier,\\nlink it to the paper page, etc.\\n\\nIf you're down, leaving a guide [here](https://huggingface.co/docs/hub/models-uploading). If it's a custom PyTorch model, you can use the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)\\nclass which adds `from_pretrained` and `push_to_hub` to the model which lets you to upload the model and people to download and use models right away.\\nIf you do not want this and directly want to upload model through UI or however you want, people can also use [hf_hub_download](https://huggingface.co/docs/huggingface_hub/en/guides/download#download-a-single-file).\\n\\nAfter uploaded, we can also link the models to the paper page (read [here](https://huggingface.co/docs/hub/en/model-cards#linking-a-paper)) so people can discover your model.\\n\\nYou can also build a demo for your model on [Spaces](https://huggingface.co/spaces), we can provide you a ZeroGPU [grant](https://huggingface.co/docs/hub/en/spaces-gpus#community-gpu-grants),\\nwhich gives you A100 GPUs for free.\\n\\nLet me know if you're interested/need any guidance :)\\n\\nKind regards,\\n\\nNiels\"}",
"reasoning": "The paper introduces a new detector with the ability to predict both open-vocabulary objects and their part segmentation. Several colleagues identified the presence of a \"Model Zoo\" in the Github README, indicating that pre-trained weights are available. There is no clear evidence of new datasets being introduced. Most colleagues correctly assigned the `NEW_ARTIFACTS` note, and all found at least one model checkpoint present, but with different `pipeline_tag` and `hosting_url` values.\nColleague 1 and 4 assigned `image-segmentation` and `object-detection` as the `pipeline_tag`, whereas colleague 2 assigned `image-to-text`. The model takes an image as input and produces an image segmentation mask as output, hence `image-segmentation` seems like the right pipeline tag. Colleague 1 did not include any hosting url, colleague 2 included a link to the Model Zoo file, and colleague 4 assigned an empty string as hosting URL. Since we do not want to have hosting URLs which are not direct links to model weights, we set the hosting URL to \"\".\nColleague 3 correctly identified the model zoo, however did not add any new_model_checkpoints, which is an error. Colleague 5 assigned `NO_CODE_YET`, because it did not identify any hosting URLs. Since a model zoo is mentioned in the README, and the code will be made available, it makes more sense to choose `NEW_ARTIFACTS` over `NO_CODE_YET`.\nThe most common note is thus `NEW_ARTIFACTS`, the most common `pipeline_tag` is `image-segmentation`, and the most common `hosting_url` is \"\". Since we have a link to the model zoo, it makes sense to reach out and ask the authors to upload the models to the hub."
} | null | https://github.com/facebookresearch/vlpart | gemini-2.0-flash-exp | [] | [{'model_name': 'VLPart', 'hosting_url': '', 'pipeline_tag': 'image-segmentation'}] | NEW_ARTIFACTS | {'title': 'Release VLPart on Hugging Face', 'message': "Hello @PeizeSun 🤗 \n\nI'm Niels and work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers as yours got featured: https://huggingface.co/papers/2305.11173.\nThe paper page lets people discuss about your paper and lets them find artifacts about it (your models for instance),\nyou can also claim the paper as yours which will show up on your public profile at HF.\n\nWould you like to host the model you've pre-trained on https://huggingface.co/models?\nHosting on Hugging Face will give you more visibility/enable better discoverability. We can add tags in the model cards so that people find the models easier,\nlink it to the paper page, etc.\n\nIf you're down, leaving a guide [here](https://huggingface.co/docs/hub/models-uploading). If it's a custom PyTorch model, you can use the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)\nclass which adds `from_pretrained` and `push_to_hub` to the model which lets you to upload the model and people to download and use models right away.\nIf you do not want this and directly want to upload model through UI or however you want, people can also use [hf_hub_download](https://huggingface.co/docs/huggingface_hub/en/guides/download#download-a-single-file).\n\nAfter uploaded, we can also link the models to the paper page (read [here](https://huggingface.co/docs/hub/en/model-cards#linking-a-paper)) so people can discover your model.\n\nYou can also build a demo for your model on [Spaces](https://huggingface.co/spaces), we can provide you a ZeroGPU [grant](https://huggingface.co/docs/hub/en/spaces-gpus#community-gpu-grants),\nwhich gives you A100 GPUs for free.\n\nLet me know if you're interested/need any guidance :)\n\nKind regards,\n\nNiels"} | The paper introduces a new detector with the ability to predict both open-vocabulary objects and their part segmentation. Several colleagues identified the presence of a "Model Zoo" in the Github README, indicating that pre-trained weights are available. There is no clear evidence of new datasets being introduced. Most colleagues correctly assigned the `NEW_ARTIFACTS` note, and all found at least one model checkpoint present, but with different `pipeline_tag` and `hosting_url` values.
Colleague 1 and 4 assigned `image-segmentation` and `object-detection` as the `pipeline_tag`, whereas colleague 2 assigned `image-to-text`. The model takes an image as input and produces an image segmentation mask as output, hence `image-segmentation` seems like the right pipeline tag. Colleague 1 did not include any hosting url, colleague 2 included a link to the Model Zoo file, and colleague 4 assigned an empty string as hosting URL. Since we do not want to have hosting URLs which are not direct links to model weights, we set the hosting URL to "".
Colleague 3 correctly identified the model zoo, however did not add any new_model_checkpoints, which is an error. Colleague 5 assigned `NO_CODE_YET`, because it did not identify any hosting URLs. Since a model zoo is mentioned in the README, and the code will be made available, it makes more sense to choose `NEW_ARTIFACTS` over `NO_CODE_YET`.
The most common note is thus `NEW_ARTIFACTS`, the most common `pipeline_tag` is `image-segmentation`, and the most common `hosting_url` is "". Since we have a link to the model zoo, it makes sense to reach out and ask the authors to upload the models to the hub. | null |
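The flattened `gemini_*` columns mirror the `gemini_results` dict, so each row's parsing outcome can be read without touching the nested field. Below is a minimal sketch of filtering for rows flagged `NEW_ARTIFACTS`, assuming the same placeholder repository id as above and that the list-valued columns hold Python-literal strings (as the single-quoted values in the rows suggest).

```python
import ast
from datasets import load_dataset

# Same placeholder repository id as in the loading sketch above.
ds = load_dataset("your-hf-org-or-username/daily-papers-2023-05-19", split="train")

# Keep only papers whose automated parse concluded that new artifacts exist.
new_artifacts = ds.filter(lambda row: row["gemini_note"] == "NEW_ARTIFACTS")

for row in new_artifacts:
    # The list-valued columns are stored as strings containing Python literals
    # (single-quoted dicts), so ast.literal_eval is used instead of json.loads;
    # this is an assumption based on the values shown in the rows above.
    found_datasets = ast.literal_eval(row["gemini_new_datasets"])
    found_models = ast.literal_eval(row["gemini_new_model_checkpoints"])
    print(row["arxiv_id"], row["title"])
    for d in found_datasets:
        print("  dataset:", d["dataset_name"], d["hosting_url"])
    for m in found_models:
        print("  model:", m["model_name"], m["hosting_url"])
```

Rows whose `gemini_note` is `NO_CODE`, `NO_CODE_YET`, or `NO_ARTIFACTS` carry empty lists in both columns, as the rows above show.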