Dataset preview: 5 rows of paper-parsing results for papers submitted on 2025-01-10. Column schema, as reported by the dataset viewer (dtype, plus distinct-value counts for string columns and min/max for numeric columns):

arxiv_id: string (5 distinct values)
reached_out_success: float64 (null in all rows)
reached_out_note: string (0 values; null in all rows)
num_models: float64 (min 0, max 0)
num_datasets: float64 (min 0, max 0)
num_spaces: float64 (min 0, max 0)
title: string (5 distinct values)
github: string (2 distinct values)
github_stars: float64 (null in all rows)
conference_name: string (0 values; null in all rows)
upvotes: int64 (min 3, max 16)
num_comments: int64 (min 1, max 2)
github_mention_hf: float64 (min 0, max 0)
has_artifact: bool (1 class)
submitted_by: string (5 distinct values)
github_issue_url: string (0 values; null in all rows)
hf_pr_urls: string (0 values; null in all rows)
date: string (1 distinct value)
gemini_results: dict (contains the same fields that are flattened into the gemini_* columns below)
gemini_github_issue_url: string (1 distinct value)
gemini_github_url: string (4 distinct values)
gemini_model_name: string (1 distinct value)
gemini_new_datasets: string (3 distinct values)
gemini_new_model_checkpoints: string (2 distinct values)
gemini_note: string (3 distinct values)
gemini_project_page_url: string (3 distinct values)
gemini_reaching_out: string (3 distinct values)
gemini_reasoning: string (5 distinct values)
gemini_huggingface_pull_request_urls: string (2 distinct values)
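A minimal sketch of loading and inspecting this split with the `datasets` library; the repository id below is a hypothetical placeholder, since the dump does not name the dataset repo:

# Minimal loading sketch; "your-org/paper-parsing-results" is a hypothetical
# repo id standing in for wherever this dataset is actually hosted.
from datasets import load_dataset

ds = load_dataset("your-org/paper-parsing-results", split="train")

print(ds.features)                              # column schema, as listed above
print(ds[0]["arxiv_id"], ds[0]["gemini_note"])  # e.g. 2501.05453 NO_CODE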
Row 1 of 5

arxiv_id: 2501.05453
reached_out_success: null
reached_out_note: null
num_models: 0
num_datasets: 0
num_spaces: 0
title: An Empirical Study of Autoregressive Pre-training from Videos
github: null
github_stars: null
conference_name: null
upvotes: 16
num_comments: 1
github_mention_hf: 0
has_artifact: false
submitted_by: brjathu
github_issue_url: null
hf_pr_urls: null
date: 2025-01-10
gemini_results: (dict; same content as the flattened gemini_* fields below)
gemini_github_issue_url: ""
gemini_github_url: ""
gemini_model_name: gemini-2.0-flash-exp
gemini_new_datasets: []
gemini_new_model_checkpoints: []
gemini_note: NO_CODE
gemini_project_page_url: https://brjathu.github.io/toto/
gemini_reaching_out: Gemini decided to not reach out due to no Github URL.
gemini_reasoning: The paper introduces Toto, a family of autoregressive video models. The authors pre-train the models on a large dataset of images and videos. The paper mentions a project page, which was also reported in the Hugging Face paper page comments. However, there is no mention of a Github repository in the abstract, nor in the paper page comments. The project page does not contain links to a Github repository, nor does it mention whether models or datasets will be released. Hence, there is no evidence that code or artifacts are already available. Therefore, the correct `note` is `NO_CODE`. Reaching out to the authors is not necessary at this stage.
gemini_huggingface_pull_request_urls: null
Row 2 of 5

arxiv_id: 2501.03489
reached_out_success: null
reached_out_note: null
num_models: 0
num_datasets: 0
num_spaces: 0
title: Entropy-Guided Attention for Private LLMs
github: https://github.com/nandan91/entropy-guided-attention-llm
github_stars: null
conference_name: null
upvotes: 5
num_comments: 2
github_mention_hf: 0
has_artifact: false
submitted_by: nandan523
github_issue_url: null
hf_pr_urls: null
date: 2025-01-10
gemini_results: (dict; same content as the flattened gemini_* fields below)
gemini_github_issue_url: ""
gemini_github_url: https://github.com/nandan91/entropy-guided-attention-llm
gemini_model_name: gemini-2.0-flash-exp
gemini_new_datasets: []
gemini_new_model_checkpoints: []
gemini_note: NO_ARTIFACTS
gemini_project_page_url: ""
gemini_reaching_out: {'title': '""', 'message': '"No need to reach out since no artifacts are being released."'}
gemini_reasoning: The paper introduces an information-theoretic framework to analyze the role of nonlinearities in decoder-only language models, with a focus on private inference. The authors propose an entropy-guided attention mechanism and explore alternatives to layer normalization. A Github repository is available, as mentioned in the abstract. The README of the Github repository only contains a brief description of the project, mentioning that it's the "Official PyTorch Implementation" of the paper. There is no mention of new model checkpoints or datasets. Therefore, the paper doesn't introduce new artifacts, and the note should be `NO_ARTIFACTS`. It does not make sense to reach out in this case.
gemini_huggingface_pull_request_urls: null
Row 3 of 5

arxiv_id: 2501.04377
reached_out_success: null
reached_out_note: null
num_models: 0
num_datasets: 0
num_spaces: 0
title: On Computational Limits and Provably Efficient Criteria of Visual Autoregressive Models: A Fine-Grained Complexity Analysis
github: null
github_stars: null
conference_name: null
upvotes: 5
num_comments: 1
github_mention_hf: 0
has_artifact: false
submitted_by: JamesSand
github_issue_url: null
hf_pr_urls: null
date: 2025-01-10
gemini_results: (dict; same content as the flattened gemini_* fields below)
gemini_github_issue_url: ""
gemini_github_url: ""
gemini_model_name: gemini-2.0-flash-exp
gemini_new_datasets: []
gemini_new_model_checkpoints: []
gemini_note: NO_CODE
gemini_project_page_url: ""
gemini_reaching_out: Gemini decided to not reach out due to no Github URL.
gemini_reasoning: The paper presents a theoretical analysis of the computational limits and efficiency criteria of Visual Autoregressive (VAR) models. The paper focuses on identifying conditions under which VAR computations can achieve sub-quadratic time complexity. There is no mention of any code or datasets being released. The authors only provide efficient constructions leveraging low-rank approximations, which substantiate their theoretical analysis. There is no Github repository or project page mentioned in the paper, and no mention of plans to release code or artifacts. Therefore, the most appropriate note is `NO_CODE`. It does not make sense to reach out to the authors, since no code is being released.
gemini_huggingface_pull_request_urls: null
Row 4 of 5

arxiv_id: 2501.04003
reached_out_success: null
reached_out_note: null
num_models: 0
num_datasets: 0
num_spaces: 0
title: Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives
github: null
github_stars: null
conference_name: null
upvotes: 9
num_comments: 1
github_mention_hf: 0
has_artifact: false
submitted_by: ldkong
github_issue_url: null
hf_pr_urls: null
date: 2025-01-10
gemini_results: (dict; same content as the flattened gemini_* fields below)
gemini_github_issue_url: ""
gemini_github_url: https://drive-bench.github.io
gemini_model_name: gemini-2.0-flash-exp
gemini_new_datasets: [{'dataset_name': 'DriveBench', 'hosting_url': 'https://huggingface.co/datasets/drive-bench/arena', 'task_category': 'image-text-to-text'}]
gemini_new_model_checkpoints: []
gemini_note: NEW_ARTIFACTS
gemini_project_page_url: ""
gemini_reaching_out: {'title': '', 'message': 'No need to reach out since the artifacts are already on Hugging Face.'}
gemini_reasoning: The paper introduces DriveBench, a new benchmark dataset for evaluating vision-language models (VLMs) in autonomous driving. The abstract states that "The benchmark toolkit is publicly accessible". The paper page comments mention a new dataset called "DriveBench", and the dataset is also mentioned in the first 2 pages of the Arxiv PDF as `huggingface.co/datasets/drive-bench/arena`. The Github README could not be fetched, but a Github URL is mentioned. Based on these findings, it seems that the authors have released a new dataset which is already available on Hugging Face. The dataset is meant to train models for autonomous driving. It contains both image and text data, and is used for the purpose of vision-language tasks, with text as output, hence the appropriate task category is "image-text-to-text". Since the dataset is already on the Hugging Face Hub, there is no need to reach out to the authors.
gemini_huggingface_pull_request_urls: https://huggingface.co/datasets/drive-bench/arena/discussions/2
Row 5 of 5

arxiv_id: 2501.04828
reached_out_success: null
reached_out_note: null
num_models: 0
num_datasets: 0
num_spaces: 0
title: Building Foundations for Natural Language Processing of Historical Turkish: Resources and Models
github: null
github_stars: null
conference_name: null
upvotes: 3
num_comments: 1
github_mention_hf: 0
has_artifact: false
submitted_by: stefan-it
github_issue_url: null
hf_pr_urls: null
date: 2025-01-10
gemini_results: (dict; same content as the flattened gemini_* fields below)
gemini_github_issue_url: ""
gemini_github_url: https://github.com/Ottoman-NLP
gemini_model_name: gemini-2.0-flash-exp
gemini_new_datasets: [{'dataset_name': 'HisTR', 'hosting_url': 'https://huggingface.co/datasets/BUCOLIN/HisTR', 'task_category': 'text-classification'}, {'dataset_name': 'OTA-BOUN_UD_Treebank', 'hosting_url': 'https://huggingface.co/datasets/BUCOLIN/OTA-BOUN_UD_Treebank', 'task_category': 'text-classification'}, {'dataset_name': 'OTC-Corpus', 'hosting_url': 'https://huggingface.co/datasets/BUCOLIN/OTC-Corpus', 'task_category': 'text-classification'}]
gemini_new_model_checkpoints: [{'model_name': 'HistBERTurk-POS-tagging', 'hosting_url': 'https://huggingface.co/BUCOLIN/HistBERTurk-POS-tagging', 'pipeline_tag': 'text-classification'}, {'model_name': 'HistBERTurk-dependency-parsing', 'hosting_url': 'https://huggingface.co/BUCOLIN/HistBERTurk-dependency-parsing', 'pipeline_tag': 'text-classification'}, {'model_name': 'HistBERTurk-NER', 'hosting_url': 'https://huggingface.co/BUCOLIN/HistBERTurk-NER', 'pipeline_tag': 'text-classification'}]
gemini_note: NEW_ARTIFACTS
gemini_project_page_url: https://huggingface.co/bucolin
gemini_reaching_out: {'title': '', 'message': 'No need to reach out since the artifacts are already on Hugging Face.'}
gemini_reasoning: The paper introduces new resources and models for NLP of historical Turkish. The authors release a new named entity recognition (NER) dataset called HisTR, a new Universal Dependencies treebank called OTA-BOUN, and the Ottoman Text Corpus (OTC). They also release transformer-based models trained for NER, dependency parsing, and part-of-speech tagging. The abstract states that all of the presented resources and models are made available at https://huggingface.co/bucolin, which indicates that the artifacts are hosted on the Hugging Face Hub. The project page at https://huggingface.co/bucolin confirms that the authors have indeed uploaded the new datasets and models on the Hugging Face Hub. Based on this, the scenario is `NEW_ARTIFACTS`. Since all artifacts are already on Hugging Face, there is no need to reach out to the authors. The model pipeline tags are determined by the tasks the models are used for, i.e. named entity recognition, dependency parsing, and part-of-speech tagging. These can be mapped to `text-classification` (as those are text-based tasks), the most common pipeline tag among all colleagues, even though some colleagues incorrectly labelled some of them as `feature-extraction` or `question-answering`. Similarly, the task category of the datasets can be set to `text-classification`, as they are all meant for text-based tasks; this is also the most common label among all colleagues, even though some colleagues chose more specific labels such as `dependency-parsing`, `named-entity-recognition`, and `text-generation`.
gemini_huggingface_pull_request_urls: https://huggingface.co/BUCOLIN/HistBERTurk-POS-tagging/discussions/1 https://huggingface.co/BUCOLIN/HistBERTurk-dependency-parsing/discussions/1 https://huggingface.co/BUCOLIN/HistBERTurk-NER/discussions/1 https://huggingface.co/datasets/BUCOLIN/HisTR/discussions/3 https://huggingface.co/datasets/BUCOLIN/OTA-BOUN_UD_Treebank/discussions/1 https://huggingface.co/datasets/BUCOLIN/OTC-Corpus/discussions/2
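Continuing the loading sketch above, a minimal sketch of post-processing the rows: the list-valued columns (gemini_new_datasets, gemini_new_model_checkpoints) are stored as stringified Python literals with single quotes, so ast.literal_eval rather than json.loads is the appropriate parser here; the filtering logic is an illustration, not part of the dataset:

import ast

def parse_artifacts(row):
    # The list columns are strings like "[{'dataset_name': ...}]";
    # ast.literal_eval turns them back into lists of dicts.
    new_datasets = ast.literal_eval(row["gemini_new_datasets"])
    new_models = ast.literal_eval(row["gemini_new_model_checkpoints"])
    return new_datasets, new_models

# Example: list every artifact flagged NEW_ARTIFACTS in this split
# (reuses ds from the loading sketch above).
for row in ds:
    if row["gemini_note"] == "NEW_ARTIFACTS":
        new_datasets, new_models = parse_artifacts(row)
        for d in new_datasets:
            print(row["arxiv_id"], "dataset:", d["hosting_url"])
        for m in new_models:
            print(row["arxiv_id"], "model:", m["hosting_url"])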