From 95aaec702f4bf183e18da90545e26c094cedcf6d Mon Sep 17 00:00:00 2001
From: DeepSeekDDM <155411579+DeepSeekDDM@users.noreply.github.com>
Date: Mon, 24 Feb 2025 11:49:25 +0800
Subject: [PATCH] Delete CITATION.cff

---
 CITATION.cff | 18 ------------------
 1 file changed, 18 deletions(-)
 delete mode 100644 CITATION.cff

diff --git a/CITATION.cff b/CITATION.cff
deleted file mode 100644
index 901783e..0000000
--- a/CITATION.cff
+++ /dev/null
@@ -1,18 +0,0 @@
-cff-version: 1.2.0
-message: "If you use this work, please cite it using the following metadata."
-title: "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning"
-authors:
-  - name: "DeepSeek-AI"
-year: 2025
-identifiers:
-  - type: doi
-    value: 10.48550/arXiv.2501.12948
-  - type: arXiv
-    value: 2501.12948
-url: "https://arxiv.org/abs/2501.12948"
-categories:
-  - "cs.CL"
-repository-code: "https://github.com/deepseek-ai/DeepSeek-R1"
-license: "MIT"
-abstract: >
-  We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.