QwQ-0.5B-Distilled-SFT

Model Details:

  • Base Model: Qwen/Qwen2.5-0.5B-Instruct
  • Teacher Model: Qwen/QwQ-32B-Preview
  • Distillation Framework: Instruction Tuning
  • Task Type: Conversational AI / Causal Language Modeling
  • Parameters: 0.5B
  • Special Features:
    • Integrated gradient checkpointing for efficient training (see the sketch after this list)
    • Step-by-step reasoning capabilities for better problem-solving
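
Gradient checkpointing is enabled through a flag in the training script below; it can also be switched on directly on a loaded model. The following is a minimal sketch for illustration only, not the card's training code:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct", torch_dtype=torch.bfloat16
)
model.gradient_checkpointing_enable()  # recompute activations in the backward pass to save memory
model.config.use_cache = False         # the KV cache is not used while training with checkpointing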

Training:

QwQ-0.5B-Distilled-SFT was trained on the QwQ-LongCoT-130K dataset, a curated collection of long-context examples designed for reasoning and conversational AI tasks. Distillation is performed via supervised fine-tuning (SFT) on the teacher's responses: the student learns to reproduce QwQ-32B-Preview's step-by-step outputs, aligning its predictions with high-quality reasoning traces.

Training Progress:

[▓▓▓▓▓▓▓▓▓▓] 100%

Training Script:

import argparse

import torch
from datasets import Dataset, load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM

parser = argparse.ArgumentParser()
parser.add_argument("--max_length", type=int, default = 4096)
parser.add_argument("--output_dir", type=str, default="gkd-model")
parser.add_argument("--per_device_train_batch_size", type=int, default=1)
parser.add_argument("--gradient_accumulation_steps", type=int, default=16)
parser.add_argument("--gradient_checkpointing", action="store_true", default=False)
parser.add_argument("--resume_from_checkpoint", action="store_true", default=False)
parser.add_argument("--lora", action="store_true")
args = parser.parse_args()

# Convert each (problem, qwq) pair into a chat-formatted training example.
qwq_dataset = load_dataset("amphora/QwQ-LongCoT-130K-2", split="train")
messages = []
for each in qwq_dataset:
    msg = [
        {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
        {"role": "user", "content": each["problem"]},
        {"role": "assistant", "content": each["qwq"]},
    ]
    messages.append(msg)

TRAIN_SPLIT_RATIO = 0.9
train_size = int(TRAIN_SPLIT_RATIO * len(messages))
eval_size = len(messages) - train_size

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# The model to optimise
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto") 



# Build the train/eval datasets and configure the SFT run.
train_dataset = Dataset.from_dict({"messages": messages[:train_size]})
eval_dataset = Dataset.from_dict({"messages": messages[train_size:]})
training_args = SFTConfig(
    output_dir=args.output_dir,
    max_seq_length=args.max_length,
    per_device_train_batch_size=args.per_device_train_batch_size,
    gradient_accumulation_steps=args.gradient_accumulation_steps,
    gradient_checkpointing=args.gradient_checkpointing,
    save_steps=100,
    save_total_limit=5,
)

# LoRA adapter settings, applied only when --lora is passed.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Mask everything before the assistant turn so the loss is computed only on completions.
response_template = "<|im_start|>assistant\n"

collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=lora_config if args.lora else None,
    data_collator=collator,
)
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
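
For clarity, the sketch below (not part of the original training script; it assumes the same trl and tokenizer versions) shows what the completion-only collator does: every label before the <|im_start|>assistant\n template is set to -100, so only the teacher-style reply contributes to the loss.

from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
collator = DataCollatorForCompletionOnlyLM("<|im_start|>assistant\n", tokenizer=tok)

# Format one toy conversation with the Qwen chat template.
text = tok.apply_chat_template(
    [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 4."},
    ],
    tokenize=False,
)
batch = collator([tok(text)])
print(batch["labels"][0])  # -100 for prompt tokens, real token ids for the assistant reply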

Dataset:

  • Source: amphora/QwQ-LongCoT-130K
  • Split: 90% Training, 10% Evaluation
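
To take a quick look at the data, the following small sketch can be used (it assumes the same schema the training script relies on, i.e. a "problem" field and the teacher's "qwq" reasoning trace):

from datasets import load_dataset

dataset = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
example = dataset[0]
print(example["problem"][:200])  # the question posed to the teacher
print(example["qwq"][:200])      # the teacher's step-by-step answer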

Example Usage:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Model name
model_name = "kz919/QwQ-0.5B-Distilled-SFT"
# Load the model
print(f"Starting to load the model {model_name} into memory")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map={"": 0}
)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Define the prompt
prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]
# Apply the chat template (actual tokenization happens below)
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate a response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
# Decode the response
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
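
To watch the step-by-step reasoning as it is produced, the generation call can be streamed. This is an optional variation, not part of the original example; it reuses model, tokenizer, and model_inputs from the snippet above and relies on transformers' TextStreamer.

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    streamer=streamer,  # prints tokens to stdout as they are generated
)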

Applications:

  1. Conversational Assistants:
    Suitable for AI chatbots that require reasoning and long-context understanding.

  2. Educational Tools:
    Provides step-by-step explanations, making it ideal for learning environments.

  3. Creative Writing:
    Assists in generating coherent, contextually aware long-form content.

  4. Technical Support:
    Handles complex customer queries with precision and clarity.


Limitations:

  • While distilled for efficiency, performance on highly complex reasoning tasks may slightly trail the teacher model.
  • This model may still be undertrained and is merely a proof of concept. Don't yell at me if it's outputting nonsense.

Citation:

If you use this model in your research or applications, please cite it as:

@misc{qwq_0.5B_distilled,
  author = {Kaizhao Liang},
  title = {Mini-QwQ: A Reasoning Model for Edge Devices},
  year = {2024},
  publisher = {Hugging Face},
  version = {1.0}
}

This model is an example of how efficient fine-tuning and distillation methods can deliver robust conversational AI capabilities in a smaller, more manageable footprint.
