user | created_at | body | issue_number |
---|---|---|---|
HuggingFaceDocBuilderDev | 2025-01-08T18:24:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2550). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,550 |
qgallouedec | 2025-01-07T20:24:14 | Thanks! | 2,549 |
HuggingFaceDocBuilderDev | 2025-01-07T17:10:49 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2548). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,548 |
qgallouedec | 2025-01-07T17:34:53 | From their demo code, this is what I get as input for the model:
```
<|start_header_id|>user<|end_header_id|>
[CONTEXT]
<turn> user
Ellipsis
<turn> assistant
Ellipsis
<turn> user
Ellipsis
[RESPONSE A] BBBB [RESPONSE B] CCCC<|eot_id|>
```
This doesn't make much sense to me:
- numerous unnecessary whitespaces
- Why `<|start_header_id|>user<|end_header_id|>`?
- Why aren't the responses surrounded by `\n` as well?
- Why `<|eot_id|>` if you want to generate further?
Why not something like this instead:
```
[CONTEXT]
<turn> user
Ellipsis
<turn> assistant
Ellipsis
<turn> user
Ellipsis
[RESPONSE A]
BBBB
[RESPONSE B]
CCCC
[BEST RESPONSE]
``` | 2,548 |
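For reference, a minimal sketch of a helper that renders the alternative format proposed above; it is purely illustrative and assumes nothing beyond the template shown in the comment (the real judge template is defined by the model card, not by this thread):
```python
def build_pairwise_prompt(turns, response_a, response_b):
    """Render the alternative pairwise-judge format sketched in the comment above."""
    lines = ["[CONTEXT]"]
    for role, text in turns:
        lines.append(f"<turn> {role}")
        lines.append(text)
    lines += ["[RESPONSE A]", response_a, "[RESPONSE B]", response_b, "[BEST RESPONSE]"]
    return "\n".join(lines)


# Example with placeholder turns, mirroring the snippet above
print(build_pairwise_prompt(
    [("user", "Ellipsis"), ("assistant", "Ellipsis"), ("user", "Ellipsis")],
    response_a="BBBB",
    response_b="CCCC",
))
```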
kashif | 2025-01-07T17:41:15 | you are using the instructions from here: https://huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B right? | 2,548 |
qgallouedec | 2025-01-07T17:42:24 |
> you are using the instructions from here: https://huggingface.co/RLHFlow/pair-preference-model-LLaMA3-8B right?
precisely
| 2,548 |
HuggingFaceDocBuilderDev | 2025-01-07T13:56:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,547 |
HuggingFaceDocBuilderDev | 2025-01-06T15:18:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,544 |
qgallouedec | 2025-01-06T14:17:24 | Both points sound valid to me. For 1. I'd go for a warning in the doc (not in the function). Would you like to open a PR?
| 2,543 |
HuggingFaceDocBuilderDev | 2025-01-04T16:42:49 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,542 |
qgallouedec | 2025-01-04T18:42:36 | Fun feature! Do you have a demo repo? | 2,542 |
qgallouedec | 2025-01-04T18:44:19 | Have you tried with the HF api? It could be a free alternative | 2,542 |
August-murr | 2025-01-04T19:22:33 | > Fun feature! Do you have a demo repo?
Just pushed it to my [own fork](https://github.com/August-murr/trl/issues) | 2,542 |
qgallouedec | 2025-01-06T14:19:45 | I'll open a batch of issues to test it | 2,542 |
August-murr | 2025-01-06T17:59:38 | > Have you tried with the HF api? It could be a free alternative
Honestly, this was really effortless since I simply forked a mostly functional actions extension. Modifying it to work with the HF API will require much more effort. Also, it uses GPT-4o; there aren't many open-source models that are this accurate.
If it's absolutely necessary, then I can do it, but I honestly don't think it's worth the effort.
However, if you believe it is important, then I'll go ahead and do it. | 2,542 |
qgallouedec | 2025-01-06T19:05:17 | It doesn't seem like a big deal to me. Probably something like this could work
```python
from huggingface_hub import InferenceClient
client = InferenceClient(model="meta-llama/Llama-3.2-1B-Instruct", token="your_token")
content = "Find the label among these: question, issue."
completion = client.chat_completion(messages=[{"role": "user", "content": content}], max_tokens=256)
response = completion.choices[0].message.content
```
> there aren't many open-source models that are this accurate.
This task is very simple, I don't think we absolutely need GPT-4o here. And even if the labeling fails, it's not a big deal.
| 2,542 |
August-murr | 2025-01-06T19:37:15 | > It doesn't seem like a big deal to me. Probably something like this could work
>
> ```python
> from huggingface_hub import InferenceClient
>
> client = InferenceClient(model="meta-llama/Llama-3.2-1B-Instruct", token="your_token")
> content = "Find the label among these: question, issue."
> completion = client.chat_completion(messages=[{"role": "user", "content": content}], max_tokens=256)
> response = completion.choices[0].message.content
> ```
>
> > there aren't many open-source models that are this accurate.
>
> This task is very simple, I don't think we absolutely need GPT-4o here. And even if the labeling fails, it's not a big deal.
ok got it | 2,542 |
qgallouedec | 2025-01-06T20:24:34 | Do you know if you can access the tag description? It could help the model in its prediction | 2,542 |
August-murr | 2025-01-07T05:13:37 | > Do you know if you can access the tag description? It could help the model in its prediction
tag description as in the label description?
like:
`🚀 deepspeed` --> `Related to deepspeed`
If so, yes, it is part of the prompt. | 2,542 |
August-murr | 2025-01-07T07:14:14 | I tried using the Llama 1B model, and it "functioned," but for TRL I switched to the 70B model. However, I couldn't test the 70B because it requires a subscription.
Don't forget to add the `HF_API_KEY` to the secrets.
I got a context length error (limit of 4096 tokens) when using the Llama 1B model, which was weird since it supports up to 128k tokens. Since I can't use the 70B model, I'm unsure if it's a problem or not. | 2,542 |
August-murr | 2025-01-04T06:17:04 | here's how to fix it:
`train_dataset = load_dataset('json', data_files=dataset_file_path, split="train") `
I suggest you get quick fixes for simpler issues simply by using ChatGPT or Copilot first as they can save you a lot of time!
| 2,541 |
degen2 | 2025-01-04T16:41:07 | I already tried that and still get the same KeyError, even when loading a dataset from the Hub. I also tried adding a 'text' key field to the data. | 2,541 |
qgallouedec | 2025-01-04T19:09:33 | `split="train"` is the solution. If you still encounter the error, please provide an MRE | 2,541 |
HuggingFaceDocBuilderDev | 2025-01-03T09:44:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,540 |
gp1702 | 2025-01-07T21:20:17 | I tried running the demo command without qlora, and got the following error:
```
Traceback (most recent call last):
File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module>
main(script_args, training_args, model_args)
File "/home/gandharvp_google_com//dpo/example.py", line 134, in main
trainer.train()
File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 2164, in train
return inner_training_loop(
File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 2524, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 3687, in training_step
self.accelerator.backward(loss, **kwargs)
File "/opt/conda/envs/trl/lib/python3.10/site-packages/accelerate/accelerator.py", line 2248, in backward
loss.backward(**kwargs)
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1071, in unpack_hook
frame.recompute_fn(*args)
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1194, in recompute_fn
fn(*args, **kwargs)
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 623, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 544, in forward
attn_output = torch.nn.functional.scaled_dot_product_attention(
RuntimeError: The expanded size of the tensor (1460) must match the existing size (730) at non-singleton dimension 3. Target sizes: [4, 14, 730, 1460]. Tensor sizes: [4, 1, 730, 730]
```
@faaany, I am wondering if you were able to replicate or fix this.
I am attaching the trainer code for reference.
[fsdp_dpo_trainer.txt](https://github.com/user-attachments/files/18338571/fsdp_dpo_trainer.txt)
| 2,539 |
faaany | 2025-01-08T09:06:36 | > Thanks a lot for the fix @faaany - overall it looks great!
>
> Would you mind confirming that the following demo command works with your PR (once activation checkpointing is removed):
>
> ```shell
> accelerate launch --config_file=examples/accelerate_configs/fsdp_qlora.yaml --num_processes=NUM_GPUS trl/scripts/dpo.py \
> --dataset_name trl-lib/ultrafeedback_binarized \
> --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
> --learning_rate 5.0e-7 \
> --num_train_epochs 1 \
> --per_device_train_batch_size 2 \
> --gradient_accumulation_steps 8 \
> --gradient_checkpointing \
> --logging_steps 25 \
> --eval_strategy steps \
> --eval_steps 50 \
> --output_dir Qwen2-0.5B-DPO \
> --no_remove_unused_columns
> ```
>
> If it runs without error, can you please rename `fsdp_qlora.yaml` to `fsdp.yaml` so it runs for both modes?
>
> A question for @qgallouedec: should this helper function live in a `utils` module somewhere so we don't have to copy it around to all other trainers?
Both modes work. | 2,539 |
faaany | 2025-01-08T09:12:16 | > I tried running the demo command without qlora, and got the following error: ` Traceback (most recent call last): File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module> main(script_args, training_args, model_args) File "/home/gandharvp_google_com//dpo/example.py", line 134, in main trainer.train() File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 2164, in train return inner_training_loop( File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 2524, in _inner_training_loop tr_loss_step = self.training_step(model, inputs, num_items_in_batch) File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 3687, in training_step self.accelerator.backward(loss, **kwargs) File "/opt/conda/envs/trl/lib/python3.10/site-packages/accelerate/accelerator.py", line 2248, in backward loss.backward(**kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward torch.autograd.backward( File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1071, in unpack_hook frame.recompute_fn(*args) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1194, in recompute_fn fn(*args, **kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 623, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 544, in forward attn_output = torch.nn.functional.scaled_dot_product_attention( RuntimeError: The expanded size of the tensor (1460) must match the existing size (730) at non-singleton dimension 3. Target sizes: [4, 14, 730, 1460]. Tensor sizes: [4, 1, 730, 730]`
>
> @faaany, I am wondering if you were able to replicate or fix this.
>
> I am attaching the trainer code for reference. [fsdp_dpo_trainer.txt](https://github.com/user-attachments/files/18338571/fsdp_dpo_trainer.txt)
does it work in other distributed mode, e.g. deepspeed? | 2,539 |
qgallouedec | 2025-01-08T13:29:50 | > should this helper function live in a utils module somewhere so we don't have to copy it around to all other trainers?
I think it would make sense to have it in `trainer/utils.py` yes. | 2,539 |
gp1702 | 2025-01-08T15:08:08 | > > I tried running the demo command without qlora, and got the following error: ` Traceback (most recent call last): File "/home/gandharvp_google_com/dpo/example.py", line 159, in <module> main(script_args, training_args, model_args) File "/home/gandharvp_google_com//dpo/example.py", line 134, in main trainer.train() File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 2164, in train return inner_training_loop( File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 2524, in _inner_training_loop tr_loss_step = self.training_step(model, inputs, num_items_in_batch) File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/trainer.py", line 3687, in training_step self.accelerator.backward(loss, **kwargs) File "/opt/conda/envs/trl/lib/python3.10/site-packages/accelerate/accelerator.py", line 2248, in backward loss.backward(**kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward torch.autograd.backward( File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1071, in unpack_hook frame.recompute_fn(*args) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1194, in recompute_fn fn(*args, **kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 623, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/gandharvp_google_com/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/trl/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 544, in forward attn_output = torch.nn.functional.scaled_dot_product_attention( RuntimeError: The expanded size of the tensor (1460) must match the existing size (730) at non-singleton dimension 3. Target sizes: [4, 14, 730, 1460]. Tensor sizes: [4, 1, 730, 730]`
> > @faaany, I am wondering if you were able to replicate or fix this.
> > I am attaching the trainer code for reference. [fsdp_dpo_trainer.txt](https://github.com/user-attachments/files/18338571/fsdp_dpo_trainer.txt)
>
> does it work in other distributed mode, e.g. deepspeed?
I have not tried other modes, but I am observing this problem with FSDP. | 2,539 |
qgallouedec | 2025-01-08T09:42:39 | That's a good point!
In the past, truncation mode was only used for the prompt, and it seems that completion was only truncated for the encoder-decoder. This has been corrected with #2209. In any case, this is a good opportunity to bring this issue up again.
- Should `truncation_mode` apply to prompt truncation?
- Should `truncation_mode` apply to completion truncation?
Without having thought about it in detail, I'd say `truncation_mode` should only apply to prompt truncation, but I'm curious what you think. | 2,538 |
anakin87 | 2025-01-08T10:07:32 | Ah, I'm not an expert, unfortunately.
However, I took a cursory look, and it seems that `truncation_mode` is applied to the prompt in the following trainers: BCO, CPO, KTO, and ORPO.
In the Iterative SFT Trainer, it is implemented somewhat differently.
For consistency, it might make sense to align DPO with the other Preference Optimization trainers and apply `truncation_mode` to the prompt in the same way. | 2,538 |
qgallouedec | 2025-01-08T10:21:45 | Ok so we are aligned on this.
It would probably only require the following line change:
```python
if max_prompt_length is not None:
    if truncation_mode == "keep_end":
        prompt_input_ids = prompt_input_ids[-max_prompt_length:]
    elif truncation_mode == "keep_start":
        prompt_input_ids = prompt_input_ids[:max_prompt_length]
    else:
        raise ValueError(f"Unknown truncation_mode: {truncation_mode}")
```
Are you willing to open a PR to fix it? | 2,538 |
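For context, a minimal sketch of how the option would be exercised from the user side, assuming `DPOConfig` exposes `truncation_mode` alongside `max_prompt_length` (model and dataset are the same ones used elsewhere in this thread):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:1%]")

# keep_end: if the prompt exceeds max_prompt_length, keep its last tokens;
# keep_start would keep the first tokens instead.
training_args = DPOConfig(
    output_dir="Qwen2-0.5B-DPO-truncation",
    max_prompt_length=128,
    max_completion_length=128,
    truncation_mode="keep_end",
)
trainer = DPOTrainer(model=model, args=training_args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```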
anakin87 | 2025-01-08T10:23:51 | I would be happy to open a PR in the next few days! | 2,538 |
qgallouedec | 2025-01-08T09:47:31 | It's not very clear what code you're using, because you seem to be using a command (`swift rlhf`) that I'm not familiar with, and the code that you provide doesn't take any arguments.
Plus, the system info that you provide isn't enough (I don't see the TRL version, among other things). Can you copy-paste the output of `trl env`?
What is `map_instruction`? What model are you using? Qwen2 doesn't have a 32B version. Is it Qwen2.5?
Currently it's very hard for me to reproduce it. | 2,536 |
maoulee | 2025-01-08T10:34:17 | > It's not very clear what code you're using. Because you seem to be using a command (`swift rlhf`) that I'm not familiar with and code that you provide doesn't take any arguments. Plus, the system info that you provide aren't enough (I don't see the trl version among other). Can you copy-paste the output of `trl env`? What is `map_instruction`? What model are you using? Qwen2 doesn't have a 32B version. Is it Qwen2.5? Currently It's very hard for me to reproduce it.
Here is the trl env info:
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- PyTorch version: 2.4.1
- CUDA device(s): NVIDIA A100-SXM4-40GB, NVIDIA A100-SXM4-40GB
- Transformers version: 4.47.1
- Accelerate version: 1.2.1
- Accelerate config: not found
- Datasets version: 3.0.1
- HF Hub version: 0.25.1
- TRL version: 0.13.0
- bitsandbytes version: 0.45.0
- DeepSpeed version: 0.15.4
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 1.50.2
- PEFT version: 0.13.2
To be able to print `trl env`, I upgraded:
- TRL: 0.11.4 -> 0.13.0
- transformers: 4.45.2 -> 4.47.1
- tokenizers: 0.20.3 -> 0.20.1
The map_instruction function is used to map the dataset.
Here is the used model and dataset:
model: unsloth/Qwen2.5-32B-Instruct-bnb-4bit
dataset: llamafactory/ultrafeedback_binarized
Here is the complete code:
```python
import torch
import os
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
    BitsAndBytesConfig
)
from trl import (
    LogCompletionsCallback,
    ModelConfig,
    DPOConfig,
    DPOTrainer,
    TrlParser,
    get_peft_config,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
import data
from trl.trainer.utils import SIMPLE_CHAT_TEMPLATE
from dataclasses import dataclass, field
from typing import Optional, List, Dict
from datasets import load_dataset

os.environ["WANDB_DISABLED"] = "true"


def map_instruction(example):
    instruction = example['instruction']
    return {'prompt': instruction}


def main():
    train_dataset = load_dataset("/root/.cache/modelscope/hub/datasets/llamafactory___ultrafeedback_binarized/", split="train")
    train_dataset = train_dataset.shuffle(seed=42)
    train_dataset = train_dataset.select(range(2000))
    train_dataset = train_dataset.map(map_instruction)

    test_dataset = load_dataset("/root/.cache/modelscope/hub/datasets/llamafactory___ultrafeedback_binarized/", split="test")
    test_dataset = test_dataset.shuffle(seed=42)
    test_dataset = test_dataset.select(range(200))
    test_dataset = test_dataset.map(map_instruction)

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
    )

    model = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path="/root/.cache/modelscope/hub/unsloth/Qwen2___5-32B-Instruct-bnb-4bit/",
        quantization_config=bnb_config,
        torch_dtype=torch.bfloat16,
        use_cache=True,
        device_map="auto",
    )
    model = prepare_model_for_kbit_training(model)
    # model.gradient_checkpointing_enable()

    peft_config = LoraConfig(
        r=4,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
        lora_alpha=32,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM"
    )

    tokenizer = AutoTokenizer.from_pretrained("/root/.cache/modelscope/hub/unsloth/Qwen2___5-32B-Instruct-bnb-4bit/")
    EOS_TOKEN = tokenizer.eos_token

    # Tokenizer settings
    if tokenizer.chat_template is None:
        tokenizer.chat_template = SIMPLE_CHAT_TEMPLATE
    if tokenizer.pad_token_id is None:
        tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "left"

    training_args = DPOConfig(
        output_dir="/llm/checkpoint/",      # Output directory for checkpoints and final model
        per_device_train_batch_size=2,      # Batch size per device during training
        gradient_accumulation_steps=8,      # Number of gradient accumulation steps
        num_train_epochs=4,                 # Total number of training epochs
        learning_rate=5e-7,                 # Learning rate
        logging_dir="./logs",               # Directory for storing logs
        logging_steps=500,                  # Log every X updates steps
        save_steps=500,                     # Save checkpoint every X updates steps
        eval_strategy="no",                 # Evaluation is done (and logged) every `eval_steps`
        beta=0.1,                           # The beta parameter for DPO loss
        loss_type="sigmoid",
        optim="adamw_torch",
        lr_scheduler_type="cosine",
        max_prompt_length=500,
        max_target_length=1500,
    )
    model.enable_input_require_grads()

    trainer = DPOTrainer(
        model=model,
        args=training_args,
        peft_config=peft_config,
        train_dataset=train_dataset,
        eval_dataset=test_dataset,
        tokenizer=tokenizer
    )

    # Configure generation for evaluation
    if training_args.eval_strategy != "no":
        generation_config = GenerationConfig(
            max_new_tokens=2048,
            do_sample=True,
            temperature=1.0
        )
        completions_callback = LogCompletionsCallback(trainer, generation_config, num_prompts=8)
        trainer.add_callback(completions_callback)

    # Train the model
    trainer.train()

    # Save the final model
    final_model_path = training_args.output_dir
    trainer.save_model(final_model_path)
    print(f"Final model saved to {final_model_path}")


if __name__ == "__main__":
    main()
```
| 2,536 |
qgallouedec | 2025-01-08T13:24:24 | I was able to reproduce the speed. I don't know how swift is different from TRL (it's built upon TRL as far as I understand). You should probably ask the swift community here | 2,536 |
maoulee | 2025-01-08T14:12:57 |
> I was able to reproduce the speed. I don't know how swift is different form trl (it's built upon trl as far as I understand). You should probably ask swift community here
Thank you for your response. I have identified the key issue:
When I load the model and pass the `peft_config` directly into `DPOTrainer`, the fine-tuning speed is 600 seconds per iteration.
However, when I use `model = get_peft_model(model, peft_config)` before passing it to the trainer, the fine-tuning speed improves significantly to 30.2 seconds per iteration.
The logic of the two seems to be the same, but the speed difference is large.
| 2,536 |
qgallouedec | 2025-01-08T14:16:30 | It's probably because when you pass a PEFT model, it gets merged and unloaded (`merge_and_unload`). Those two settings should be equivalent though. It's probably an issue with the `DPOTrainer`. If you manage to fix it, feel free to open a PR | 2,536 |
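For anyone comparing the two setups, a condensed sketch of the difference being discussed; it reuses `model`, `peft_config`, `training_args`, `train_dataset`, and `tokenizer` from the script earlier in this thread rather than redefining them:
```python
from peft import get_peft_model
from trl import DPOTrainer

# Variant A: pass the base model plus peft_config and let DPOTrainer apply the adapter
# (reported above as ~600 s/iteration)
trainer_a = DPOTrainer(
    model=model,
    args=training_args,
    peft_config=peft_config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)

# Variant B: wrap the model with the adapter yourself and omit peft_config
# (reported above as ~30 s/iteration)
trainer_b = DPOTrainer(
    model=get_peft_model(model, peft_config),
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
```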
qgallouedec | 2025-01-08T13:46:15 | In general we are open to any contribution, yes. The easiest way is to open an issue per proposal to keep the discussions separate and clear. But I'll answer everything here.
> Use SGLang to do rollout (generation) phase instead of vanilla HuggingFace model / DeepSpeed model. This seems to speed up generation a lot.
How much is a lot? I'm not familiar with SGLang, but what I can gather from the doc is that SGLang is a whole serving framework, but we're just interested in the _Fast Backend Runtime_ here.
From the doc:
> Fast Backend Runtime: Provides efficient serving with RadixAttention for prefix caching, jump-forward constrained decoding, overhead-free CPU scheduler, continuous batching, token attention (paged attention), tensor parallelism, FlashInfer kernels, chunked prefill, and quantization (FP8/INT4/AWQ/GPTQ).
It would be interesting to know which component makes inference faster and consider adding just it.
> Delete ref & reward model from GPU when they are not used, such that gpu memory is not occupied, and can support larger batch size or larger models.
> For refactor/cleanup, just standard tiny things like:
For refactoring, RLOO and PPO differ quite a lot from the other trainers in the way they are implemented. It makes maintenance pretty challenging. So if someone has time to dedicate to refactoring, I'd recommend working on aligning them with the other trainers.
> In addition, I would appreciate it if I could know some comparisons between TRL and OpenRLHF/verl/DeepSpeed-Chat.
What do you want to compare? Have you run such a comparison?
| 2,535 |
fzyzcjy | 2025-01-09T00:14:22 | Hi thanks for the reply!
> The easiest way is to open an issue per proposal to keep the discussion sperate and clear.
Ok I will try to do that in the future issues.
> How much is a lot?
https://github.com/huggingface/trl/pull/1628 says "... preliminary testing shows it's ~8x faster" for vllm. I personally find sglang >1.5x faster than vllm in my case (though other GPUs and models may well differ). So I expect sglang to be more than 8x faster than vanilla generate for my case, and at least ~8x for the general case.
> SGLang is whole serving framework, but we're just interested in the Fast Backend Runtime here
Maybe just take the part that inputs a string and outputs a string (called "Engine" in sglang, and "LLM" in vllm)
> For refactoring, RLOO and PPO differ quite a lot from other trainer in the way they are implementing. It make the maintenance pretty challenging. So if someone have time to dedicate on refactoring, I'd either recommend working on aligning with other trainers.
Thanks, but it seems that the primary difference is the advantage estimation, and the other parts do not differ much.
> What do you want to compare? Have you led such comparison?
Yes, I have compared them (and also asked other libraries the same question and got some replies). I just want to know your opinions, because I definitely know less about trl than you, who created and maintain it! | 2,535 |
faaany | 2025-01-03T02:37:30 | @qgallouedec @lewtun @yao-matrix | 2,533 |
qgallouedec | 2025-01-07T20:28:08 | Can you confirm that these changes are enough for XPU backends? I'm not able to test it myself. | 2,533 |
HuggingFaceDocBuilderDev | 2025-01-07T20:32:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2533). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,533 |
faaany | 2025-01-08T02:17:49 | Thanks for the suggestions! Code Updated. @qgallouedec | 2,533 |
yiyepiaoling0715 | 2024-12-30T08:03:56 | ![image](https://github.com/user-attachments/assets/382ffa50-f3c6-4cd9-aabd-27e882409ed3)
| 2,532 |
qgallouedec | 2024-12-30T14:49:29 | > * [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
Can you please minimise your code? It seems like the error occurs at generation; what is the input of the model here?
```
| | 2024-12-30 10:53:44.559 | [rank4]: File "/opt/conda/lib/python3.11/site-packages/transformers/generation/utils.py", line 3254, in _sample |
| | 2024-12-30 10:53:44.559 | [rank4]: outputs = model_forward(**model_inputs, return_dict=True) |
```
Can you reproduce the error without all the training logic? | 2,532 |
HuggingFaceDocBuilderDev | 2025-01-08T14:05:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2531). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,531 |
dawidm | 2025-01-08T19:53:54 | Update: I incorrectly stated that steps are now the equivalent of episodes. Actually, steps are equivalent to iterations of the main training loop. But the fix is still valid. | 2,531 |
yiyepiaoling0715 | 2024-12-30T04:55:00 | Same question, how to resolve this? | 2,529 |
qgallouedec | 2025-01-08T14:22:46 | The solution that you're suggesting sounds good to me, feel free to open a PR | 2,529 |
HuggingFaceDocBuilderDev | 2024-12-28T13:27:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2527). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,527 |
qgallouedec | 2025-01-08T14:33:24 | Can you tell me again why it's needed? | 2,527 |
kashif | 2025-01-08T14:34:41 | I was mistaken... in ORPO the NLL loss is over the prompt + completion | 2,527 |
qgallouedec | 2025-01-08T14:38:01 | Maybe we can add a comment here so that we don't revert the reversion in the future ;) | 2,527 |
August-murr | 2024-12-28T06:35:20 | I recommend using GitHub Actions since they run the tests more reliably. Just enable it on your fork, push your changes, and it’ll automatically trigger the tests. | 2,524 |
AMindToThink | 2024-12-28T19:48:24 | Does this mean that my environment is not set up incorrectly? | 2,524 |
AMindToThink | 2024-12-29T03:05:06 | Thank you, took a while to figure out, but the tests that were triggered when I made an empty .py file in trl/trl worked. Somewhat bothersome that it tries and fails to post the results to slack, but the tests themselves pass.
`Error: Need to provide at least one botToken or webhookUrl`
I would appreciate it if the [contributing](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) document explained that the tests may not run properly locally and are auto-run by GitHub when changes are pushed to main.
My workflow will be:
1. Make changes to a branch of my fork.
2. When I want to test, I'll merge my branch into main.
3. GitHub will run the tests.
4. They'll fail.
5. If on inspection the failure is because of the slack upload attempt, then everything is fine.
6. If on inspection there was an error before the slack upload attempt, then there's a problem with my code.
7. If my code is fine and my feature is ready, I can make a pull request. | 2,524 |
qgallouedec | 2024-12-29T10:19:28 | Which tests fail locally? | 2,524 |
AMindToThink | 2024-12-30T19:00:10 | Oddly, it says 6 failed when I only see 5.
I'm on this commit:
```
commit aed5da580e9fcba6517460daf65106bc42fb6167 (upstream/main, origin/sac, sac)
Author: Quentin Gallouédec <[email protected]>
Date: Sun Dec 22 12:44:07 2024 +0100

    📦 Packing documentation (#2503)
```
These are the failures:
```
[gw2] FAILED tests/test_dpo_trainer.py::DPOTrainerTester::test_dpo_lora_bf16_autocast_llama
[gw11] FAILED tests/test_gkd_trainer.py::GKDTrainerTester::test_gkd_trainer
[gw12] FAILED tests/test_callbacks.py::WinRateCallbackTester::test_basic
[gw11] FAILED tests/test_peft_models.py::PeftModelTester::test_create_bnb_peft_model_from_config
[gw15] FAILED tests/test_xpo_trainer.py::TestXPOTrainer::test_training_with_peft | 0/50 [00:00<?, ?it/s]
================== 6 failed, 345 passed, 25 skipped, 242 warnings, 45 rerun in 113.62s (0:01:53) ===================
```
| 2,524 |
umbilnm | 2024-12-27T09:11:21 | Fixes #2400 | 2,521 |
umbilnm | 2024-12-29T13:33:30 | @qgallouedec Hello, can you merge? Or is something else needed from me? | 2,521 |
HuggingFaceDocBuilderDev | 2025-01-08T14:38:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2521). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,521 |
HuggingFaceDocBuilderDev | 2024-12-26T19:07:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2520). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,520 |
qgallouedec | 2025-01-07T19:20:37 | ## Regression test:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
import torch
model_id = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:10%]")
# dataset = load_dataset("trl-internal-testing/zen", "standard_preference", split="train")
training_args = DPOConfig(output_dir="Qwen2-0.5B-DPO-no_pf", max_prompt_length=128, max_completion_length=128, logging_steps=10, padding_free=False)
trainer = DPOTrainer(model=model, args=training_args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```
Is the new `padding_free=False` (`no_pf` in screenshot) equivalent to DPO on current main branch (`main` in screenshot)? -> yes
<img width="2132" alt="Screenshot 2025-01-07 at 20 15 31" src="https://github.com/user-attachments/assets/f2019381-722d-46bb-8258-6a99b75861d8" />
Does `padding_free=True` (`pf` in screenshot) results match `padding_free=False` (`no_pf` in screenshot) results? -> Yes
<img width="2132" alt="Screenshot 2025-01-07 at 20 19 41" src="https://github.com/user-attachments/assets/4920afd1-92e8-415e-80fd-655604ef45bf" />
(note: the screenshots say "Gemma" but it's actually a Qwen model being trained)
| 2,520 |
oliveiraeliel | 2024-12-28T02:19:22 | Hi, I have the same question as you do.
I think that there must be some easy way to simply write a reward function as a `nn.Module`, so we don't have to refactor anything, but I haven't tried it yet.
But I also think that `PPOTrainer` should accept a `custom_get_reward_function` as an optional parameter. In this case, anyone could define their own reward function, which would be a clean solution. | 2,518 |
nityadav | 2024-12-29T19:24:39 | @yananchen1989 Thanks for posting this as I was stuck with a similar issue (but for `OnlineDPOTrainer`). The easiest workaround for me was to subclass the trainer class (`OnlineDPOTrainer`) and override the `training_step` with my custom `get_reward` logic, and rest of the implementation being the same as in the original method. | 2,518 |
August-murr | 2024-12-30T18:28:29 | @yananchen1989 @oliveiraeliel @nityadav @hwhyyds @schmidtj3
This has been a recurring question, so before implementing a solution, I would like to ask you all for examples of when you would need this feature so that we can think of a good solution. | 2,518 |
yananchen1989 | 2024-12-30T18:51:22 | Correct me if I am wrong.
I would like to know the primary motivation for rewriting DPO from the older version to the current unified Trainer-based version. Maybe for better efficiency?
I understand that recent TRL versions want to unify the pipeline in a neater, more organized manner across these different RL methods, where the Trainer is the pivotal module: you kick off `trainer.train()` and you're all set.
For some methods like PPO, a reward module is needed, and it is passed directly into the trainer, while for, say, DPO or SFT, there is no provision for a reward module.
However, this can cause excessive encapsulation, since it is hard to modularize the reward module.
The core reason is that in practical cases the reward module can be of any form, not just a single torch.nn module that scores the whole output. The reward module may be a mixture, may depend on external parameters or the prompt, and, most importantly, may not be able to score the PPO trainer's outputs in batch mode.
Anyway, the flexibility is significantly reduced.
Although, as you know, the current unified pipeline is fine for other methods such as DPO, since they have no reward concerns and the reward module is implicitly expressed within the algorithm.
In my view, there is no need to rigidly transfer these RL methods into a unified training framework.
Please advise. | 2,518 |
August-murr | 2024-12-31T06:52:17 | Ultimately, TRL is a Hugging Face library built on top of Transformers and is part of the Hugging Face ecosystem. If the Trainer does limit flexibility, then Transformers will need to adapt; otherwise, we will have to maintain a much larger and more complex codebase.
We'll come up with a way to add these features and prepare a PR soon! | 2,518 |
August-murr | 2024-12-31T06:52:44 | @qgallouedec, do you want to comment?
| 2,518 |
qgallouedec | 2024-12-31T07:30:41 | Maybe having a `reward_func` arg of type `Callable` is an option.
Alternatively, relaxing the type of `reward_model` to accept any `Callable` is also an option. But given that a custom reward func won't return the same type/shape as a proper `reward_model`, I'm a bit afraid that it would require overcomplicated logic.
In any case, I believe that the best approach is to discuss around a PR if anyone is willing to propose their approach | 2,518 |
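To make the API question concrete, here is a hypothetical sketch of what a callable reward could look like if such a `reward_func` argument were added; the signature (texts in, one score per completion out) is an assumption for illustration, not an existing TRL interface:
```python
import torch


def toy_reward_func(prompts: list[str], completions: list[str]) -> torch.Tensor:
    """Toy reward: favor completions that end with punctuation, lightly penalize length."""
    scores = []
    for completion in completions:
        score = 1.0 if completion.strip().endswith((".", "!", "?")) else 0.0
        score -= 0.001 * len(completion)  # small length penalty
        scores.append(score)
    return torch.tensor(scores)


# Hypothetical usage if PPOTrainer accepted a callable in place of a reward model:
# trainer = PPOTrainer(..., reward_func=toy_reward_func)
```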
yananchen1989 | 2024-12-31T12:57:59 | I hear you, thanks. | 2,518 |
qgallouedec | 2025-01-07T09:22:08 | So we were wrong in https://github.com/huggingface/trl/pull/2433? | 2,516 |
dawidm | 2025-01-07T12:51:04 | > So we were wrong in #2433?
Yes, it looks like it. I'm not sure why it seemed to work. | 2,516 |
dawidm | 2024-12-27T20:40:20 | Update: this approach (PR #2516) introduces another problem, because incrementing `self.state.global_step` by more than 1 requires parameters like `logging_steps` to be divisible by the value of the increment. Solutions for this are:
1. Require `logging_steps` etc. to be divisible by `args.num_mini_batches * args.num_ppo_epochs`.
2. Change convention for what `step` is in RLOO - don't multiply `self.state.max_steps` by `args.num_mini_batches * args.num_ppo_epochs` (making `step` an equivalent of `episode`).
I prefer the second one because it's simpler, but I'd appreciate comments on this. I'll update the PR.
edit: 2. is also consistent with documentation:
> episode: The current global step or episode count in the training process. | 2,515 |
dawidm | 2024-12-29T13:12:02 | Of course there's also a third solution: update `global_step` after the actual optimizer step (inside the minibatch PPO loop), but logging would also have to be moved there in this case. This would keep the most "correct" (I think) convention of steps, but it requires the most changes. | 2,515 |
qgallouedec | 2025-01-08T14:11:41 | Yes, 2. probably makes more sense | 2,515 |
dawidm | 2025-01-08T19:49:36 | Sorry, I made a mistake saying that 2. will make `step` the equivalent of `episode`. Same for my PR #2531. But apart from this, both PRs are still valid and fix what they are supposed to fix.
I've updated the PR for this issue and this is how it looks with both PRs: PPO and RLOO follow the same convention for `steps` (not affected by `num_mini_batches` and `num_ppo_epochs`). `steps` are actually iterations of the main training loop (that is: episodes divided by global batch size). The actual number of episodes is logged correctly. | 2,515 |
SwayamInSync | 2024-12-21T19:58:52 | This was encountered with `SFTTrainer`; if this is a general issue with `Trainer` from transformers, it can be relocated there | 2,514 |
HuggingFaceDocBuilderDev | 2024-12-21T12:12:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2513). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,513 |
HuggingFaceDocBuilderDev | 2024-12-21T00:10:35 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2512). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,512 |
HuggingFaceDocBuilderDev | 2024-12-20T23:42:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2511). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,511 |
HuggingFaceDocBuilderDev | 2024-12-20T21:43:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,510 |
HuggingFaceDocBuilderDev | 2024-12-20T16:10:32 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2509). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,509 |
metric-space | 2024-12-21T21:30:33 | @aivolcano There is a notebook that is related to this. The updated notebook is here: https://github.com/huggingface/trl/blob/main/examples/notebooks/best_of_n.ipynb | 2,508 |
aivolcano | 2024-12-27T08:53:25 | thank u so much
| 2,508 |
HuggingFaceDocBuilderDev | 2024-12-20T11:30:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2507). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,507 |
Mecoli1219 | 2024-12-20T06:46:11 | Wait for https://github.com/linkedin/Liger-Kernel/pull/492 | 2,506 |
HuggingFaceDocBuilderDev | 2025-01-03T16:00:20 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2506). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,506 |
kashif | 2025-01-03T19:54:41 | needs: https://github.com/linkedin/Liger-Kernel/pull/510 | 2,506 |
metric-space | 2024-12-21T21:33:11 | @nguyenhoa-uit I can help out with this as this was code I wrote more than a year ago. Mind you, I'll be very very slow. Let me take a look | 2,505 |
metric-space | 2024-12-23T09:46:31 | @nguyenhoa-uit could you try this bit : https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_config.py#L64 ? | 2,505 |
nguyenhoa-uit | 2024-12-25T02:18:37 |
> @nguyenhoa-uit could you try this bit : https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_config.py#L64 ?
When I used the `resume_from` checkpoint option in the config file, I ran it and hit a bug at https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_trainer.py#L541C20-L541C42
When I bypassed it with a try/except, it did not use the parameters from this checkpoint but the base model.
| 2,505 |
ggbetz | 2024-12-20T15:19:13 | It seems @philschmid has an implementation here: https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/391f19ba06c128a2a290b3bdcb717ad6ff794fd7/training/scripts/run_sft.py#L54-L77 and the question is maybe just what's the cleanest way to integrate this natively in trl? | 2,504 |
anakin87 | 2024-12-21T16:25:29 | This would be great and would prevent users from making mistakes in the manual implementation of this method: for example, [the code for integration with other libraries reported in the official repo](https://github.com/cognitivecomputations/spectrum?tab=readme-ov-file) has some problems. In contrast, the simple implementation in [my tutorial](https://huggingface.co/blog/anakin87/spectrum) and Philipp's code should be correct.
BTW, Spectrum is quite agnostic with respect to training method (SFT, DPO...): the [models by VAGO solutions](https://huggingface.co/VAGOsolutions) show that it works well for DPO too.
LMK what's the best way to proceed and how to help with this integration. | 2,504 |
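To illustrate the mechanism both linked implementations rely on, here is a minimal, library-agnostic sketch: Spectrum emits a list of parameter-name patterns to keep trainable, everything else is frozen, and the model is then handed to the usual trainer. The patterns below are made-up placeholders rather than real Spectrum output.
```python
import re

import torch
from transformers import AutoModelForCausalLM

# Placeholder patterns; a real run would load these from the YAML file generated by Spectrum.
trainable_patterns = [
    r"model\.layers\.0\.self_attn\.q_proj",
    r"model\.layers\.0\.mlp\.down_proj",
]

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct", torch_dtype=torch.bfloat16)

# Freeze everything, then re-enable gradients only for parameters matching a pattern.
for name, param in model.named_parameters():
    param.requires_grad = any(re.search(pattern, name) for pattern in trainable_patterns)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} / {total:,}")
```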
HuggingFaceDocBuilderDev | 2024-12-19T10:50:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,503 |
HuggingFaceDocBuilderDev | 2024-12-19T10:13:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2502). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,502 |
qgallouedec | 2024-12-23T12:38:06 | Can you screenshot a result? | 2,501 |
HuggingFaceDocBuilderDev | 2024-12-23T12:41:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2501). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,501 |
yaricom | 2024-12-23T12:43:48 | Sure, here is a screenshot from my account at Comet.
<img width="2106" alt="Screenshot 2024-12-23 at 14 42 20" src="https://github.com/user-attachments/assets/69629fdb-77de-4a2d-b1d2-087889d96a4c" />
| 2,501 |
Stars
```python
import requests
from datetime import datetime
from datasets import Dataset
import pyarrow as pa
import os


def get_stargazers(owner, repo, token):
    # Initialize the count and the page number
    page = 1
    stargazers = []
    while True:
        # Construct the URL for the stargazers with pagination
        stargazers_url = f"https://api.github.com/repos/{owner}/{repo}/stargazers?page={page}&per_page=100"

        # Send the request to GitHub API with appropriate headers
        headers = {"Accept": "application/vnd.github.v3.star+json", "Authorization": "token " + token}
        response = requests.get(stargazers_url, headers=headers)

        if response.status_code != 200:
            raise Exception(f"Failed to fetch stargazers with status code {response.status_code}: {response.text}")

        stargazers_page = response.json()

        if not stargazers_page:  # Exit the loop if there are no more stargazers to process
            break

        stargazers.extend(stargazers_page)
        page += 1  # Move to the next page

    return stargazers


token = os.environ.get("GITHUB_PAT")
stargazers = get_stargazers("huggingface", "trl", token)
stargazers = {key: [stargazer[key] for stargazer in stargazers] for key in stargazers[0].keys()}
dataset = Dataset.from_dict(stargazers)


def clean(example):
    starred_at = datetime.strptime(example["starred_at"], "%Y-%m-%dT%H:%M:%SZ")
    starred_at = pa.scalar(starred_at, type=pa.timestamp("s", tz="UTC"))
    return {"starred_at": starred_at, "user": example["user"]["login"]}


dataset = dataset.map(clean, remove_columns=dataset.column_names)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="stargazers")
```
Pypi downloads
```python
from datasets import Dataset
from google.cloud import bigquery
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "propane-tree-432413-4c3e2b5e6b3c.json"

# Initialize a BigQuery client
client = bigquery.Client()

# Define your query
query = """
#standardSQL
WITH daily_downloads AS (
  SELECT
    DATE(timestamp) AS day,
    COUNT(*) AS num_downloads
  FROM
    `bigquery-public-data.pypi.file_downloads`
  WHERE
    file.project = 'trl'
    -- Filter for the last 12 months
    AND DATE(timestamp) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 54 MONTH) AND CURRENT_DATE()
  GROUP BY
    day
)
SELECT
  day,
  num_downloads
FROM
  daily_downloads
ORDER BY
  day DESC
"""

# Execute the query
query_job = client.query(query)

# Fetch the results
results = query_job.result()

# Convert the results to a pandas DataFrame and then to a Dataset
df = results.to_dataframe()
dataset = Dataset.from_pandas(df)

dataset.push_to_hub("qgallouedec/trl-metrics", config_name="pypi_downloads")
```
Models tagged
```python
from huggingface_hub import HfApi
from datasets import Dataset

api = HfApi()
models = api.list_models(tags="trl")
dataset_list = [{"id": model.id, "created_at": model.created_at, "likes": model.likes, "downloads": model.downloads, "tags": model.tags} for model in models]
dataset_dict = {key: [d[key] for d in dataset_list] for key in dataset_list[0].keys()}
dataset = Dataset.from_dict(dataset_dict)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="models")
```
Issues and comments
```python
import requests
from datetime import datetime
import os
from datasets import Dataset
from tqdm import tqdm

token = os.environ.get("GITHUB_PAT")


def get_full_response(url, headers, params=None):
    page = 1
    output = []
    params = params or {}
    while True:
        params = {**params, "page": page, "per_page": 100}
        response = requests.get(url, headers=headers, params=params)

        if response.status_code != 200:
            raise Exception(f"Failed to fetch issues: {response.text}")

        batch = response.json()
        if len(batch) == 0:
            break

        output.extend(batch)
        page += 1

    return output


# GitHub API URL for issues (closed and open)
issues_url = f"https://api.github.com/repos/huggingface/trl/issues"

# Set up headers for authentication
headers = {"Authorization": f"token {token}", "Accept": "application/vnd.github.v3+json"}

# Make the request
issues = get_full_response(issues_url, headers, params={"state": "all"})

issues_dataset_dict = {
    "number": [],
    "title": [],
    "user": [],
    "state": [],
    "created_at": [],
    "closed_at": [],
    "comments_count": [],
}
comments_dataset_dict = {
    "user": [],
    "created_at": [],
    "body": [],
    "issue_number": [],
}

for issue in tqdm(issues):
    # Extract relevant information
    issue_number = issue["number"]
    title = issue["title"]
    created_at = datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ")
    comments_count = issue["comments"]

    comments_url = issue["comments_url"]
    comments = get_full_response(comments_url, headers=headers)
    for comment in comments:
        comments_dataset_dict["user"].append(comment["user"]["login"])
        comments_dataset_dict["created_at"].append(datetime.strptime(comment["created_at"], "%Y-%m-%dT%H:%M:%SZ"))
        comments_dataset_dict["body"].append(comment["body"])
        comments_dataset_dict["issue_number"].append(issue_number)

    issues_dataset_dict["number"].append(issue_number)
    issues_dataset_dict["title"].append(title)
    issues_dataset_dict["user"].append(issue["user"]["login"])
    issues_dataset_dict["state"].append(issue["state"])
    issues_dataset_dict["created_at"].append(datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ"))
    issues_dataset_dict["closed_at"].append(datetime.strptime(issue["closed_at"], "%Y-%m-%dT%H:%M:%SZ") if issue["closed_at"] else None)
    issues_dataset_dict["comments_count"].append(comments_count)

issues_dataset = Dataset.from_dict(issues_dataset_dict)
comments_dataset = Dataset.from_dict(comments_dataset_dict)

issues_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issues")
comments_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issue_comments")
```
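Each script above pushes one configuration to the same dataset repository; a quick sketch of reading them back (config names taken from the `push_to_hub` calls):
```python
from datasets import load_dataset

stargazers = load_dataset("qgallouedec/trl-metrics", "stargazers", split="train")
downloads = load_dataset("qgallouedec/trl-metrics", "pypi_downloads", split="train")
models = load_dataset("qgallouedec/trl-metrics", "models", split="train")
issues = load_dataset("qgallouedec/trl-metrics", "issues", split="train")
comments = load_dataset("qgallouedec/trl-metrics", "issue_comments", split="train")

print(comments)  # user / created_at / body / issue_number, as shown in the preview above
```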