David Berenstein (davidberenstein1957)

AI & ML interests

Everything data

Recent Activity

- updated a dataset about 21 hours ago: davidberenstein1957/my-distiset-df9db7bc
- liked a Space about 21 hours ago: argilla/synthetic-data-generator
- updated a Space about 21 hours ago: argilla/synthetic-data-generator

Organizations

Hugging Face, SomosNLP, Tools, Webhooks Explorers (BETA), Argilla, Blog-explorers, Argilla Explorers, distilabel-internal-testing, Data Is Better Together, Social Post Explorers, argilla-internal-testing, Dataset Viber, Argilla Warehouse, Dataset Tools, Uplimit, Data Is Better Together Contributor, FeeL (Feedback Loop)

davidberenstein1957's activity

upvoted an article about 22 hours ago: Beyond Image Preferences - Rich Human Feedback for Text-to-Image Generation
replied to davanstrien's post 1 day ago:

Open collaboration is key for democratising AI.

reacted to davanstrien's post with 🤝❤️🚀 1 day ago:
The data-is-better-together/fineweb-c dataset is growing!

This week, a few more languages have reached 1,000 annotations for the educational quality of data from HuggingFaceFW/fineweb-2.

Why should you care?

The quality of pre-training data can have a big impact on the performance of downstream language models trained on that data (HuggingFaceFW/blogpost-fineweb-v1).

Being able to filter by educational quality is one way to improve the quality of the data you use for training an LLM. Very importantly, this approach can also reduce the amount of data needed for pretraining.
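As a minimal stdlib-only sketch of that filtering idea (the field names, score scale, and toy documents below are invented; real fineweb-c rows are loaded via the Hugging Face `datasets` library and carry per-annotator quality labels):

```python
from collections import Counter

def majority_label(annotations):
    """Return the most common educational-quality score among annotators."""
    return Counter(annotations).most_common(1)[0][0]

def filter_by_quality(docs, min_score=2):
    """Keep documents whose majority quality score meets the threshold."""
    return [d for d in docs if majority_label(d["annotations"]) >= min_score]

# Toy corpus; "annotations" stands in for per-annotator quality scores.
docs = [
    {"text": "Photosynthesis converts light into chemical energy.",
     "annotations": [3, 3, 2]},
    {"text": "BUY CHEAP WATCHES NOW!!!",
     "annotations": [0, 0, 1]},
    {"text": "The mitochondria is the powerhouse of the cell.",
     "annotations": [2, 2, 3]},
]

kept = filter_by_quality(docs, min_score=2)
# The spam-like document is dropped; the educational ones survive.
print([d["text"] for d in kept])
```

The same threshold-on-majority-vote logic would be expressed as a `Dataset.filter` call when working with the actual dataset.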

Why not use an LLM?

LLMs can be used to annotate the educational quality of a subset of the data. Those labels can then be used to train a smaller encoder-only model to label the full dataset. However, this may not work well for languages other than English. This is where fineweb-c (community) comes in.
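A toy illustration of that two-stage pipeline, using a stdlib-only bag-of-words nearest-centroid classifier as a stand-in for the smaller encoder model (all texts and labels below are invented; a real setup would fine-tune an encoder model on the LLM-labelled subset):

```python
import math
from collections import Counter

def bow(text):
    """Lowercased bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train_centroids(labelled):
    """Sum the bag-of-words vectors per label to form one centroid each."""
    centroids = {}
    for text, label in labelled:
        centroids.setdefault(label, Counter()).update(bow(text))
    return centroids

def classify(text, centroids):
    """Assign the label whose centroid is closest in cosine similarity."""
    vec = bow(text)
    return max(centroids, key=lambda lbl: cosine(vec, centroids[lbl]))

# Stage 1: a small subset labelled by an LLM (labels here are made up).
seed = [
    ("the cell membrane regulates transport of molecules", "educational"),
    ("enzymes lower the activation energy of reactions", "educational"),
    ("click here to win a free prize today", "junk"),
    ("free free free prize click now", "junk"),
]
centroids = train_centroids(seed)

# Stage 2: the cheap classifier labels the rest of the corpus.
print(classify("the reactions in a cell need enzymes", centroids))
```

The point of the sketch is the shape of the pipeline, not the model: an expensive annotator labels a small seed set, and a much cheaper model generalises those labels to the full corpus.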

The community is annotating the educational quality of fineweb-2 data. Currently, 114 languages have some annotations. These annotations will enable a number of things:

- Evaluating whether an LLM can label the educational quality of texts in that language well
- Training quality classifiers directly
- Discovering other rules and heuristics for refining fineweb-2 further for different languages

This week, the following languages were completed:

Swedish thanks to: @Lauler @AntonVic @ohallstrom @bjarlestam @menbom @Ekgren @apsod

Ukrainian thanks to: @hannayukhymenko @robinhad @realPivo @RabotiahovDmytro @reciprocate

Assamese thanks to: @moyoor97 @Arpanjyoti @nawaf-helmi123 @pahigogoi1 @aelhence @kishorekashyap

Want to learn more? See https://huggingface.co/blog/davanstrien/fineweb2-community

Contribute yourself here: data-is-better-together/fineweb-c