No more text2text
#1590
base: main
Conversation
Heads up, this is pure vibe-coding _pre-LLM_, i.e. I'm not sure what I'm doing but I'm still doing it, manually (though I tried to take inspiration from #457). The goal is to address https://discuss.huggingface.co/t/no-0-models-returned-by-text2text-search-filter/161546 following huggingface-internal/moon-landing#14258
just dropping this here, anyone feel free to push on top of it x)
"Looks ok"
@@ -1,7 +1,6 @@
 import type { TaskDataCustom } from "../index.js";

 const taskData: TaskDataCustom = {
-	canonicalId: "text2text-generation",
shouldn't we replace it with `canonicalId: "text-generation"`?
well no: since those examples were created, we added dedicated pipelines for both summarization and translation, IIUC (cc @SBrandeis)
or is that not how it works? ^^'
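For reference, a minimal sketch of what the updated examples could look like with the dedicated pipelines. This assumes the standard `transformers` `pipeline` API; the default models and exact outputs are illustrative, not prescribed by this PR.

```python
from transformers import pipeline

# Dedicated summarization pipeline, instead of routing a "summarize: ..." prompt
# through text2text-generation.
summarizer = pipeline("summarization")
summarizer("The tower is 324 metres tall, about the same height as an 81-storey building.")
# [{'summary_text': '...'}]

# Dedicated translation pipeline (English to French), instead of a
# "translate English to French: ..." prompt.
translator = pipeline("translation_en_to_fr")
translator("I'm very happy")
# [{'translation_text': 'Je suis très heureux'}]
```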
@@ -1,7 +1,6 @@
 import type { TaskDataCustom } from "../index.js";

 const taskData: TaskDataCustom = {
-	canonicalId: "text2text-generation",
same here?
Nice!
nice!
[Text-to-Text generation models](https://huggingface.co/models?pipeline_tag=text2text-generation&sort=downloads) have a separate pipeline called `text2text-generation`. This pipeline takes an input sentence that includes the task to perform and returns the output of that task.

```python
from transformers import pipeline

text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
[{'generated_text': 'the answer to life, the universe and everything'}]

text2text_generator("translate from English to French: I'm very happy")
[{'generated_text': 'Je suis très heureux'}]
```
This still exists in transformers, so I'm not entirely sure we should remove it
I'd suggest we keep it here but add a "historical note" comment. cc @LysandreJik @ArthurZucker for viz
{
	type: "text2text-generation",
	name: "Text2Text Generation",
},
or we could just remove it, but fine to keep it as a subtask i guess
no strong opinion, just copied what had been done for conversational in #457
ok to keep