
Fine-tuning the XLM-R on the dev set of each language #12

Open

nooralahzadeh opened this issue Apr 10, 2020 · 0 comments

Comments

@nooralahzadeh

Hi,
Have you tried fine-tuning the XLM-R model (after pre-training it on English) on the dev set of each other language (the few-shot setting) and then evaluating on that language's test set?
The strange thing is that XLM-R's performance is lower in the few-shot setting than in the zero-shot setting.

Thanks
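
For reference, the few-shot setup I mean looks roughly like the sketch below. This is a minimal illustration using the HuggingFace `transformers` Trainer on an XNLI-style task, not this repo's actual training code; `english_finetuned_ckpt` is a placeholder for a checkpoint already fine-tuned on English.

```python
# Minimal few-shot sketch (assumptions: an XNLI-style 3-way classification
# task, HuggingFace `transformers`/`datasets`; `english_finetuned_ckpt`
# is a placeholder path, German ("de") is just an example target language).
from datasets import load_dataset
from transformers import (
    Trainer,
    TrainingArguments,
    XLMRobertaForSequenceClassification,
    XLMRobertaTokenizer,
)

# Start from the model already fine-tuned on the English training data.
ckpt = "english_finetuned_ckpt"  # placeholder
tokenizer = XLMRobertaTokenizer.from_pretrained(ckpt)
model = XLMRobertaForSequenceClassification.from_pretrained(ckpt, num_labels=3)

# Target-language data: dev set for the few-shot step, test set for eval.
data = load_dataset("xnli", "de")

def encode(batch):
    return tokenizer(
        batch["premise"], batch["hypothesis"],
        truncation=True, padding="max_length", max_length=128,
    )

dev = data["validation"].map(encode, batched=True)
test = data["test"].map(encode, batched=True)

args = TrainingArguments(
    output_dir="fewshot_de",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=2e-5,  # kept small: the dev set is tiny and overfits fast
)

# Few-shot step: continue fine-tuning on the target language's dev set,
# then evaluate on the same language's test set.
trainer = Trainer(model=model, args=args, train_dataset=dev)
trainer.train()
print(trainer.evaluate(eval_dataset=test))
```

(In a setup like this the learning rate and epoch count matter a lot, since the dev set is small; I kept both low in the sketch.)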
