
Some questions with baselines #7

Open
TaoCesc opened this issue Apr 17, 2022 · 3 comments

Comments

@TaoCesc

TaoCesc commented Apr 17, 2022

Your work is very good and effective, but I have some questions about the baseline approach. I tried different hyperparameters to fine-tune BERT with supervised or unsupervised contrastive learning before classification, but I have never been able to do better than plain cross-entropy. What might I have overlooked? Many papers report that contrastive learning improves classification results, yet here I always observe the opposite. Could you share the hyperparameters you used when you ran the comparison?
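For reference, the supervised contrastive objective being discussed (SupCon, Khosla et al.) can be sketched in a few lines. This is a framework-agnostic NumPy sketch, not the repository's actual implementation; the temperature of 0.07 is an assumed default, and a real fine-tuning run would compute this over BERT embeddings in PyTorch.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive (SupCon) loss over one batch.

    features: (N, D) embedding array; labels: (N,) integer class labels.
    Anchors with no positive in the batch are skipped.
    """
    # L2-normalize so the dot product is a cosine similarity
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature            # (N, N) similarity logits
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim[self_mask] = -np.inf                       # exclude self-similarity
    # positives share a label with the anchor, excluding the anchor itself
    pos_mask = (labels[None, :] == labels[:, None]) & ~self_mask
    # numerically stable row-wise log-softmax
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    # mean negative log-probability over each anchor's positives
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()
```

One common pitfall this sketch makes visible: with small batch sizes, many anchors have no in-batch positive and contribute nothing, which can make the contrastive signal much weaker than cross-entropy and may explain results below the cross-entropy baseline.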

@wangqian97

Hello, I tried to use the author's code, but the accuracy stays at 50%. Could you tell me how you did the training?

@TaoCesc
Author

TaoCesc commented May 26, 2022

Hello, I tried to use the author's code, but the accuracy stays at 50%. Could you tell me how you did the training?

Hi, when I reproduced the experiment I did not modify the other parameters. I basically used the parameters given by the author and only changed the batch_size. I was able to reproduce the author's results and also got good results on my own dataset, so you may want to re-check your parameters or download a fresh copy of the code and try again.

@DominicSlw

Your work is very good and effective, but I have some questions about the baseline approach. I tried different hyperparameters to fine-tune BERT with supervised or unsupervised contrastive learning before classification, but I have never been able to do better than plain cross-entropy. What might I have overlooked? Many papers report that contrastive learning improves classification results, yet here I always observe the opposite. Could you share the hyperparameters you used when you ran the comparison?

Hi, could you explicitly share the hyperparameters you used when fine-tuning the baseline with supervised contrastive learning? I got the same result as you. Thank you!
