Unable to replicate Results #4
Apologies for the delayed response. We used the official CLIP models released by OpenAI (see the beginning of models/clip/clip.py for links to the weights). I suspect the weights released by Hugging Face differ from OpenAI's, and that is most likely the main issue. If the problem still persists, please elaborate on which particular results do not match the ones in the paper.
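In case it helps, here is a minimal sketch of loading the official weights (this assumes the `clip` package from https://github.com/openai/CLIP is installed; the exact loading code this repo uses lives in models/clip/clip.py):

```python
# Sketch: load the official OpenAI CLIP weights rather than the
# Hugging Face checkpoint. Requires the `clip` package from
# https://github.com/openai/CLIP.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# clip.load downloads the official weights on first use and returns the
# model together with its matching image preprocessing pipeline.
model, preprocess = clip.load("ViT-L/14", device=device)
model.eval()
```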
Have you managed to reproduce the results yet?
Hello,
I am currently trying to test your model's weights on some data, but I am encountering some issues. I have set up the final linear layer as per your design and am using the CLIP ViT-L/14 weights from Hugging Face. Despite following the methodology outlined in your paper and implementing it in my own model, the results do not match the expected outcomes.
Here is a brief outline of my current setup:
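This is a minimal sketch of what I mean, assuming the Hugging Face `transformers` CLIP vision model; the class and attribute names here are my own, illustrative choices:

```python
# Illustrative sketch: a frozen CLIP ViT-L/14 vision encoder from
# Hugging Face with a single trainable linear head for binary
# classification.
import torch
import torch.nn as nn
from transformers import CLIPVisionModel

class CLIPBinaryClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = CLIPVisionModel.from_pretrained(
            "openai/clip-vit-large-patch14"
        )
        # Freeze all encoder weights; only the linear head is trained.
        for param in self.encoder.parameters():
            param.requires_grad = False
        # Map the pooled image features (hidden size 1024 for ViT-L/14)
        # to a single logit.
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, pixel_values):
        with torch.no_grad():
            features = self.encoder(pixel_values=pixel_values).pooler_output
        return self.head(features)

model = CLIPBinaryClassifier()
criterion = nn.BCEWithLogitsLoss()
```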
As can be seen, I've frozen all the weights of the encoder. When running the code, the initial binary cross-entropy (BCE) loss is surprisingly high, starting at around 5.
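As a rough sanity check (my own back-of-the-envelope reasoning, not from the paper), a freshly initialized head should give a BCE loss near ln 2 ≈ 0.69, which is why 5 seems far too high:

```python
import math
import torch
import torch.nn as nn

# With an uninformed head, sigmoid outputs sit around 0.5, so the BCE
# loss starts at exactly ln 2 regardless of the label distribution.
logits = torch.zeros(8, 1)                      # "uninformed" predictions
labels = torch.randint(0, 2, (8, 1)).float()    # arbitrary binary labels
loss = nn.BCEWithLogitsLoss()(logits, labels)
print(loss.item(), math.log(2))                 # both ~0.693
```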
I'd appreciate any guidance or suggestions on the possible causes of this issue and how to resolve it.
Thank you in advance
Edit: This is not the working code, just a simple implementation idea.