Many fine-tunes use a frozen encoder and take feature maps at different "heights" (encoder depths) in addition to the encoder's patch embeddings. These are the same regardless of the downstream application.
We currently have the option to save the average embeddings (or all patch embeddings).
I think it would make sense to have an option to save feature maps at all heights alongside patch embeddings.
The idea is the same as with the mean embeddings: if we have a large set of files that are used for several downstream applications, it seems to make sense to save both time and space by creating and saving these once.
Even if these are all feature maps plus all patch embeddings, they are still an extremely small percentage of the size of the input images.
Does this make sense?
Can we add an inference option to save all of these?
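To make the request concrete, here is a rough sketch of what such an option could look like. The `encoder` object, the `encoder.blocks` attribute, the block indices, and the output layout are illustrative assumptions, not the project's actual API; the point is only that intermediate feature maps can be captured with forward hooks during the same pass that already produces the patch embeddings.

```python
# Rough sketch only -- "encoder", "encoder.blocks", the block indices and the
# output layout are illustrative assumptions, not the project's actual API.
import numpy as np
import torch

def embed_with_feature_maps(encoder, batch, block_indices=(3, 6, 9, 11)):
    """Run a frozen encoder once and capture feature maps at several depths."""
    feature_maps = {}
    hooks = []

    def make_hook(idx):
        def hook(module, inputs, output):
            feature_maps[f"block_{idx}"] = output.detach().cpu()
        return hook

    # Assumes the encoder exposes its transformer blocks as `encoder.blocks`.
    for idx in block_indices:
        hooks.append(encoder.blocks[idx].register_forward_hook(make_hook(idx)))

    with torch.no_grad():
        patch_embeddings = encoder(batch)  # final patch embeddings

    for h in hooks:
        h.remove()

    return patch_embeddings.detach().cpu(), feature_maps

# Everything could then be written out once and reused by several heads, e.g.:
# patch, maps = embed_with_feature_maps(encoder, batch)
# np.savez_compressed(
#     "chip_0001_embeddings.npz",
#     patch_embeddings=patch.numpy(),
#     **{name: fmap.numpy() for name, fmap in maps.items()},
# )
```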
Offline, @yellowcap told me that this would mean several times the size of the input, which means it's more efficient to re-create them when needed with the encoder.
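For a rough sense of scale, here is a back-of-the-envelope estimate with purely illustrative numbers (the chip size, band count, patch size, embedding dimension, and depth are assumptions, not the actual model configuration). With these values the stored feature maps come out to several times the input chip, which matches the point above.

```python
# Back-of-the-envelope size estimate; all numbers are illustrative assumptions.
chip = 256 * 256 * 4 * 4       # 256x256 chip, 4 bands, float32 -> ~1.0 MB
patches = (256 // 16) ** 2     # 16x16 patches -> 256 tokens
per_layer = patches * 768 * 4  # 768-dim tokens, float32 -> ~0.79 MB per depth
all_layers = per_layer * 12    # feature maps kept at 12 depths -> ~9.4 MB

print(f"input chip:    {chip / 1e6:.1f} MB")
print(f"one layer:     {per_layer / 1e6:.2f} MB")
print(f"all 12 layers: {all_layers / 1e6:.1f} MB (~{all_layers / chip:.0f}x the input)")
```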