Hi @suruoxi, thanks for your interest in our work.
SimVQ has a similar motivation to IBQ, but we adopt different methods to optimize all codebook embeddings. SimVQ optimizes a linear transformation $W \in \mathbb{R}^{D \times D}$, while our IBQ optimizes the codes themselves ($C \in \mathbb{R}^{K \times D}$), where $K \gg D$.
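As a rough illustration (a hypothetical NumPy sketch, not code from either paper), the two approaches differ in which parameters are optimized:

```python
import numpy as np

# Hypothetical sketch contrasting the two parameterizations described above.
rng = np.random.default_rng(0)
K, D = 8192, 256  # K codes of dimension D, with K >> D

# SimVQ-style: a fixed base codebook plus a learnable linear map W (D x D);
# the effective codebook is C_base @ W, so only D*D parameters are optimized.
C_base = rng.standard_normal((K, D))
W = np.eye(D)                      # learnable during training; identity here
effective_codebook = C_base @ W    # shape (K, D)

# IBQ-style: the K x D code embeddings themselves are the learnable
# parameters, so all K*D entries receive gradient updates.
C = rng.standard_normal((K, D))    # learnable during training

# The optimized parameter counts differ by a factor of K/D.
print(W.size, C.size)  # 65536 vs. 2097152
```

Under these assumed shapes, SimVQ updates $D^2$ parameters while IBQ updates $K \times D$; both still yield a $K \times D$ effective codebook for quantization.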
SimVQ cannot be compared with IBQ directly, because SimVQ is trained on ImageNet 128x128 while IBQ is trained on ImageNet 256x256. Moreover, SimVQ only reports reconstruction performance and does not provide generation results. Still, we can provide some results from the papers for reference.
Have you compared IBQ to SimVQ? I think SimVQ has a similar motivation of "optimizing all codebook embeddings".