First of all, thanks for open-sourcing this great project! I'm trying to replicate the evaluation setup described in the Light-A-Video paper. Specifically, I want to reproduce the quantitative metrics (FID, CLIP Score, optical-flow-based motion preservation, etc.) and use the same video test dataset setup described in the paper.
I’ve checked the README and existing issues but haven’t found a clear guide for replicating the exact evaluation protocol (metrics, dataset splits, frame sampling, etc.).
Any guidance or example scripts would be greatly appreciated! Thanks in advance. @YujieOuO
In the upcoming version of the paper, we will provide more quantitative comparisons and descriptions. The specific evaluation code will not be included in this repository, as it is primarily focused on applications. Feel free to contact us via email for further discussion.
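
For anyone landing here while the official evaluation code is unavailable: below is a minimal sketch of a frame-wise CLIP Score computation, assuming the Hugging Face `transformers` CLIP implementation with the `openai/clip-vit-base-patch32` checkpoint. The model choice, frame sampling, and averaging strategy are illustrative assumptions and not the protocol used in the paper.

```python
# Minimal, assumption-laden sketch of a frame-wise CLIP Score.
# NOT the official Light-A-Video evaluation protocol; model name,
# frame sampling, and averaging strategy are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_score(frames: list[Image.Image], prompt: str) -> float:
    """Average cosine similarity between each video frame and the text prompt."""
    inputs = processor(
        text=[prompt], images=frames, return_tensors="pt", padding=True
    ).to(device)
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    # Normalize, then take one similarity per frame and average over the clip.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    return (image_emb @ text_emb.T).squeeze(-1).mean().item()
```

Any per-frame sampling rate or score scaling (e.g. multiplying by 100) would need to match the paper's protocol, which is why contacting the authors for the exact setup is still the right path.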