Performance gap on NERF synthetic dataset #4
Comments
Hi @walsvid, this is indeed the case, thanks for pointing it out! Although the renders look "good", the PSNR is not as high as the values reported in Instant-NGP. Note: you should look at the test-set PSNR (not the training PSNR). I have just pushed some code to print these values, so pull the latest code. Here's an example with the Lego dataset (test-set PSNR shown in the attached video.mp4). To be honest, I am not sure why this is the case. We get different values for …
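For anyone reproducing the comparison, the test-set PSNR is just a function of the mean squared error between rendered and ground-truth images. A minimal sketch (assuming float images in [0, 1]; the function name is illustrative, not from the repo):

```python
import numpy as np

def psnr(rendered: np.ndarray, ground_truth: np.ndarray) -> float:
    """PSNR in dB for images with pixel values in [0, 1] (peak signal = 1)."""
    mse = np.mean((rendered - ground_truth) ** 2)
    return float(-10.0 * np.log10(mse))

# A render that is off by 0.1 everywhere has MSE = 0.01, i.e. 20 dB.
gt = np.zeros((4, 4, 3))
render = np.full((4, 4, 3), 0.1)
print(round(psnr(render, gt), 3))  # → 20.0
```

Comparing this number on the held-out test split (rather than on training views) is what the updated logging code prints.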
Could a possible reason be that instant-ngp loads all of the train/val/test data for training (see here) on the nerf_synthetic dataset?
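If that were the case, it would be easy to reproduce on the data-loading side: nerf_synthetic ships one `transforms_<split>.json` per split, and training on the union just means concatenating their `frames` lists. A sketch (hypothetical helper, not code from either repo):

```python
import json
from pathlib import Path

def load_frames(scene_dir: str, splits=("train", "val", "test")) -> list:
    """Concatenate camera frames from each requested split's transforms file."""
    frames = []
    for split in splits:
        with open(Path(scene_dir) / f"transforms_{split}.json") as f:
            frames.extend(json.load(f)["frames"])
    return frames

# Training like this repo:            load_frames("lego", splits=("train",))
# Training on all splits (the guess): load_frames("lego")
```

(As noted further down in this thread, the instant-ngp author later clarified that only the train split is used, so this guess turned out to be wrong.)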
It seems that if we increase the number of iterations the TV loss is applied for, the PSNR rises (see run_nerf.py, line 876).
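For context, total-variation regularization penalizes differences between neighboring grid entries, so applying it for more iterations smooths the learned features for longer. A minimal sketch on a dense 3D feature grid (illustrative only; the repo applies a differently structured loss to hash-table embeddings):

```python
import numpy as np

def tv_loss_3d(grid: np.ndarray) -> float:
    """Mean squared difference between neighboring cells along each spatial axis.

    grid: (X, Y, Z, F) array of per-cell feature vectors.
    """
    dx = grid[1:, :, :, :] - grid[:-1, :, :, :]
    dy = grid[:, 1:, :, :] - grid[:, :-1, :, :]
    dz = grid[:, :, 1:, :] - grid[:, :, :-1, :]
    return float((dx ** 2).mean() + (dy ** 2).mean() + (dz ** 2).mean())

# A constant grid has zero TV loss; spatial variation increases it.
assert tv_loss_3d(np.ones((4, 4, 4, 2))) == 0.0
```

In training this term would be weighted and added to the photometric loss for however many iterations it is enabled.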
So, can someone provide a benchmark of this repo vs. the official instant-ngp repo? It would be very useful!
Hi @yashbhalgat, may I know the expected difference in training speed between this repo, pytorch-nerf, and instant-ngp?
@Feynman1999 Regarding benchmarks vs. Instant-NGP: I will try to get this ready, but it won't be any time soon, as I am currently caught up with other projects. If you or someone else can work on this, feel free to open a pull request. :)

@shreyk25, see the attached convergence video (Chair.Convergence.mp4). Hope this helps. :)
Have you tried this TV loss? How does it affect PSNR?
The author just clarified to me that they only use the train split for training: kwea123/ngp_pl#1
Thanks for the great work @yashbhalgat! Do you now have any idea about the performance (numerical-results) gap between instant-ngp and this implementation? I suspect this repo may be missing some essential implementation details, but I am not able to find them...
@zParquet I am wondering the same and have been searching for quite a while now, because the gap is quite large. Two differences I found so far are:
Further ideas are welcome :D This work seems to reach at least PSNR 34; however, they are using their own CUDA version of the encoding.
After reading appendix E.3, it seems a huge benefit comes from a (very) large number of rays in each batch, at the cost of fewer samples per ray. However, these fewer samples only seem feasible because of their additional nested occupancy grids.
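The intuition behind that trade-off: an occupancy grid lets the renderer skip samples that fall in empty space, so each ray needs far fewer network evaluations and the batch can hold many more rays for the same compute budget. A toy sketch of sample culling against a binary grid (hypothetical shapes; instant-ngp's nested multi-level grids and bitfield encoding are more involved):

```python
import numpy as np

def cull_samples(points: np.ndarray, occupancy: np.ndarray) -> np.ndarray:
    """Keep only the sample points whose grid cell is marked occupied.

    points:    (N, 3) sample coordinates in [0, 1)^3.
    occupancy: (R, R, R) boolean grid over the unit cube.
    """
    res = occupancy.shape[0]
    idx = np.clip((points * res).astype(int), 0, res - 1)
    keep = occupancy[idx[:, 0], idx[:, 1], idx[:, 2]]
    return points[keep]

# With only one octant occupied, roughly 7/8 of uniform samples are skipped,
# freeing that compute budget for more rays per batch.
occ = np.zeros((2, 2, 2), dtype=bool)
occ[0, 0, 0] = True  # only the low-corner octant contains geometry
kept = cull_samples(np.random.rand(1000, 3), occ)
```

Only the surviving points would then be fed through the hash encoding and MLP.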
EDIT: The author of the original paper states something similar: NVlabs/instant-ngp#118
As mentioned in another issue (#2), using the default config like nerf-pytorch does not achieve performance comparable to Instant-NGP.