I met the No.3 error: the loss increases sharply during training. Do you have any solution to fix it?
Refer to my code; it might help you.
1. In `grid_encode.py`, `log2` in Python is not exactly the same as `std::log2` in C++. This can lead to a mismatch in `scale` in some cases.
2. `jt.nn.softplus` is not the same as the softplus in PyTorch. I don't know if this will cause any problems.
3. For `alpha` in the SDF, JNeRF uses `safe_clip(0.0, 1.0)`. I think `clamp_(0.0, 1.0)` should be used.
4. The `cumprod` function in Jittor is unsafe! Have a look at the PyTorch implementation or mine in Jeuralangelo.
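To illustrate point No.1: in Instant-NGP-style grid encoding, the per-level growth factor is derived from a logarithm, and a last-ulp difference between Python's `math.log2` and C++'s `std::log2` can flip a later `floor()` on the grid resolution. This is a minimal sketch of that computation (the function name and default resolutions here are my own illustration, not code from the repo):

```python
import math

def per_level_scale(base_res=16, max_res=2048, num_levels=16):
    # Instant-NGP growth factor: b = 2 ** (log2(max_res / base_res) / (L - 1)).
    # The resolution at level l is then floor(base_res * b ** l); if the
    # Python-side log2 and the C++/CUDA-side std::log2 disagree in the last
    # bit, that floor() can land on a different integer at level boundaries.
    return 2.0 ** (math.log2(max_res / base_res) / (num_levels - 1))

def level_resolution(level, base_res=16, max_res=2048, num_levels=16):
    # floor() here is where a tiny scale mismatch becomes an off-by-one.
    return math.floor(base_res * per_level_scale(base_res, max_res, num_levels) ** level)
```

Computing the scale once on one side and passing the result to the other (rather than recomputing the logarithm in both languages) avoids the mismatch entirely.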
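On point No.2: PyTorch's softplus is not just `log(1 + exp(x))`; it takes `beta` and `threshold` arguments and reverts to the identity when `beta * x > threshold` to avoid overflow. A NumPy sketch of those semantics, for comparison against `jt.nn.softplus` (assuming PyTorch's documented defaults):

```python
import numpy as np

def softplus(x, beta=1.0, threshold=20.0):
    # PyTorch-style softplus: (1/beta) * log(1 + exp(beta * x)),
    # but returns x unchanged where beta * x > threshold so that
    # exp() never overflows for large inputs.
    x = np.asarray(x, dtype=np.float64)
    bx = beta * x
    stable = np.log1p(np.exp(np.minimum(bx, threshold))) / beta
    return np.where(bx > threshold, x, stable)
```

If Jittor's version lacks the linear-region cutoff, large SDF-derived activations could overflow to `inf` where PyTorch stays finite, which is one plausible way this difference would surface during training.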
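On point No.4: in volume rendering, `cumprod` computes the transmittance T_i = prod over j &lt; i of (1 - alpha_j), and an exact zero anywhere in the product kills gradients for every later sample. A common "safe" pattern (a sketch of the idea, not the actual Jeuralangelo code) clamps away from zero and shifts for an exclusive product:

```python
import numpy as np

def exclusive_cumprod(alpha, eps=1e-10):
    # Transmittance T_i = prod_{j < i} (1 - alpha_j).
    # Clamping to [eps, 1] keeps the running product strictly positive,
    # so log/grad computations downstream stay finite.
    one_minus = np.clip(1.0 - np.asarray(alpha, dtype=np.float64), eps, 1.0)
    # Prepend 1 and drop the last element -> exclusive cumulative product.
    shifted = np.concatenate([[1.0], one_minus[:-1]])
    return np.cumprod(shifted)
```

With `alpha = [0.5, 0.5, 0.5]` this yields transmittances `[1.0, 0.5, 0.25]`; without the clamp, an `alpha` of exactly 1 would zero out everything after it.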