Issue on grad #7
Comments
Just upgraded with the original implementation; upgrade with pip and try again!
Experiencing the same issue using the current main version.
I run into the same issue in Megatron when training a distributed model...
File "/home/phu/Desktop/gatedtabtransformer/sophia_custom.py", line 46, in step hessian_estimate = self.hutchinson(p, grad) File "/home/phu/Desktop/gatedtabtransformer/sophia_custom.py", line 61, in hutchinson hessian_vector_product = torch.autograd.grad(grad.dot(u), p, retain_graph=True)[0] File "/home/phu/miniconda3/envs/ner-py38-conda-env/lib/python3.8/site-packages/torch/autograd/__init__.py", line 303, in grad return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I also tried to use torch.sum(grad * u), but it did not work!
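The error message says that `grad` has no `grad_fn`, which typically happens when the first-order gradient was computed without `create_graph=True` (for example via a plain `loss.backward()`), so it cannot be differentiated again for the Hessian-vector product. Below is a minimal sketch of a Hutchinson-style Hessian diagonal estimate that avoids this; the names `loss` and `p` are placeholders for the training loss and a single parameter, not identifiers from `sophia_custom.py`:

```python
import torch

def hutchinson_hessian_diag(loss, p):
    # First backward pass: create_graph=True keeps the graph so that the
    # gradient itself is differentiable (it gets a grad_fn).
    grad = torch.autograd.grad(loss, p, create_graph=True)[0]
    # Rademacher probe vector u with entries in {-1, +1}.
    u = torch.randint_like(p, 2) * 2 - 1
    # Second backward pass: Hessian-vector product H u.
    hvp = torch.autograd.grad(torch.sum(grad * u), p, retain_graph=True)[0]
    # Hutchinson estimate of the Hessian diagonal: u * (H u).
    return u * hvp
```

In other words, swapping `grad.dot(u)` for `torch.sum(grad * u)` does not help here, because the problem is how `grad` was produced, not how the dot product is written.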