Cooperative Vector API #384
base: master
Conversation
Force-pushed from 7c65d4b to cd67909
Force-pushed from 89499a0 to 0247195
Review part 1 (three big files left to review)
a, b = t(1), t(2)
dr.enable_grad(a, b)
z = nn.CoopVec(a, b)  # pack
assert dr.grad_enabled(z)
Can also test that dr.enable_grad(z) raises as expected.
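A minimal sketch of such a test, assuming that calling dr.enable_grad on an already-packed CoopVec raises (the exact exception type and the drjit.auto.ad import are assumptions here):

import pytest
import drjit as dr
import drjit.nn as nn
from drjit.auto.ad import Float16 as t  # stand-in for the parametrized type 't'

def test_enable_grad_on_packed_coopvec_raises():
    a, b = t(1), t(2)
    z = nn.CoopVec(a, b)  # pack without enabling gradients first
    # Assumption: gradients must be enabled on the components before packing,
    # so enabling them on the packed vector should raise.
    with pytest.raises(RuntimeError):
        dr.enable_grad(z)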
dr.schedule(x.grad, y.grad)
assert x.grad == 4
assert y.grad == 5
assert dr.grad_enabled(z)
Can also test dr.detach(z)
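A possible shape for that test (reusing the imports and the type alias t from the sketch above), assuming dr.detach returns a detached copy of the packed vector and leaves the original untouched:

def test_detach_coopvec():
    a, b = t(1), t(2)
    dr.enable_grad(a, b)
    z = nn.CoopVec(a, b)
    assert dr.grad_enabled(z)
    # Assumption: detaching yields a CoopVec without AD tracking and does not
    # modify the gradient state of the original packed vector.
    zd = dr.detach(z)
    assert not dr.grad_enabled(zd)
    assert dr.grad_enabled(z)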
    z + 3
)
b = nn.cast(a, dr.float32_array_t(t))
c = nn.cast(b, dr.float16_array_t(t))
Test grad enabled / disabled and grad propagation through casts?
Ideally, gradients would just be converted to the new precision. It's an important use case to have most of a differentiable pipeline in fp32 and locally convert to fp16 for the MLP.
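A sketch of such a test: keep the leaves in fp32, cast to fp16 and back around the (here trivial) fp16 section, and check that gradients arrive back at the fp32 leaves. The cast helpers come from the diff above; the expectation that gradients are converted across the casts is the behavior being asked for, not something this sketch can guarantee.

import drjit as dr
import drjit.nn as nn
from drjit.auto.ad import Float32

def test_grad_through_cast():
    x, y = Float32(1), Float32(2)
    dr.enable_grad(x, y)

    a = nn.CoopVec(x, y)                          # fp32 pack
    b = nn.cast(a, dr.float16_array_t(Float32))   # locally switch to fp16
    c = nn.cast(b, dr.float32_array_t(Float32))   # and back to fp32

    u, v = c                                      # unpack components (assumed to support iteration)
    dr.backward(u * 4 + v * 5)

    # Desired behavior: gradients propagate through both casts and are
    # converted back to the precision of the fp32 leaves.
    assert x.grad == 4 and y.grad == 5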
One thing that will come up when we add the hash grid encoding, but good to keep in mind in general: atomic addition of …
Force-pushed from 2d46aae to f80af5a
Cooperative vectors enable efficient compilation and evaluation of expressions involving matrix multiplication. They cater to a specific use case, where each execution thread performs a sequence of independent multiplications by reasonably small matrices (e.g., 64x64). This enables the fully fused evaluation of small multilayer perceptrons within a larger program. That said, the feature isn't specific to MLPs and could also be used in other ways. On NVIDIA GPUs (Turing or newer), cooperative vectors map to the OptiX cooperative vector API, leveraging the built-in tensor cores for acceleration. On the CPU (LLVM) backend, Dr.Jit compiles cooperative vector operations using available instruction set extensions (AVX512, NEON, etc.). For further details on this new API and how to use it, refer to the documentation in ``docs/coop_vec.rst``.
This commit improves the handling of evaluated loops with grad-enabled state variables. Previously, the AD variable ID of each differentiable state variable changed in every iteration, even if the loop did not touch that variable. This is an implementation detail of the loop evaluation code that should not leak into user code; this commit fixes that behavior.
This commit fixes bugs in the compilation of reverse-mode derivatives of simple loops (i.e., loops with max_iterations==-1) and updates the test suite to cover problematic cases.
This commit fixes bugs and adds tests to ensure that matrix multiplication can be correctly differentiated in reverse-mode when it occurs inside a "simple" loop (i.e., a loop with max_iterations==-1).
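To illustrate the case these tests target, a rough sketch of a matrix multiplication differentiated in reverse mode inside a "simple" loop; the @dr.syntax, dr.hint, nn.pack, and nn.matvec usage reflects my reading of docs/coop_vec.rst and should be treated as an assumption:

import drjit as dr
import drjit.nn as nn
from drjit.auto.ad import Float16, UInt32, TensorXf16

@dr.syntax
def repeat_matvec(A_view, v, n: UInt32):
    i = UInt32(0)
    # A "simple" loop: max_iterations == -1
    while dr.hint(i < n, max_iterations=-1):
        v = nn.matvec(A_view, v)   # small matrix multiplication per iteration
        i += 1
    return v

A = dr.normal(TensorXf16, (16, 16))   # 16x16 weight matrix
_, A_view = nn.pack(A)                # assumed pack() return convention

x = [Float16(0.1) for _ in range(16)]
dr.enable_grad(*x)
out = repeat_matvec(A_view, nn.CoopVec(*x), UInt32(3))

y0, *_ = out                          # take one output component
dr.backward(y0)                       # reverse-mode through the loop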
Dr.Jit-Core always generates the f16x2 assembly operation, even when only scattering a single value. Right now, packet atomics are ignored by the CUDA backend. I think that Blackwell is the first consumer architecture that really supports these besides the f16x2 special case. In any case, such changes are out of scope for this already very big PR.
This feature adds cooperative vector support to Dr.Jit. Cooperative vectors enable efficient compilation and evaluation of expressions involving matrix multiplication and cater to situations where each execution thread performs a sequence of independent multiplications by reasonably small matrices (e.g., 64x64). This enables the fully fused evaluation of small multilayer perceptrons within a larger program. That said, the feature isn't specific to MLPs and could also be used in other ways.
On NVIDIA GPUs (Turing or newer), cooperative vectors map to the OptiX cooperative vector API, leveraging the built-in tensor cores for acceleration. On the CPU (LLVM) backend, Dr.Jit compiles cooperative vector operations using available instruction set extensions (AVX512, NEON, etc.).
For further details on this new API and how to use it, refer to the documentation in ``docs/coop_vec.rst``.
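As a rough illustration of the intended use (a sketch based on the description above; nn.pack, nn.matvec, the iteration over CoopVec components, and the dr.normal/dr.zeros initialization are assumptions to be checked against docs/coop_vec.rst):

import drjit as dr
import drjit.nn as nn
from drjit.auto.ad import Float16, TensorXf16

# One fused hidden layer of a tiny MLP: y = relu(A @ x + b)
m, n = 16, 16
A = dr.normal(TensorXf16, (m, n))            # weight matrix
b = dr.zeros(TensorXf16, (m,))               # bias

# Pack matrix and bias into a single optimally laid-out buffer (assumed API)
buffer, A_view, b_view = nn.pack(A, b)

# Pack n per-thread Float16 inputs into a cooperative vector
x = nn.CoopVec(*[Float16(0.5) for _ in range(n)])

# Fused matrix-vector product plus bias; on Turing+ GPUs this maps to the
# OptiX cooperative vector API and runs on the tensor cores
y = nn.matvec(A_view, x, b_view)

# Apply a ReLU per component and repack
y = nn.CoopVec(*[dr.maximum(v, 0) for v in y])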