[SYCL][NVPTX][AMDGCN] Move devicelib cmath to header #18706
base: sycl
Conversation
@bader this is a proof of concept for moving C++ library handling from libdevice code into headers. It allows us to remove the hack blocking LLVM intrinsic generation for standard math built-ins, since we intercept them earlier in the header on the device side, which is in line with what clang CUDA does. This only covers cmath, and only for NVIDIA and AMD, for now. I've currently placed the header into the `stl_wrappers` directory; it might be better as a clang header, but at least on CUDA the clang header is always included, whereas with the stl wrappers it will only be included when the matching standard library header is included.

This still needs a ton of work, which is why it's a draft, but let me know if you have any feedback on the approach. It would be good to know whether this would be interesting for non-AOT targets as well: there's a lot of logic in the driver to conditionally link libdevice libraries, and in theory most of that could be replaced with this header approach, but I haven't looked into it much, so I'm not 100% sure it's something we'd want.
@npmiller, thanks for working on this.
> It allows us to remove the hack blocking LLVM intrinsic generation for standard math built-ins, since we intercept them earlier in the header for device side, which is in-line with what clang cuda does.
I discussed this approach with Johannes Doerfert a few years ago. He told me that he doesn't like "what clang cuda does" and plans to change it. I think clang still uses the header solution, but it may be worth double-checking whether the LLVM community is doing any work in that direction.
> I've currently placed the header into the `stl_wrappers` directory, it might be better as a clang header, but at least on CUDA the clang header is always included whereas with the stl wrappers it will only be included when the matching standard library header is included.
Interesting... I thought that clang only adds the path to the clang headers at the beginning of the search-paths list, to make sure that the clang wrapper header is included before the STL one. I didn't know that the CUDA compiler always includes clang wrapper headers.
> It would be good to know if this would be interesting for non-AOT targets as well, there's a lot of logic in the driver to conditionally link libdevice libraries, I suspect in theory most of that could be replaced with this header approach, but I haven't looked into this much so I'm not 100% sure if this is something we'd want.
@AlexeySachkov, could you take a look at the SPIR-V part, please?
The change looks to be aligned with the community approach. The only concern I have is compile time, but the potential increase should be negligible.
cc @Naghasan just to keep in the loop.
don't forget to add tests and documentation for the attribute before undrafting :)
Yeah, in the driver here it does:

```cpp
CC1Args.push_back("-include");
CC1Args.push_back("__clang_cuda_runtime_wrapper.h");
```

Using our stl wrappers solution should allow us to be a little more conservative about when we include all of this stuff.
This patch experiments with moving standard library math built-ins from libdevice into headers. This is based on the way clang handles this for CUDA and HIP: in those languages you can define device functions as overloads, which allows re-defining standard library functions specifically for the device in a header, so that we can provide device-specific implementations of certain built-ins while still using the regular standard library headers. By default SYCL doesn't do overloads for device functions, so this patch introduces a new `sycl_device_only` attribute, which makes a function device-only and allows it to overload existing functions.
We don't support malloc in SYCL, so silence warnings for host compilation with `sycl_device_only`. Fix a failing clang test with the new attribute.
This test was relying on the hack preventing LLVM intrinsics from being emitted, so it doesn't work at all with the new approach.
This reverts commit af224a0.
This doesn't map to a SPIR-V built-in.
Overview
Currently, to support C++ builtins in SYCL kernels, we rely on `libdevice`, which provides implementations for standard library builtins. This library is built either to bitcode or SPIR-V and linked into our kernels.

On some targets this causes issues because clang sometimes turns standard library calls into LLVM intrinsics that not all targets support. Specifically, on NVPTX and AMDGCN we can't easily support these intrinsics because we currently use implementations provided by CUDA and HIP in the form of a bitcode library, which is not something we can use from the LLVM backend.
In upstream LLVM, for CUDA and HIP kernels this is handled by clang headers providing device-side overloads of C++ library functions that hook into the target-specific versions of the builtins (for example `std::sin` to `__nv_sin`). This way, on the device side, C++ builtins are hijacked before clang can turn them into intrinsics, which solves the issue mentioned above.
This patch adds the infrastructure to support handling C++ builtins in SYCL in the same way as it is done for CUDA and HIP in upstream LLVM, and uses it to support `cmath` in NVPTX and AMDGCN compilation.

Breakdown
- `sycl_device_only` attribute: this new attribute allows functions marked with it to be treated as device-side overloads of existing functions. This is what allows us to overload C++ library functions for device in SYCL.
- Removed the hack blocking LLVM intrinsic generation for standard library builtins for NVPTX and AMDGCN. In theory, since this is only moving `cmath`, the hack could still be needed, but it looks fine in testing, and if we run into issues we should just move the problematic builtins to this solution. The test `sycl-libdevice-cmath.cpp` was testing this hack, so it was removed.
- `cmath` support for NVPTX and AMDGCN in `libdevice` was disabled. To limit the scope of the patch, `libdevice` is still fully wired up for these targets, but it just won't provide the `cmath` functions.
- Added a `cmath-fallback.h` header providing the device-side math function overloads. They are defined using SPIR-V builtins, so in theory this header could be used as-is for other targets.
- Updated the `cmath` stl wrapper to include `cmath-fallback.h` for NVPTX and AMDGCN. In upstream LLVM, `clang-cuda` always includes the header with these overloads via `-include`; using the stl wrappers is a bit more selective.
- Added `rint` to the device lib tests and stl wrapper; this was added in [SYCL][Devicelib] Implement cmath rintf wrapper with __spirv_ocl_rint #18857 but wasn't in E2E testing.
Compile-time performance
A quick check of compile time shows that this seems to provide a small performance improvement. Using two samples, one using cmath (the E2E `cmath_test.cpp`) and one not using cmath, over 10 iterations, I'm getting the following results:

(results table omitted)

This suggests that the no-cmath compile time is pretty much equivalent, and the cmath compile time is faster by roughly ~0.12s.

And this is with the whole `libdevice` setup still in place, so it's possible this approach could be even more beneficial with more work.

Future work
- Some functions are commented out in `cmath-fallback.h`; these weren't defined in libdevice. We should either remove the commented-out lines or implement them properly.
- `cmath` and `math.h`: the current `cmath-fallback.h` implements both, which seems to work fine, but ideally we should split it up.
- `nearbyint`: this was only implemented for NVPTX and AMDGCN in `libdevice`. This patch keeps it the same, but we should look into proper support and testing for this.
- Move more of `libdevice` into headers (complex, assert, crt, etc ...).