jberchtold-nvidia commented on Oct 10, 2025

Description

Follow-up to PR #2219, applying warmup initialization to the rest of the TE/JAX primitives.

Motivation (identical to the previous PR):

Loading CUDA modules while NCCL kernels are in flight can cause a hang. When lazy module loading is enabled, the CUDA modules of TE/JAX's custom ops are not loaded until the XLA "execute" stage, at which point NCCL kernels may be active.
To prevent this, we register an XLA "initialize" stage handler for these primitives that dry-runs the custom op inside a stream capture and immediately destroys the captured graph, forcing the CUDA modules to load before the "execute" stage of the XLA program begins.
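
A minimal sketch of the pattern, assuming a CUDA C++ setting; `DummyOpKernel` and `WarmupKernelModule` are hypothetical names for illustration, not the actual TE/JAX symbols:

```cuda
#include <cuda_runtime.h>

// Hypothetical stand-in for a TE/JAX custom-op kernel; any kernel whose
// module should be loaded ahead of time is warmed up the same way.
__global__ void DummyOpKernel(float *out) {
  if (out != nullptr) *out = 0.0f;
}

// Load a kernel's CUDA module without executing the kernel: launches made
// while a stream is capturing are recorded into a graph instead of run,
// but the driver still loads the kernel's module at launch time.
// Destroying the captured graph discards the recorded work.
cudaError_t WarmupKernelModule(cudaStream_t stream) {
  cudaError_t err =
      cudaStreamBeginCapture(stream, cudaStreamCaptureModeThreadLocal);
  if (err != cudaSuccess) return err;

  // Dry-run launch: recorded into the capture, never executed.
  DummyOpKernel<<<1, 1, 0, stream>>>(nullptr);

  cudaGraph_t graph = nullptr;
  err = cudaStreamEndCapture(stream, &graph);
  if (graph != nullptr) cudaGraphDestroy(graph);  // discard immediately
  return err;
}
```

Note that stream capture is not allowed on the legacy default stream, so the warmup has to run on an explicitly created stream.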

Type of change

  • Documentation change (change only to the documentation, either a fix or a new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

  • Add stream capture warmup to the XLA initialize stage of the remaining primitives. This includes quantize, dequantize, softmax, fused attention, gemm, and grouped gemm.
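
For illustration, a hedged usage sketch of the hypothetical helper above. The standalone `main` is an assumption made so the example runs on its own; in the actual change the warmup is invoked from XLA "initialize" stage handlers rather than called directly:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Defined in the sketch above (hypothetical helper).
cudaError_t WarmupKernelModule(cudaStream_t stream);

int main() {
  // Stream capture is not permitted on the legacy default stream, so the
  // warmup runs on a dedicated stream.
  cudaStream_t stream = nullptr;
  if (cudaStreamCreate(&stream) != cudaSuccess) return 1;

  // In TE/JAX this would happen once per primitive (quantize, dequantize,
  // softmax, fused attention, gemm, grouped gemm) during XLA's
  // "initialize" stage; here we call the warmup directly.
  cudaError_t err = WarmupKernelModule(stream);
  std::printf("warmup status: %s\n", cudaGetErrorString(err));

  cudaStreamDestroy(stream);
  return err == cudaSuccess ? 0 : 1;
}
```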

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

