
Conversation

zobeideThePlayer

Description

Fixes a bug that causes precision loss in mixed-precision training.

The current implementation of the copy_ method in the QuantizedTensor class does not properly pass the dst.dtype information to dequantize() when src is a QuantizedTensor and dst is not. This can cause precision loss under certain circumstances, for instance when:
(1) the main-stream precision is bfloat16
(2) the model is initialized in FP8 format
(3) master weights in the optimizer are kept at high precision, i.e., float32
(4) training continues from a checkpoint whose optimizer state is not loaded/provided, so the master weights must be initialized from the model weights

Under these and similar conditions, the model weights are dequantized to bfloat16 (the dtype recorded in the quantizer object) and then copied to the float32 master weights, so the trailing 16 bits of information are lost.
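
To make the failure mode concrete, here is a minimal, self-contained reproduction in plain PyTorch. It does not use Transformer Engine; the FP8 payload and float32 scale merely mimic how quantized weights are stored, and the bfloat16 round-trip stands in for the buggy dequantize-then-copy path:

```python
import torch

# FP8 payload plus a float32 scale, mimicking how quantized weights are
# stored (torch.float8_e4m3fn needs a recent PyTorch; any low-bit payload
# would show the same effect).
payload = torch.tensor([0.5, 1.5, -2.0, 3.0]).to(torch.float8_e4m3fn)
scale = torch.tensor(0.1234567, dtype=torch.float32)

# The exact dequantized values, computed in float32.
reference = payload.float() * scale

master = torch.empty(4, dtype=torch.float32)

# Buggy path: dequantize at the dtype recorded in the quantizer
# (bfloat16 here), then copy into the float32 master weight.
master.copy_(reference.to(torch.bfloat16))
print((master - reference).abs().max())  # > 0: trailing mantissa bits lost

# Fixed path: dequantize directly at the destination's dtype.
master.copy_(reference)
print((master - reference).abs().max())  # exactly 0
```

Although the FP8 payload itself is exactly representable in bfloat16, the product of payload and float32 scale generally is not, which is where the trailing bits disappear.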

Type of change

  • Documentation change (change only to the documentation, either a fix or a new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Pass the dst.dtype information to the dequantize() call in the copy_ method (see the sketch after this list).
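
For context, a toy sketch of the change follows. The class and method names mirror the PR description, but the surrounding structure is hypothetical and the real Transformer Engine signatures may differ:

```python
import torch

class ToyQuantizedTensor:
    """Illustrative stand-in for Transformer Engine's QuantizedTensor."""

    def __init__(self, data: torch.Tensor, scale: torch.Tensor, dtype: torch.dtype):
        self._data = data      # low-precision payload (FP8 in practice)
        self._scale = scale    # float32 scale factor
        self.dtype = dtype     # dtype recorded in the quantizer, e.g. bfloat16

    def dequantize(self, dtype: torch.dtype | None = None) -> torch.Tensor:
        # Honoring an explicit dtype is the crux of the fix: without it,
        # the result is always cast to self.dtype.
        out_dtype = dtype if dtype is not None else self.dtype
        return (self._data.float() * self._scale).to(out_dtype)

def copy_(dst: torch.Tensor, src: ToyQuantizedTensor) -> None:
    """Sketch of the copy_ path when src is quantized and dst is not."""
    # Before: dst.copy_(src.dequantize())  -- lossy bfloat16 intermediate
    # After:  forward the destination dtype to dequantize()
    dst.copy_(src.dequantize(dtype=dst.dtype))
```

With dtype=dst.dtype forwarded, a float32 master weight receives the full-precision product of payload and scale rather than a bfloat16-rounded copy.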

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

timmoon10 (Collaborator) previously approved these changes Aug 26, 2025

LGTM, thanks for the fix.

@timmoon10 (Collaborator)

/te-ci pytorch

timmoon10 self-requested a review August 26, 2025 19:09
ptrendx (Member) commented Sep 17, 2025

/te-ci pytorch
