Is it necessary to perform layer replacement with te.xx modules? If not, is it effective to use te.fp8_autocast directly? #1556
Something like this (see the sketch below).
Note: non-TE modules are unaffected by `fp8_autocast`.
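A minimal sketch of that kind of setup, assuming `te.Linear`, `te.fp8_autocast`, and `DelayedScaling` from Transformer Engine are available and an FP8-capable GPU is present; `TinyMLP` is a made-up module, and the `nn.LayerNorm` is included only to show a non-TE module that `fp8_autocast` leaves untouched:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

class TinyMLP(nn.Module):
    """Made-up example block mixing TE and plain PyTorch modules."""
    def __init__(self, hidden=1024):
        super().__init__()
        # TE modules: their GEMMs can run in FP8 inside fp8_autocast
        self.fc1 = te.Linear(hidden, 4 * hidden)
        self.fc2 = te.Linear(4 * hidden, hidden)
        # Plain PyTorch module: unaffected by fp8_autocast
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x):
        return self.norm(self.fc2(F.gelu(self.fc1(x))))

model = TinyMLP().cuda()
x = torch.randn(16, 1024, device="cuda")

recipe = DelayedScaling(fp8_format=Format.HYBRID)  # E4M3 fwd / E5M2 bwd
with te.fp8_autocast(enabled=True, fp8_recipe=recipe):
    y = model(x)  # fc1/fc2 run FP8 GEMMs; LayerNorm stays in its normal dtype
```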
Because I am doing LoRA fine-tuning on someone else's base model: if `torch.nn.Linear` is changed to `te.Linear`, will that lead to an inconsistent model structure? Does that mean this method cannot be applied to LoRA fine-tuning?
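A sketch of what such a swap looks like in isolation, assuming `te.Linear` exposes `weight`/`bias` parameters like `nn.Linear` does; the `swap_linear` helper and the sizes are made up for illustration:

```python
import torch
import torch.nn as nn
import transformer_engine.pytorch as te

def swap_linear(old: nn.Linear) -> te.Linear:
    """Build a te.Linear of the same shape and copy the pretrained weights."""
    new = te.Linear(
        old.in_features,
        old.out_features,
        bias=old.bias is not None,
        params_dtype=old.weight.dtype,
    )
    with torch.no_grad():
        new.weight.copy_(old.weight)
        if old.bias is not None:
            new.bias.copy_(old.bias)
    return new

old = nn.Linear(1024, 1024).cuda()
new = swap_linear(old).cuda()
x = torch.randn(8, 1024, device="cuda")

# Outside fp8_autocast the swapped layer should behave like the original,
# up to normal GEMM numerics, so the pretrained behavior is preserved.
print(torch.allclose(old(x), new(x), atol=1e-3, rtol=1e-3))
```

Whether a particular LoRA implementation then attaches adapters to the swapped layer depends on how it selects target modules (by name or by `isinstance(..., nn.Linear)` checks), so that part is worth verifying against the specific fine-tuning code.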
I did not replace any layers with `te.xx` modules; I only replaced the mixed-precision context (`amp.autocast`) with `te.fp8_autocast`. Does that actually enable FP8 computation? If I want to enable FP8 mixed-precision computation, how should I modify the code?
For example:
class WanAttentionBlock(nn.Module):
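As the note above says, `fp8_autocast` only affects TE modules, so wrapping a block that contains only `torch.nn` layers does not enable FP8. One way to modify an existing pretrained block is to swap its `nn.Linear` submodules for weight-preserving `te.Linear` copies and then run the forward pass under `fp8_autocast`. The helper below is a rough sketch; the real `WanAttentionBlock` internals are not reproduced here, so the usage at the bottom is only indicative:

```python
import torch
import torch.nn as nn
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

def replace_linears_with_te(module: nn.Module) -> None:
    """Recursively replace nn.Linear children with weight-preserving te.Linear."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            te_linear = te.Linear(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                params_dtype=child.weight.dtype,
            )
            with torch.no_grad():
                te_linear.weight.copy_(child.weight)
                if child.bias is not None:
                    te_linear.bias.copy_(child.bias)
            setattr(module, name, te_linear)
        else:
            replace_linears_with_te(child)  # recurse into nested submodules

# Indicative usage (constructor arguments and inputs depend on the real model):
# block = WanAttentionBlock(...)        # pretrained block from the base model
# replace_linears_with_te(block)
# block.cuda()
# recipe = DelayedScaling(fp8_format=Format.HYBRID)
# with te.fp8_autocast(enabled=True, fp8_recipe=recipe):
#     out = block(x)  # GEMMs in the swapped te.Linear layers can now use FP8
```

Note that FP8 GEMMs in Transformer Engine also have shape constraints (e.g. feature dimensions divisible by 16), so layers that do not meet them may need to be left as plain `nn.Linear`.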