fix(diffusers): add mindone/tests/diffusers_tests/lora and lora_scale from each PEFT layer #1187
Conversation
Summary of Changes
Hello @Cui-yshoho, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request undertakes a significant refactoring effort by migrating the Parameter-Efficient Fine-Tuning (PEFT) modules, particularly those related to LoRA, from an internal mindone/diffusers/_peft path to a dedicated mindone/peft module. This change aims to centralize PEFT functionalities, making them more accessible and maintainable across the mindone ecosystem. Concurrently, it introduces explicit lora_scale management for individual PEFT layers, providing finer control over model adjustments during inference. The changes span across numerous model definitions, pipelines, and utility functions, ensuring a consistent and updated codebase.
Highlights
- Refactoring PEFT Modules: Migrated LoRA-related functionalities and utilities from the internal `mindone/diffusers/_peft` path to a new, centralized `mindone/peft` module. This improves modularity and reusability across the library.
- LoRA Scale Handling: Introduced explicit handling and propagation of `lora_scale` for individual PEFT layers within various pipelines and models, allowing for more granular control over LoRA application during inference (a hedged usage sketch follows this list).
- Codebase Modernization: Updated numerous example scripts and core `diffusers` components to reflect the new `mindone/peft` import paths. This also involved the removal of the deprecated `mindone/diffusers/_peft` directory and its contents.
- Pipeline Enhancements: Several pipelines, including those for AnimateDiff, AuraFlow, CogVideoX, Flux, HunyuanVideo, Kandinsky, LTX, Lumina, Mochi, OmniGen, PixArt-Alpha, Sana, and Stable Diffusion variants, have been updated to support the new PEFT structure and `lora_scale` handling. Additionally, copyright years were updated to 2025 across many files.
- New Loader Utilities: Introduced new loader utilities and conversion functions, such as `_expand_input_ids_with_image_tokens` and specific converters for HiDream and LTX-Video LoRA models, enhancing compatibility and functionality for various model types.
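For reviewers less familiar with the `lora_scale` mechanism, the sketch below shows the usual diffusers-style entry point. It is a hedged example, not code from this PR: the pipeline class, checkpoint id, and LoRA repo are placeholders, and the assumption that mindone keeps the upstream `cross_attention_kwargs={"scale": ...}` convention is inferred from the summary above.

```python
# Minimal sketch, assuming mindone.diffusers mirrors upstream diffusers' per-call
# LoRA scaling: the scale in cross_attention_kwargs is forwarded to each PEFT LoRA layer.
from mindone.diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("some-user/some-lora-weights")  # placeholder LoRA repo

# Apply every LoRA layer at 70% strength for this call only;
# 0.0 disables the LoRA, 1.0 applies it at full strength.
image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
```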
Code Review
This pull request is a significant refactoring that moves the PEFT implementation to a top-level mindone.peft module, which is a great architectural improvement for better modularity and maintainability. Additionally, it introduces a more flexible way to handle lora_scale within pipeline calls and adds support for several new models and LoRA formats.
The changes are extensive but generally look solid. I've identified a couple of instances of duplicated code that should be removed and a potential logic issue in how ControlNet-Union models are handled. I've also noted a few good bug fixes and enhancements.
```diff
  # ControlNet-Union with multiple conditions
  # only load one ControlNet for saving memories
- if len(self.nets) == 1 and self.nets[0].union:
+ if len(self.nets) == 1:
```
Removing the self.nets[0].union check broadens the condition to any single ControlNet, not just a "Union" type. However, the comment and the subsequent loop logic for accumulating control_block_samples seem specific to a ControlNet-Union model handling multiple conditions.
If a non-union ControlNet is used with multiple conditions, this might lead to unexpected behavior. Could you please confirm if this change is intentional and if the logic inside the loop is generic enough for all single ControlNet models when multiple conditions are passed? If not, it might be safer to keep the self.nets[0].union check.
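To make the concern concrete, here is a hedged paraphrase of the dispatch in question. The attribute and variable names follow the diff snippet above, but the branch bodies are elided sketches, not code copied from the PR:

```python
# Paraphrased sketch of the MultiControlNet-style dispatch under discussion.
# `self.nets` holds the wrapped ControlNets; `union` marks a ControlNet-Union model.
if len(self.nets) == 1 and getattr(self.nets[0], "union", False):
    # ControlNet-Union: one net consumes several conditions, so per-condition
    # residuals are accumulated into a single set of control_block_samples.
    ...
else:
    # Regular case: pair each ControlNet with its own conditioning input.
    ...
```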
Force-push history: fc9b7a7 → ed86ee5, 2c805e8 → 2d67f65, 2d67f65 → 1e810c4, 2f2e070 → 8e21802, 8e21802 → 3f7977d.
… from each PEFT layer (mindspore-lab#1187)
* feat(diffusers): upgrade mindone.diffusers from v0.33.1 to v0.34.0
* refactor(peft): move peft module from `mindone.diffusers._peft` to `mindone.peft`
* feat(diffusers): add loaders tests
* fix bugs
* feat(peft): upgrade PEFT from v0.8.2 to v0.15.2 (0718 alpha)

Co-authored-by: townwish <[email protected]>
What does this PR do?
- Adds `mindone/tests/diffusers_tests/lora`.
- Adds `lora_scale` from each PEFT layer (see the sketch below).

Relies on peft_dev from townwish4git and v0.34.0 diffusers from Cui-yshoho.
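A hedged example of exercising the new test directory locally: the path comes from the PR title, while the use of pytest and the verbosity flag are assumptions about the project's test runner.

```python
# Run the newly added LoRA tests programmatically (equivalent to the pytest CLI).
import pytest

# Path taken from the PR title; "-v" is only an illustrative flag.
pytest.main(["mindone/tests/diffusers_tests/lora", "-v"])
```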
Before submitting
- Did you make sure to update the documentation with your changes? E.g. record bug fixes or new features in What's New. Here are the documentation guidelines.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@xxx