
Conversation

@Cui-yshoho Cui-yshoho commented Jul 24, 2025

What does this PR do?

  1. Add tests under mindone/tests/diffusers_tests/lora.
  2. Add lora_scale handling for each PEFT layer (see the usage sketch below).

Relies on the peft_dev branch from townwish4git and the v0.34.0 diffusers upgrade from Cui-yshoho.
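
As a rough illustration of the second point, here is a minimal usage sketch, assuming mindone.diffusers mirrors the Hugging Face diffusers LoRA API; the model ID and LoRA path are placeholders, not files from this PR.

```python
# Minimal sketch (placeholder checkpoint names): passing a LoRA scale at inference,
# assuming mindone.diffusers mirrors the Hugging Face diffusers LoRA API.
from mindone.diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("path/to/lora_checkpoint")

# cross_attention_kwargs={"scale": ...} is how the effective lora_scale
# typically reaches each PEFT (LoRA) layer during denoising.
image = pipe("a photo of an astronaut", cross_attention_kwargs={"scale": 0.7}).images[0]
```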

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline?
  • Did you make sure to update the documentation with your changes? E.g., record bug fixes or new features in What's New. Here are the documentation guidelines.
  • Did you build and run the code without any errors?
  • Did you report the running environment (NPU type/MS version) and performance in the doc? (better record it for data loading, model inference, or training tasks)
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@xxx

@Cui-yshoho Cui-yshoho changed the title from "Loaders test dev" to "fix(diffusers): add mindone/tests/diffusers_tests/lora and lora_scale from each PEFT layer" on Jul 24, 2025
@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @Cui-yshoho, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request undertakes a significant refactoring effort by migrating the Parameter-Efficient Fine-Tuning (PEFT) modules, particularly those related to LoRA, from the internal mindone/diffusers/_peft path to a dedicated mindone/peft module. This change aims to centralize PEFT functionality, making it more accessible and maintainable across the mindone ecosystem. Concurrently, it introduces explicit lora_scale management for individual PEFT layers, providing finer control over model adjustments during inference. The changes span numerous model definitions, pipelines, and utility functions, ensuring a consistent and updated codebase.
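
To make the layout change concrete, here is a hypothetical before/after of an import site; LoraConfig is used only as an illustration, and whether it is re-exported at the top level of mindone.peft is an assumption.

```python
# Illustrative only: how a call site might change after the migration described above.
# Whether LoraConfig is re-exported from the top level of mindone.peft is an assumption.

# Before: PEFT lived under an internal diffusers path
# from mindone.diffusers._peft import LoraConfig

# After: PEFT is a dedicated top-level module
from mindone.peft import LoraConfig

config = LoraConfig(r=8, lora_alpha=16, target_modules=["to_q", "to_v"])
```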

Highlights

  • Refactoring PEFT Modules: Migrated LoRA-related functionalities and utilities from the internal mindone/diffusers/_peft path to a new, centralized mindone/peft module. This improves modularity and reusability across the library.
  • LoRA Scale Handling: Introduced explicit handling and propagation of lora_scale for individual PEFT layers within various pipelines and models, allowing for more granular control over LoRA application during inference (see the sketch after this list).
  • Codebase Modernization: Updated numerous example scripts and core diffusers components to reflect the new mindone/peft import paths. This also involved the removal of the deprecated mindone/diffusers/_peft directory and its contents.
  • Pipeline Enhancements: Several pipelines, including those for AnimateDiff, AuraFlow, CogVideoX, Flux, HunyuanVideo, Kandinsky, LTX, Lumina, Mochi, OmniGen, PixArt-Alpha, Sana, and Stable Diffusion variants, have been updated to support the new PEFT structure and lora_scale handling. Additionally, copyright years were updated to 2025 across many files.
  • New Loader Utilities: Introduced new loader utilities and conversion functions, such as _expand_input_ids_with_image_tokens and specific converters for HiDream and LTX-Video LoRA models, enhancing compatibility and functionality for various model types.
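
The lora_scale propagation mentioned in the second highlight typically follows the upstream diffusers pattern sketched below; this is a hypothetical outline, and the exact mindone import path for the scale helpers is an assumption.

```python
# Hypothetical sketch of how a pipeline can propagate lora_scale to its PEFT layers,
# following the upstream diffusers pattern; the mindone import path is an assumption.
from mindone.diffusers.utils import scale_lora_layers, unscale_lora_layers

def encode_prompt_with_lora(text_encoder, tokenize_fn, prompt, lora_scale=None):
    if lora_scale is not None:
        # Temporarily multiply every LoRA layer's output by lora_scale.
        scale_lora_layers(text_encoder, lora_scale)
    prompt_embeds = text_encoder(tokenize_fn(prompt))[0]
    if lora_scale is not None:
        # Restore the original scale so later calls are unaffected.
        unscale_lora_layers(text_encoder, lora_scale)
    return prompt_embeds
```
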
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request is a significant refactoring that moves the PEFT implementation to a top-level mindone.peft module, which is a great architectural improvement for better modularity and maintainability. Additionally, it introduces a more flexible way to handle lora_scale within pipeline calls and adds support for several new models and LoRA formats.

The changes are extensive but generally look solid. I've identified a couple of instances of duplicated code that should be removed and a potential logic issue in how ControlNet-Union models are handled. I've also noted a few good bug fixes and enhancements.

  # ControlNet-Union with multiple conditions
  # only load one ControlNet for saving memories
- if len(self.nets) == 1 and self.nets[0].union:
+ if len(self.nets) == 1:

Severity: medium

Removing the self.nets[0].union check broadens the condition to any single ControlNet, not just a "Union" type. However, the comment and the subsequent loop logic for accumulating control_block_samples seem specific to a ControlNet-Union model handling multiple conditions.

If a non-union ControlNet is used with multiple conditions, this might lead to unexpected behavior. Could you please confirm if this change is intentional and if the logic inside the loop is generic enough for all single ControlNet models when multiple conditions are passed? If not, it might be safer to keep the self.nets[0].union check.
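
To make the concern concrete, here is a self-contained toy sketch of the element-wise accumulation behavior the removed union check used to guard; the function and variable names are placeholders, not code from this PR.

```python
# Toy illustration of ControlNet-Union style accumulation: residuals produced for
# several conditions are summed element-wise into a single set of block samples.
def accumulate_block_samples(per_condition_samples):
    total = None
    for block_samples in per_condition_samples:
        total = block_samples if total is None else [a + b for a, b in zip(total, block_samples)]
    return total

# Two conditions, each yielding three per-block residual values.
print(accumulate_block_samples([[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]]))  # [1.5, 2.5, 3.5]
```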

@Cui-yshoho Cui-yshoho force-pushed the loaders_test_dev branch 16 times, most recently from fc9b7a7 to ed86ee5 on July 25, 2025 10:02
@Cui-yshoho Cui-yshoho force-pushed the loaders_test_dev branch 2 times, most recently from 2c805e8 to 2d67f65 on August 11, 2025 07:50
@Cui-yshoho Cui-yshoho force-pushed the loaders_test_dev branch 17 times, most recently from 2f2e070 to 8e21802 on October 11, 2025 07:04
@zhanghuiyao zhanghuiyao added this pull request to the merge queue Oct 15, 2025
Merged via the queue into mindspore-lab:master with commit 2694071 Oct 15, 2025
3 checks passed
vigo999 added a commit that referenced this pull request Nov 2, 2025
- Added mindone.peft v0.15.2 upgrade (#1194)
- Added Qwen2.5-Omni LoRA finetuning script (#1218)
- Added PEFT layer fixes (#1187)
- PEFT is an important parameter-efficient fine-tuning library
zackcxb pushed a commit to zackcxb/mindone that referenced this pull request Nov 21, 2025
… from each PEFT layer (mindspore-lab#1187)

* feat(diffusers): upgrade mindone.diffusers from v0.33.1 to v0.34.0

* refactor(peft): move peft module from `mindone.diffusers._peft` to `mindone.peft`

* feat(diffusers): add loaders tests

* fix bugs

* feat(peft): upgrade PEFT from v0.8.2 to v0.15.2 (0718 alpha)

---------

Co-authored-by: townwish <[email protected]>