Conversation

@tomguluson92

When I load a Flux-trained LoRA with:
```
import torch

from diffusers import AutoPipelineForText2Image, FluxPipeline
from safetensors.torch import load_file

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipe.load_lora_weights("model_qk_text.safetensors")
```

It raised this error:
```
    pipe.load_lora_weights("model_qk_text.safetensors")
  File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1848, in load_lora_weights
    self.load_lora_into_transformer(
  File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1951, in load_lora_into_transformer
    incompatible_keys = set_peft_model_state_dict(transformer, state_dict, adapter_name, **peft_kwargs)
  File "/usr/local/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 458, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)
```

After removing `assign=True`, everything works.
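
For context, the line in the traceback is `torch.nn.Module.load_state_dict` being called with `assign=True`, which PEFT uses on its low-CPU-memory loading path: with `assign=True` the tensors from the state dict replace the module's parameters outright and keep their own dtype and device, while the default copies values into the existing parameters. A minimal, self-contained sketch of that difference (a toy `nn.Linear`, nothing Flux- or PEFT-specific):

```python
import torch
import torch.nn as nn

# A toy module standing in for a LoRA layer; created in bf16 like the pipeline above.
layer = nn.Linear(4, 4, dtype=torch.bfloat16)

# A state dict whose tensors are plain fp32, as they might come from a checkpoint.
state_dict = {"weight": torch.randn(4, 4), "bias": torch.randn(4)}

# Default path: values are copied into the existing parameters,
# so the module keeps its original bf16 dtype and device.
layer.load_state_dict(state_dict)
print(layer.weight.dtype)  # torch.bfloat16

# assign=True path: the state-dict tensors replace the parameters outright,
# so the module now carries the state dict's properties instead.
layer.load_state_dict(state_dict, assign=True)
print(layer.weight.dtype)  # torch.float32
```

Which of these property mismatches actually breaks this particular Flux adapter is the open question in the rest of the thread.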
tomguluson92 changed the title from "fixs: bugs of assign=True" to "Fix: bugs of assign=True" on Nov 28, 2024
tomguluson92 changed the title from "Fix: bugs of assign=True" to "FIX: bugs of assign=True in load lora" on Nov 28, 2024
@BenjaminBossan
Member

BenjaminBossan commented Nov 28, 2024

Thanks for reporting this error. We cannot simply change the argument, as that would break loading for other models. Instead, let's try to debug why Flux fails in this case. As a first step, could you please check whether passing `low_cpu_mem_usage=False` to `load_lora_weights` resolves your error?
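
Concretely, that check would be the same loading call as in the report with the extra flag (the safetensors path is the reporter's local file and only a placeholder here):

```python
# Same loading code as in the report, but with low_cpu_mem_usage=False so PEFT
# takes the plain copy path instead of assign=True.
# "model_qk_text.safetensors" is the reporter's local adapter file.
pipe.load_lora_weights("model_qk_text.safetensors", low_cpu_mem_usage=False)
```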

@tomguluson92
Author

tomguluson92 commented Nov 28, 2024

`low_cpu_mem_usage=False` works, so what's your opinion on this problem? Should we add a special flag for PEFT/Flux compatibility?

@BenjaminBossan
Member

> Should we add a special flag for PEFT/Flux compatibility?

Before we do that, we need to first understand why this adapter causes the issue, while others work. Then we can think of the best solution. I'll take a look at it when I have a bit of time on my hands.

@BenjaminBossan
Member

I have a bit of time to investigate the issue this week. Do you know of a publicly available LoRA Flux adapter that causes the issue you described (only safetensors)? That way, I can try to reproduce the error.

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

@BenjaminBossan
Member

not stale

@BenjaminBossan
Member

@tomguluson92 Did you see my last question?

@nsbg

nsbg commented Oct 11, 2025

@BenjaminBossan Hi, is this still in progress? It looks like the work might have stopped midway, so if this is still a valid issue, I'd like to continue working on it. I'm leaving this comment to check.

@BenjaminBossan
Member

@nsbg A first step would be to find a way to replicate the error. If you're interested in working on this, I'd be happy to assist.

@nsbg

nsbg commented Oct 15, 2025

Okay. That sounds like an interesting task. I'll start by writing and testing some example code, using the exact same model as the user who first reported this issue.

@nsbg

nsbg commented Oct 17, 2025

I ran the code in Colab and encountered two different scenarios.

The first scenario occurred when I ran the original user's code exactly as they provided it: I got an error indicating that the file 'model_qk_text.safetensors' could not be found.


I searched for this file on Hugging Face and GitHub, but it doesn't seem to exist anywhere. I suspect it is a local file on the original user's machine, so I can't reproduce the exact error with the provided code.

The second scenario happened when I changed the arguments to `load_lora_weights`: referencing black-forest-labs/FLUX.1-Depth-dev-lora, I modified the code as shown below.

```
import torch

from diffusers import AutoPipelineForText2Image, FluxPipeline
from safetensors.torch import load_file

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
```

In this case, the model loaded successfully, but an error occurred during the image generation process.

Given the initial issue, it seems like an error should have occurred during the model loading process, but it's proving difficult to reproduce.
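
The failing generation code is not included above, so the following is only an assumption of what a typical FluxPipeline text-to-image call with the loaded adapter would look like (prompt and sampler settings are illustrative). It may also be relevant that, per its model card, FLUX.1-Depth-dev-lora is intended for FluxControlPipeline with a depth control image rather than plain text-to-image generation.

```python
# Hypothetical generation call; the actual code and error were not shared above.
image = pipe(
    "a photo of a cat",  # illustrative prompt
    guidance_scale=3.5,
    num_inference_steps=28,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("flux_lora_test.png")
```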

@BenjaminBossan
Member

> In this case, the model loaded successfully, but an error occurred during the image generation process.

Could you please show the code and the error message?

> Given the initial issue, it seems like an error should have occurred during the model loading process, but it's proving difficult to reproduce.

Yes, exactly, that is why I asked about access to the weights earlier. @tomguluson92 do you still see that issue with the latest versions of PEFT and diffusers?
