
[bug]: AttributeError: 'LayerNorm' object has no attribute 'get_num_patches' #7588

Open
1 task done
licyk opened this issue Jan 23, 2025 · 0 comments
Labels
bug Something isn't working

Comments


licyk commented Jan 23, 2025

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 4060 laptop

GPU VRAM

8GB

Version number

5.6.0

Browser

Firefox 134.0.2

Python dependencies

{
  "accelerate": "1.0.1",
  "compel": "2.0.2",
  "cuda": "12.4",
  "diffusers": "0.31.0",
  "numpy": "1.26.4",
  "opencv": "4.9.0.80",
  "onnx": "1.16.1",
  "pillow": "10.4.0",
  "python": "3.10.11",
  "torch": "2.4.1+cu124",
  "torchvision": "0.19.1+cu124",
  "transformers": "4.46.3",
  "xformers": "0.0.28.post1"
}

What happened

While using the Invoke canvas, I tried to generate images with the Illustrious-XL model and the Artist Style: rurudo (LYCORIS) model, but encountered the error: AttributeError: 'LayerNorm' object has no attribute 'get_num_patches'.

After removing the Artist Style: rurudo (LYCORIS) model, Invoke was able to generate images normally.
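The traceback below points at `layer_patcher.py` calling `module.get_num_patches()` on a plain `torch.nn.LayerNorm`, which defines no such method, so PyTorch's `Module.__getattr__` raises AttributeError. A minimal stand-in sketch of that failure mode, together with a hypothetical defensive guard (the stub class names are illustrative, not the real InvokeAI classes, and this is not the project's actual fix):

```python
# Stand-in for torch.nn.LayerNorm: defines no get_num_patches, so
# calling module.get_num_patches() directly raises AttributeError,
# as in the traceback below.
class LayerNormStub:
    pass

# Stand-in for a patch-aware wrapper module that does carry patches.
class PatchedModuleStub:
    def get_num_patches(self) -> int:
        return 2

def num_patches(module) -> int:
    # Defensive variant of the failing `module.get_num_patches() > 0`
    # check: fall back to 0 for plain layers that were never wrapped.
    getter = getattr(module, "get_num_patches", None)
    return getter() if callable(getter) else 0

print(num_patches(LayerNormStub()))     # 0 -- no AttributeError
print(num_patches(PatchedModuleStub())) # 2
```

With a guard like this, an unwrapped `LayerNorm` would simply be skipped instead of crashing the denoise step.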

Console logs

[2025-01-23 12:31:36,046]::[InvokeAI]::INFO --> Patchmatch initialized
[2025-01-23 12:31:36,912]::[InvokeAI]::INFO --> Using torch device: NVIDIA GeForce RTX 4060 Laptop GPU
[2025-01-23 12:31:38,288]::[InvokeAI]::INFO --> cuDNN version: 90100
[2025-01-23 12:31:38,304]::[InvokeAI]::INFO --> InvokeAI version 5.6.0
[2025-01-23 12:31:38,304]::[InvokeAI]::INFO --> Root directory = E:\Softwares\InvokeAI\invokeai
[2025-01-23 12:31:38,304]::[InvokeAI]::INFO --> Initializing database at E:\Softwares\InvokeAI\invokeai\databases\invokeai.db
[2025-01-23 12:31:38,415]::[ModelManagerService]::INFO --> [MODEL CACHE] Calculated model RAM cache size: 13936.53 MB. Heuristics applied: [1].
[2025-01-23 12:31:38,624]::[InvokeAI]::INFO --> Pruned 2 finished queue items
[2025-01-23 12:31:40,577]::[InvokeAI]::INFO --> Cleaned database (freed 0.13MB)
[2025-01-23 12:31:40,577]::[InvokeAI]::INFO --> Invoke running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2025-01-23 12:31:49,828]::[InvokeAI]::INFO --> Executing queue item 2966, session 50ac144e-544b-49a0-9d79-229f73bc363f
Fetching 17 files: 100%|███████████████████████████████████████████████████████████████████████| 17/17 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 13.87it/s]
[2025-01-23 12:31:52,174]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:vae' (AutoencoderKL) onto cuda device in 0.21s. Total model size: 159.56MB, VRAM: 159.56MB (100.0%)
E:\Softwares\InvokeAI\venv\lib\site-packages\diffusers\models\attention_processor.py:2383: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  hidden_states = F.scaled_dot_product_attention(
[2025-01-23 12:31:53,205]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:text_encoder' (CLIPTextModel) onto cuda device in 0.13s. Total model size: 234.72MB, VRAM: 234.72MB (100.0%)
[2025-01-23 12:31:53,205]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-01-23 12:31:53,919]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:text_encoder_2' (CLIPTextModelWithProjection) onto cuda device in 0.49s. Total model size: 1324.96MB, VRAM: 1324.96MB (100.0%)
[2025-01-23 12:31:53,929]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:tokenizer_2' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-01-23 12:31:54,089]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:text_encoder' (CLIPTextModel) onto cuda device in 0.00s. Total model size: 234.72MB, VRAM: 234.72MB (100.0%)
[2025-01-23 12:31:54,093]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
Token indices sequence length is longer than the specified maximum sequence length for this model (149 > 77). Running this sequence through the model will result in indexing errors
[2025-01-23 12:31:54,304]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:text_encoder_2' (CLIPTextModelWithProjection) onto cuda device in 0.00s. Total model size: 1324.96MB, VRAM: 1324.96MB (100.0%)
[2025-01-23 12:31:54,310]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:tokenizer_2' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
Token indices sequence length is longer than the specified maximum sequence length for this model (149 > 77). Running this sequence through the model will result in indexing errors
[2025-01-23 12:31:56,762]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '8f1e23bf-b9d0-4287-88f7-4c70f84507de:unet' (UNet2DConditionModel) onto cuda device in 2.25s. Total model size: 4897.05MB, VRAM: 4897.05MB (100.0%)
[2025-01-23 12:31:56,793]::[InvokeAI]::ERROR --> Error while invoking session 50ac144e-544b-49a0-9d79-229f73bc363f, invocation 559c5fb8-d8bc-4385-815a-a385db37a717 (denoise_latents): 'LayerNorm' object has no attribute 'get_num_patches'
[2025-01-23 12:31:56,793]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 300, in invoke_internal
    output = self.invoke(context)
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\invokeai\app\invocations\denoise_latents.py", line 824, in invoke
    return self._old_invoke(context)
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Python310\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\invokeai\app\invocations\denoise_latents.py", line 1011, in _old_invoke
    with (
  File "D:\Softwares\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\invokeai\backend\patches\layer_patcher.py", line 37, in apply_smart_model_patches
    LayerPatcher.apply_smart_model_patch(
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\invokeai\backend\patches\layer_patcher.py", line 114, in apply_smart_model_patch
    elif module.get_num_patches() > 0:
  File "E:\Softwares\InvokeAI\venv\lib\site-packages\torch\nn\modules\module.py", line 1729, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LayerNorm' object has no attribute 'get_num_patches'

[2025-01-23 12:31:56,839]::[InvokeAI]::INFO --> Graph stats: 50ac144e-544b-49a0-9d79-229f73bc363f
                          Node   Calls   Seconds  VRAM Used
             sdxl_model_loader       1    0.014s     0.000G
                           i2l       1    3.207s     1.662G
                         noise       1    0.000s     0.164G
                 core_metadata       1    0.000s     0.164G
                 lora_selector       1    0.009s     0.164G
                       collect       3    0.002s     1.687G
   sdxl_lora_collection_loader       1    0.000s     0.164G
            sdxl_compel_prompt       2    1.431s     1.696G
               denoise_latents       1    2.279s     4.818G
TOTAL GRAPH EXECUTION TIME:   6.942s
TOTAL GRAPH WALL TIME:   6.946s
RAM used by InvokeAI process: 8.45G (+7.616G)
RAM used to load models: 6.54G
VRAM in use: 4.790G
RAM cache statistics:
   Model cache hits: 15
   Model cache misses: 2
   Models cached: 8
   Models cleared from cache: 0
   Cache high water mark: 6.54/0.00G

What you expected to happen

Images are generated with the Illustrious-XL model and the Artist Style: rurudo (LYCORIS) model without any error.

How to reproduce the problem

  1. Add the Illustrious-XL model and the Artist Style: rurudo (LYCORIS) model to Invoke.
  2. Open the Invoke canvas, select the Illustrious-XL model in the model option, and select the Artist Style: rurudo (LYCORIS) model in the concepts option.
  3. Click Invoke.

Additional context

No response

Discord username

No response

@licyk licyk added the bug Something isn't working label Jan 23, 2025