torch.OutOfMemoryError: Allocation on device #388

Open
KeenSting0712 opened this issue Feb 20, 2025 · 1 comment

Comments

@KeenSting0712

KeenSting0712 commented Feb 20, 2025

I have a Tesla T4 with 16 GB of VRAM and I'm using the hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors model, which is said to run on 8 GB of VRAM, but I keep getting the error below. Help!

[Screenshot: ComfyUI error popup for the prompt "high quality anime style movie featuring a dog running in forest", showing node 5 (HyVideoDecode) failing with torch.OutOfMemoryError: Allocation on device]
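For reference, the free VRAM that the report lists under Devices can be read straight from PyTorch; a quick check like this (a sketch, not part of the original workflow) shows how much headroom the T4 actually had:

```python
import torch

# Driver-level free/total memory for the current CUDA device.
free, total = torch.cuda.mem_get_info()
print(f"free: {free / 1024**3:.2f} GiB of {total / 1024**3:.2f} GiB")
```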

ComfyUI Error Report

Error Details

  • Node ID: 5
  • Node Type: HyVideoDecode
  • Exception Type: torch.OutOfMemoryError
  • Exception Message: Allocation on device

Stack Trace

  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 1369, in decode
    video = vae.decode(

  File "/usr/local/lib/python3.10/dist-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)

  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/autoencoder_kl_causal_3d.py", line 345, in decode
    decoded = self._decode(z).sample

  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/autoencoder_kl_causal_3d.py", line 315, in _decode
    dec = self.decoder(z)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/vae.py", line 299, in forward
    sample = up_block(sample, latent_embeds)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/unet_causal_3d_blocks.py", line 795, in forward
    hidden_states = upsampler(hidden_states)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/unet_causal_3d_blocks.py", line 194, in forward
    hidden_states = self.conv(hidden_states)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/unet_causal_3d_blocks.py", line 78, in forward
    return self.conv(x)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 725, in forward
    return self._conv_forward(input, self.weight, self.bias)

  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 720, in _conv_forward
    return F.conv3d(

System Information

  • ComfyUI Version: 0.3.14
  • Arguments: main.py
  • OS: posix
  • Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
  • Embedded Python: false
  • PyTorch Version: 2.6.0+cu124

Devices

  • Name: cuda:0 Tesla T4 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 15655829504
    • VRAM Free: 14970828594
    • Torch VRAM Total: 939524096
    • Torch VRAM Free: 435926834

Logs

2025-02-20T10:25:10.426219 - Using pytorch attention
2025-02-20T10:25:12.742068 - ComfyUI version: 0.3.14
2025-02-20T10:25:12.745750 - [Prompt Server] web root: /data/ml/ComfyUI-master/web
2025-02-20T10:25:13.849775 - [rgthree-comfy] Loaded 42 extraordinary nodes. 🎉
2025-02-20T10:25:14.055064 - Total VRAM 14931 MB, total RAM 31324 MB
2025-02-20T10:25:14.055207 - pytorch version: 2.6.0+cu124
2025-02-20T10:25:14.055461 - Set vram state to: NORMAL_VRAM
2025-02-20T10:25:14.055623 - Device: cuda:0 Tesla T4 : cudaMallocAsync
2025-02-20T10:25:14.374422 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: AzureExecutionProvider, CPUExecutionProvider
2025-02-20T10:25:14.374507 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
2025-02-20T10:25:14.916601 - ### Loading: ComfyUI-Manager (V3.17.7)
2025-02-20T10:25:14.918548 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)
2025-02-20T10:25:15.672976 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-02-20T10:25:16.140195 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-02-20T10:25:16.201189 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-02-20T10:25:16.410311 - [ComfyUI-Easy-Use] server: v1.2.8 Loaded
2025-02-20T10:25:16.410420 - [ComfyUI-Easy-Use] web root: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Easy-Use-main/web_version/v2 Loaded
2025-02-20T10:25:17.197069 - Building prefix dict from the default dictionary ...
2025-02-20T10:25:17.197511 - Loading model from cache /tmp/jieba.cache
2025-02-20T10:25:17.800196 - Loading model cost 0.603 seconds.
2025-02-20T10:25:17.800556 - Prefix dict has been built successfully.
2025-02-20T10:25:17.800637 - Word segmentation module jieba initialized.
2025-02-20T10:25:18.859645 - PyTorch version 2.6.0 available.
2025-02-20T10:25:18.861087 - JAX version 0.5.0 available.
2025-02-20T10:25:19.497350 - [comfyui_controlnet_aux] | INFO -> Using ckpts path: /data/ml/ComfyUI-master/custom_nodes/comfyui_controlnet_aux/ckpts
2025-02-20T10:25:19.497614 - [comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-02-20T10:25:19.497813 - [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-02-20T10:25:19.521694 - /data/ml/ComfyUI-master/custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
2025-02-20T10:25:19.753333 - 
Import times for custom nodes:
2025-02-20T10:25:19.753494 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/websocket_image_save.py
2025-02-20T10:25:19.755880 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/AIGODLIKE-COMFYUI-TRANSLATION
2025-02-20T10:25:19.755953 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/sdxl_prompt_styler
2025-02-20T10:25:19.756010 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/comfyui-wd14-tagger
2025-02-20T10:25:19.756076 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/comfyui-custom-scripts
2025-02-20T10:25:19.756131 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/comfyui_ultimatesdupscale
2025-02-20T10:25:19.756184 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/rgthree-comfy
2025-02-20T10:25:19.756236 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI_essentials-main
2025-02-20T10:25:19.756288 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager
2025-02-20T10:25:19.756339 -    0.0 seconds: /data/ml/ComfyUI-master/custom_nodes/comfyui_controlnet_aux
2025-02-20T10:25:19.756391 -    0.1 seconds: /data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper
2025-02-20T10:25:19.756443 -    0.1 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI_bitsandbytes_NF4
2025-02-20T10:25:19.756495 -    0.2 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-LatentSyncWrapper-main
2025-02-20T10:25:19.756546 -    0.2 seconds: /data/ml/ComfyUI-master/custom_nodes/comfyui-videohelpersuite
2025-02-20T10:25:19.756597 -    0.3 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-KJNodes-main
2025-02-20T10:25:19.756649 -    0.5 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Whisper-master
2025-02-20T10:25:19.756701 -    1.5 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Easy-Use-main
2025-02-20T10:25:19.756759 -    3.0 seconds: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-F5-TTS
2025-02-20T10:25:19.756814 - 
2025-02-20T10:25:19.773066 - Starting server

2025-02-20T10:25:19.773393 - To see the GUI go to: http://127.0.0.1:8188
2025-02-20T10:25:22.262680 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-02-20T10:25:34.158633 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-02-20T10:25:39.230632 - FETCH ComfyRegistry Data: 5/34
2025-02-20T10:25:44.269806 - FETCH ComfyRegistry Data: 10/34
2025-02-20T10:25:49.035353 - FETCH ComfyRegistry Data: 15/34
2025-02-20T10:25:49.565381 - FETCH DATA from: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
2025-02-20T10:25:53.962018 - FETCH ComfyRegistry Data: 20/34
2025-02-20T10:25:58.536235 - FETCH ComfyRegistry Data: 25/34
2025-02-20T10:26:03.886499 - FETCH ComfyRegistry Data: 30/34
2025-02-20T10:26:08.539673 - FETCH ComfyRegistry Data [DONE]
2025-02-20T10:26:08.594872 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-02-20T10:26:08.603078 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-02-20T10:26:08.603191 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
2025-02-20T10:26:27.313531 - [ComfyUI-Manager] All startup tasks have been completed.
2025-02-20T10:28:37.270878 - got prompt
2025-02-20T10:28:42.213731 - encoded latents shape torch.Size([1, 16, 1, 32, 32])
2025-02-20T10:28:42.216067 - Loading text encoder model (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:28:44.064273 - Text encoder to dtype: torch.float16
2025-02-20T10:28:44.065963 - Loading tokenizer (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:28:44.219539 - Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
2025-02-20T10:28:44.223266 - You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
2025-02-20T10:28:45.453919 - Loading text encoder model (vlm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-v1_1-transformers
2025-02-20T10:30:10.246098 - Loading checkpoint shards: 100%|██████████| 4/4 [01:24<00:00, 21.01s/it]
2025-02-20T10:30:10.341059 - Text encoder to dtype: torch.bfloat16
2025-02-20T10:30:10.341276 - Loading tokenizer (vlm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-v1_1-transformers
2025-02-20T10:30:10.822928 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-20T10:30:10.823973 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 160, in __call__
    num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

2025-02-20T10:30:10.824445 - Prompt executed in 93.55 seconds
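The repeated TypeError comes from transformers' LlavaProcessor: it computes the number of image tokens by floor-dividing the image size by self.patch_size, and that attribute is None when the loaded processor config does not define it. A minimal sketch of the failure, plus a hedged workaround (the patch_size value of 14, matching CLIP ViT-L/14, is an assumption, not something confirmed in this report):

```python
# Simplified form of the failing expression in processing_llava.py.
patch_size = None        # what the processor config effectively provides here
height = width = 336     # illustrative image size

try:
    num_image_tokens = (height // patch_size) * (width // patch_size) + 1
except TypeError as e:
    print(e)             # unsupported operand type(s) for //: 'int' and 'NoneType'

# Hypothetical workaround, assuming `processor` is the LlavaProcessor the
# wrapper loads: set the missing attributes before encoding.
# processor.patch_size = 14
# processor.vision_feature_select_strategy = "default"
```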
2025-02-20T10:34:10.158857 - got prompt
2025-02-20T10:34:10.215406 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-20T10:34:10.215798 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 160, in __call__
    num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

2025-02-20T10:34:10.216304 - Prompt executed in 0.05 seconds
2025-02-20T10:34:35.689313 - got prompt
2025-02-20T10:34:35.748901 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-20T10:34:35.749271 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 160, in __call__
    num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

2025-02-20T10:34:35.749715 - Prompt executed in 0.06 seconds
2025-02-20T10:38:54.476051 - got prompt
2025-02-20T10:38:54.532659 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-20T10:38:54.533099 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 160, in __call__
    num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

2025-02-20T10:38:54.533819 - Prompt executed in 0.05 seconds
2025-02-20T10:39:31.092135 - got prompt
2025-02-20T10:39:31.148335 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-20T10:39:31.148682 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 160, in __call__
    num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

2025-02-20T10:39:31.149079 - Prompt executed in 0.05 seconds
2025-02-20T10:41:35.270096 - Error handling request from 127.0.0.1
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_protocol.py", line 480, in _handle_request
    resp = await request_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_app.py", line 569, in _handle
    return await handler(request)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_middlewares.py", line 117, in impl
    return await handler(request)
  File "/data/ml/ComfyUI-master/server.py", line 50, in cache_control
    response: web.Response = await handler(request)
  File "/data/ml/ComfyUI-master/server.py", line 142, in origin_only_middleware
    response = await handler(request)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_server.py", line 1437, in get_notice
    version_tag = core.get_comfyui_tag()
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 77, in get_comfyui_tag
    repo = git.Repo(comfy_path)
  File "/usr/local/lib/python3.10/dist-packages/git/repo/base.py", line 289, in __init__
    raise InvalidGitRepositoryError(epath)
git.exc.InvalidGitRepositoryError: /data/ml/ComfyUI-master
2025-02-20T10:41:40.031995 - [ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.
2025-02-20T10:41:40.049688 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
2025-02-20T10:41:40.049851 - FETCH DATA from: /data/ml/ComfyUI-master/user/default/ComfyUI-Manager/cache/1514988643_custom-node-list.json [DONE]
2025-02-20T10:41:40.062730 - Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-LatentSyncWrapper-main
Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-KJNodes-main
Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI_bitsandbytes_NF4
Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI_essentials-main
Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Whisper-master
Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Easy-Use-main
Fetching: /data/ml/ComfyUI-master/custom_nodes/sdxl_prompt_styler
Fetching: /data/ml/ComfyUI-master/custom_nodes/rgthree-comfy
Fetching: /data/ml/ComfyUI-master/custom_nodes/comfyui_ultimatesdupscale
Fetching: /data/ml/ComfyUI-master/custom_nodes/AIGODLIKE-COMFYUI-TRANSLATION
Fetching: /data/ml/ComfyUI-master/custom_nodes/comfyui-wd14-tagger
Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager
Fetching: /data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper
Fetching: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-F5-TTS
Fetching: /data/ml/ComfyUI-master/custom_nodes/comfyui_controlnet_aux
Fetching: /data/ml/ComfyUI-master/custom_nodes/comfyui-videohelpersuite
Fetching: /data/ml/ComfyUI-master/custom_nodes/comfyui-custom-scripts
Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 561, in check_update
    is_updated, success = git_repo_update_check_with(fullpath, do_fetch=True)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 1858, in git_repo_update_check_with
    raise ValueError(f'[ComfyUI-Manager] Not a valid git repository: {path}')
ValueError: [ComfyUI-Manager] Not a valid git repository: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-LatentSyncWrapper-main
ValueError: [ComfyUI-Manager] Not a valid git repository: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-KJNodes-main
ValueError: [ComfyUI-Manager] Not a valid git repository: /data/ml/ComfyUI-master/custom_nodes/ComfyUI_essentials-main
ValueError: [ComfyUI-Manager] Not a valid git repository: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Whisper-master
ValueError: [ComfyUI-Manager] Not a valid git repository: /data/ml/ComfyUI-master/custom_nodes/ComfyUI-Easy-Use-main
2025-02-20T10:41:42.781339 - FETCH FAILED: ComfyUI_essentials-main@unknown
2025-02-20T10:41:42.781478 - FETCH FAILED: ComfyUI-Whisper-master@unknown
2025-02-20T10:41:42.781567 - FETCH FAILED: ComfyUI-LatentSyncWrapper-main@unknown
2025-02-20T10:41:42.781636 - FETCH FAILED: ComfyUI-KJNodes-main@unknown
2025-02-20T10:41:42.781765 - FETCH FAILED: ComfyUI-Easy-Use-main@unknown
Done.
2025-02-20T10:41:47.949820 - FETCH DATA from: /data/ml/ComfyUI-master/user/default/ComfyUI-Manager/cache/1742899825_extension-node-map.json [DONE]
2025-02-20T10:41:48.258146 - [ComfyUI-Manager] The ComfyRegistry cache update is still in progress, so an outdated cache is being used.
2025-02-20T10:41:48.272385 - Error handling request from 127.0.0.1
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_protocol.py", line 480, in _handle_request
    resp = await request_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_app.py", line 569, in _handle
    return await handler(request)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_middlewares.py", line 117, in impl
    return await handler(request)
  File "/data/ml/ComfyUI-master/server.py", line 50, in cache_control
    response: web.Response = await handler(request)
  File "/data/ml/ComfyUI-master/server.py", line 142, in origin_only_middleware
    response = await handler(request)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_server.py", line 747, in fetch_customnode_list
    node_packs = await core.get_unified_total_nodes(channel, request.rel_url.query["mode"], 'cache')
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 2723, in get_unified_total_nodes
    await unified_manager.reload(regsitry_cache_mode)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 713, in reload
    self.update_cache_at_path(fullpath)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 523, in update_cache_at_path
    node_package = InstalledNodePackage.from_fullpath(fullpath, self.resolve_from_path)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/node_package.py", line 63, in from_fullpath
    info = resolve_from_path(fullpath)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 498, in resolve_from_path
    url = git_utils.git_url(fullpath)
  File "/data/ml/ComfyUI-master/custom_nodes/ComfyUI-Manager/glob/git_utils.py", line 41, in git_url
    config.read(git_config_path)
  File "/usr/lib/python3.10/configparser.py", line 699, in read
    self._read(fp, filename)
  File "/usr/lib/python3.10/configparser.py", line 1098, in _read
    raise DuplicateOptionError(sectname, optname,
configparser.DuplicateOptionError: While reading from '/data/ml/ComfyUI-master/custom_nodes/ComfyUI_bitsandbytes_NF4/.git/config' [line 13]: option 'vscode-merge-base' in section 'branch "master"' already exists
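This DuplicateOptionError is unrelated to the OOM: ComfyUI-Manager reads each custom node's .git/config with Python's configparser, which is strict by default and rejects a section that defines the same option twice, as 'vscode-merge-base' does here. A small reproduction (the config text below is a guess at the file's shape, not its actual contents):

```python
import configparser

# configparser is strict by default and raises DuplicateOptionError
# when an option repeats within one section.
broken = """
[branch "master"]
vscode-merge-base = origin/master
vscode-merge-base = origin/master
"""

try:
    configparser.ConfigParser().read_string(broken)
except configparser.DuplicateOptionError as e:
    print(e)

# A lenient parser tolerates the duplicate (last value wins); the manual fix
# is simply deleting the repeated line from the .git/config file.
configparser.ConfigParser(strict=False).read_string(broken)
```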
2025-02-20T10:42:07.405823 - got prompt
2025-02-20T10:42:07.462595 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-20T10:42:07.462948 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 160, in __call__
    num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

2025-02-20T10:42:07.463448 - Prompt executed in 0.05 seconds
2025-02-20T10:42:35.124606 - got prompt
2025-02-20T10:42:35.127622 - Failed to validate prompt for output 12:
2025-02-20T10:42:35.127777 - * HyVideoTextImageEncode 5:
2025-02-20T10:42:35.127862 -   - Required input is missing: image_token_selection_expr
2025-02-20T10:42:35.127933 - Output will be ignored
2025-02-20T10:42:35.128024 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-02-20T10:43:03.836846 - got prompt
2025-02-20T10:43:03.893564 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-20T10:43:03.893928 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 160, in __call__
    num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

2025-02-20T10:43:03.894371 - Prompt executed in 0.05 seconds
2025-02-20T10:44:48.610221 - got prompt
2025-02-20T10:44:48.693802 - Loading text encoder model (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:44:48.834982 - Text encoder to dtype: torch.float16
2025-02-20T10:44:48.836725 - Loading tokenizer (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:44:50.114276 - Loading text encoder model (vlm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-v1_1-transformers
2025-02-20T10:45:51.959655 - Loading checkpoint shards: 100%|██████████| 4/4 [01:01<00:00, 15.43s/it]
2025-02-20T10:46:26.040733 - Text encoder to dtype: torch.bfloat16
2025-02-20T10:46:26.048011 - Loading tokenizer (vlm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-v1_1-transformers
2025-02-20T10:46:31.832330 - !!! Exception during processing !!! Allocation on device 
2025-02-20T10:46:31.840145 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 902, in process
    text_encoder_1.to(device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1343, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 903, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 903, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 903, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 930, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1329, in convert
    return t.to(
torch.OutOfMemoryError: Allocation on device 

2025-02-20T10:46:31.840386 - Got an OOM, unloading all loaded models.
2025-02-20T10:46:31.868697 - Prompt executed in 103.26 seconds
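This OOM happens while moving text_encoder_1 onto the GPU, and ComfyUI reacts by unloading everything. The recovery logic is roughly of this shape (a sketch under the assumption that a CPU fallback is acceptable, not ComfyUI's actual code):

```python
import torch

def to_device_or_cpu(module: torch.nn.Module, device) -> torch.nn.Module:
    """Try to move a module to the GPU; on OOM, free the cache and keep it on CPU."""
    try:
        return module.to(device)
    except torch.OutOfMemoryError:
        torch.cuda.empty_cache()  # return cached allocator blocks to the driver
        return module.to("cpu")   # fall back to CPU offload
```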
2025-02-20T10:47:06.044800 - got prompt
2025-02-20T10:47:07.159355 - Loading text encoder model (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:47:09.787534 - Text encoder to dtype: torch.float16
2025-02-20T10:47:09.789473 - Loading tokenizer (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:47:12.188271 - Loading text encoder model (vlm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-v1_1-transformers
2025-02-20T10:48:36.156441 - Loading checkpoint shards: 100%|██████████| 4/4 [01:23<00:00, 20.87s/it]
2025-02-20T10:48:36.277124 - Text encoder to dtype: torch.bfloat16
2025-02-20T10:48:36.277353 - Loading tokenizer (vlm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-v1_1-transformers
2025-02-20T10:48:36.743493 - !!! Exception during processing !!! list index out of range
2025-02-20T10:48:36.751765 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 904, in process
    prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 829, in encode_prompt
    text_inputs = text_encoder.text2tokens(prompt,
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/text_encoder/__init__.py", line 253, in text2tokens
    text_tokens = self.processor(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava/processing_llava.py", line 145, in __call__
    image_inputs = self.image_processor(images, **output_kwargs["images_kwargs"])
  File "/usr/local/lib/python3.10/dist-packages/transformers/image_processing_utils.py", line 41, in __call__
    return self.preprocess(images, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/image_processing_clip.py", line 286, in preprocess
    images = make_list_of_images(images)
  File "/usr/local/lib/python3.10/dist-packages/transformers/image_utils.py", line 185, in make_list_of_images
    if is_batched(images):
  File "/usr/local/lib/python3.10/dist-packages/transformers/image_utils.py", line 158, in is_batched
    return is_valid_image(img[0])
IndexError: list index out of range

2025-02-20T10:48:36.752218 - Prompt executed in 90.69 seconds
2025-02-20T10:51:28.312181 - got prompt
2025-02-20T10:51:28.411335 - Loading text encoder model (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:51:28.572402 - Text encoder to dtype: torch.float16
2025-02-20T10:51:28.574230 - Loading tokenizer (clipL) from: /data/ml/ComfyUI-master/models/clip/clip-vit-large-patch14
2025-02-20T10:51:28.663767 - Loading text encoder model (llm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
2025-02-20T10:52:43.949955 - Loading checkpoint shards: 100%|██████████| 4/4 [01:15<00:00, 18.77s/it]
2025-02-20T10:52:44.004053 - Text encoder to dtype: torch.bfloat16
2025-02-20T10:52:44.004269 - Loading tokenizer (llm) from: /data/ml/ComfyUI-master/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
2025-02-20T10:52:47.281300 - llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 15
2025-02-20T10:52:50.855387 - clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 16
2025-02-20T10:52:51.820803 - model_type FLOW
2025-02-20T10:52:51.822150 - The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
2025-02-20T10:52:51.824863 - Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])
2025-02-20T10:52:51.825743 - Using accelerate to load and assign model weights to device...
2025-02-20T10:52:52.204722 - Requested to load HyVideoModel
2025-02-20T10:53:57.378239 - loaded completely 13546.21957092285 12555.953247070312 True
2025-02-20T10:54:07.979779 - Input (height, width, video_length) = (256, 256, 33)
2025-02-20T10:54:07.980697 - The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
2025-02-20T10:54:07.980924 - Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['num_train_timesteps', 'n_tokens'])])
2025-02-20T10:54:08.165593 - Swapping 20 double blocks and 0 single blocks
2025-02-20T10:54:10.227493 - Sampling 33 frames in 9 latents at 256x256 with 6 inference steps
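The frame-to-latent arithmetic here is consistent with a causal video VAE that compresses time 4x and space 8x (an assumption inferred from the shapes logged above, e.g. torch.Size([1, 16, 1, 32, 32]) for a 256x256 input):

```python
# Hedged sketch: 33 frames become 9 latent frames, 256 px becomes 32 latent px,
# assuming 4x temporal and 8x spatial compression.
frames, t_stride, s_stride = 33, 4, 8
latent_t = (frames - 1) // t_stride + 1  # (33 - 1) // 4 + 1 == 9
latent_hw = 256 // s_stride              # 256 // 8 == 32
print(latent_t, latent_hw)               # 9 32
```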
2025-02-20T10:55:53.721393 - 100%|██████████| 6/6 [01:43<00:00, 17.25s/it]
2025-02-20T10:55:53.728158 - Allocated memory: memory=5.941 GB
2025-02-20T10:55:53.728419 - Max allocated memory: max_memory=7.141 GB
2025-02-20T10:55:53.728526 - Max reserved memory: max_reserved=7.531 GB
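The three memory figures the wrapper prints map onto PyTorch's allocator counters; they can be queried directly like this (the formatting is assumed, not taken from the wrapper's source):

```python
import torch

gib = 1024 ** 3
print(f"Allocated memory:     {torch.cuda.memory_allocated() / gib:.3f} GB")
print(f"Max allocated memory: {torch.cuda.max_memory_allocated() / gib:.3f} GB")
print(f"Max reserved memory:  {torch.cuda.max_memory_reserved() / gib:.3f} GB")
```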
2025-02-20T10:56:16.265921 - !!! Exception during processing !!! Allocation on device 
2025-02-20T10:56:16.277331 - Traceback (most recent call last):
  File "/data/ml/ComfyUI-master/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/ml/ComfyUI-master/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/data/ml/ComfyUI-master/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/nodes.py", line 1369, in decode
    video = vae.decode(
  File "/usr/local/lib/python3.10/dist-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/autoencoder_kl_causal_3d.py", line 345, in decode
    decoded = self._decode(z).sample
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/autoencoder_kl_causal_3d.py", line 315, in _decode
    dec = self.decoder(z)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/vae.py", line 299, in forward
    sample = up_block(sample, latent_embeds)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/unet_causal_3d_blocks.py", line 795, in forward
    hidden_states = upsampler(hidden_states)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/unet_causal_3d_blocks.py", line 194, in forward
    hidden_states = self.conv(hidden_states)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/ml/ComfyUI-master/custom_nodes/comfyui-hunyuanvideowrapper/hyvideo/vae/unet_causal_3d_blocks.py", line 78, in forward
    return self.conv(x)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 725, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 720, in _conv_forward
    return F.conv3d(
torch.OutOfMemoryError: Allocation on device 

2025-02-20T10:56:16.277642 - Got an OOM, unloading all loaded models.
2025-02-20T10:56:18.511546 - Prompt executed in 290.20 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":35,"last_link_id":43,"nodes":[{"id":35,"type":"HyVideoBlockSwap","pos":[-351,-44],"size":[315,130],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"block_swap_args","type":"BLOCKSWAPARGS","links":[43],"label":"block_swap_args"}],"properties":{"Node name for S&R":"HyVideoBlockSwap"},"widgets_values":[20,0,false,false]},{"id":5,"type":"HyVideoDecode","pos":[977.2962646484375,-363.6768798828125],"size":[345.4285888671875,150],"flags":{},"order":6,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":6,"label":"vae"},{"name":"samples","type":"LATENT","link":4,"label":"samples"}],"outputs":[{"name":"images","type":"IMAGE","links":[42],"slot_index":0,"label":"images"}],"properties":{"Node name for S&R":"HyVideoDecode"},"widgets_values":[true,8,256,true]},{"id":34,"type":"VHS_VideoCombine","pos":[1367,-275],"size":[371.7926940917969,334],"flags":{},"order":7,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":42,"label":"图像"},{"name":"audio","type":"AUDIO","link":null,"shape":7,"label":"音频"},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7,"label":"批次管理"},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null,"label":"文件名"}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":16,"loop_count":0,"filename_prefix":"HunyuanVideo","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":false,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_00001.mp4","subfolder":"","type":"temp","format":"video/h264-mp4","frame_rate":16,"workflow":"HunyuanVideo_00001.png"},"muted":false}}},{"id":7,"type":"HyVideoVAELoader","pos":[442,-282],"size":[379.166748046875,82],"flags":{},"order":1,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7,"label":"compile_args"}],"outputs":[{"name":"vae","type":"VAE","links":[6],"slot_index":0,"label":"vae"}],"properties":{"Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors","bf16"]},{"id":30,"type":"HyVideoTextEncode","pos":[203,247],"size":[400,200],"flags":{},"order":4,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35,"label":"text_encoders"},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":null,"shape":7,"label":"custom_prompt_template"},{"name":"clip_l","type":"CLIP","link":null,"shape":7,"label":"clip_l"},{"name":"hyvid_cfg","type":"HYVID_CFG","link":null,"shape":7,"label":"hyvid_cfg"}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[36],"label":"hyvid_embeds"}],"properties":{"Node name for S&R":"HyVideoTextEncode"},"widgets_values":["high quality anime style movie featuring a dog running in 
forest",true,"video"]},{"id":3,"type":"HyVideoSampler","pos":[668,-62],"size":[315,418],"flags":{},"order":5,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":2,"label":"model"},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":36,"label":"hyvid_embeds"},{"name":"samples","type":"LATENT","link":null,"shape":7,"label":"samples"},{"name":"stg_args","type":"STGARGS","link":null,"shape":7,"label":"stg_args"},{"name":"context_options","type":"HYVIDCONTEXT","link":null,"shape":7,"label":"context_options"},{"name":"feta_args","type":"FETAARGS","link":null,"shape":7,"label":"feta_args"},{"name":"teacache_args","type":"TEACACHEARGS","link":null,"shape":7,"label":"teacache_args"}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0,"label":"samples"}],"properties":{"Node name for S&R":"HyVideoSampler"},"widgets_values":[256,256,33,6,6,9,2,"fixed",1,1,"FlowMatchDiscreteScheduler"]},{"id":1,"type":"HyVideoModelLoader","pos":[24,-63],"size":[509.7506103515625,242],"flags":{},"order":3,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7,"label":"compile_args"},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":43,"shape":7,"label":"block_swap_args"},{"name":"lora","type":"HYVIDLORA","link":null,"shape":7,"label":"lora"}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[2],"slot_index":0,"label":"model"}],"properties":{"Node name for S&R":"HyVideoModelLoader"},"widgets_values":["hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors","bf16","fp8_e4m3fn","offload_device","sdpa",false,true]},{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-310,248],"size":[441,202],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35],"label":"hyvid_text_encoder"}],"properties":{"Node name for S&R":"DownloadAndLoadHyVideoTextEncoder"},"widgets_values":["Kijai/llava-llama-3-8b-text-encoder-tokenizer","openai/clip-vit-large-patch14","bf16",false,2,"bnb_nf4","offload_device"]}],"links":[[2,1,0,3,0,"HYVIDEOMODEL"],[4,3,0,5,1,"LATENT"],[6,7,0,5,0,"VAE"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[36,30,0,3,1,"HYVIDEMBEDS"],[42,5,0,34,0,"IMAGE"],[43,35,0,1,1,"BLOCKSWAPARGS"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8390545288824066,"offset":[398.652906055179,424.39596830361614]},"node_versions":{"comfyui-hunyuanvideowrapper":"9f50ed17d9b28603d61b636a16b865ab047d4388","comfyui-videohelpersuite":"f7369389620ff244ddd6086cf0fa792a569086f2"},"VHS_latentpreview":false,"VHS_latentpreviewrate":0},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

@kijai
Owner

kijai commented Feb 20, 2025

Disable auto_tile_size on the VAE decode node, set the temporal tile size to 64, and try 128 for the spatial tile size to reduce that node's VRAM requirement.
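Why tiling helps: without it the 3D VAE decodes all 9 latent frames at full resolution in one pass, and the intermediate activations of the upsampling convolutions, not the weights, are what exceed 16 GB. With fixed tile sizes the decoder only ever materializes activations for one chunk at a time. A rough illustration of the idea (not the wrapper's actual implementation; real tiled decoders overlap tiles and blend the seams, and the decode_fn name and [B, C, T, H, W] layout here are assumptions):

```python
import torch

def tiled_decode(decode_fn, latents: torch.Tensor, t_tile: int = 64, s_tile: int = 128):
    """Decode a [B, C, T, H, W] latent in chunks so peak memory scales with the tile."""
    b, c, t, h, w = latents.shape
    time_chunks = []
    for t0 in range(0, t, t_tile):
        rows = []
        for h0 in range(0, h, s_tile):
            cols = []
            for w0 in range(0, w, s_tile):
                tile = latents[:, :, t0:t0 + t_tile, h0:h0 + s_tile, w0:w0 + s_tile]
                cols.append(decode_fn(tile))         # decode one small chunk
            rows.append(torch.cat(cols, dim=-1))     # stitch along width
        time_chunks.append(torch.cat(rows, dim=-2))  # stitch along height
    return torch.cat(time_chunks, dim=2)             # stitch along time
```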
