I get the same error in the "HunyuanVideo Sampler" node.
Is there a solution for this?
Loading tokenizer (llm) from: /home/wangjiafeng/data/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
end_vram - start_vram: 33554432 - 33554432 = 0
#16 [DownloadAndLoadHyVideoTextEncoder]: 5.57s - vram 0b
llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 19
clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 20
end_vram - start_vram: 15171076224 - 33554432 = 15137521792
#30 [HyVideoTextEncode]: 11.16s - vram 15137521792b
end_vram - start_vram: 35655024 - 35655024 = 0
#73 [easy cleanGpuUsed]: 0.42s - vram 0b
end_vram - start_vram: 35655024 - 35655024 = 0
#59 [HyVideoBlockSwap]: 0.00s - vram 0b
model_type FLOW
The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])
Using accelerate to load and assign model weights to device...
Requested to load HyVideoModel
loaded completely 22257.48421936035 22257.419921875 False
end_vram - start_vram: 23374251376 - 35655024 = 23338596352
#1 [HyVideoModelLoader]: 19.07s - vram 23338596352b
end_vram - start_vram: 35655024 - 35655024 = 0
#72 [easy cleanGpuUsed]: 0.81s - vram 0b
end_vram - start_vram: 35655024 - 35655024 = 0
#64 [HyVideoEnhanceAVideo]: 0.00s - vram 0b
end_vram - start_vram: 35655024 - 35655024 = 0
#44 [LoadImage]: 0.06s - vram 0b
end_vram - start_vram: 35655024 - 35655024 = 0
#45 [ImageResizeKJ]: 0.04s - vram 0b
end_vram - start_vram: 528612630 - 35655024 = 492957606
#7 [HyVideoVAELoader]: 1.90s - vram 492957606b
encoded latents shape torch.Size([1, 16, 1, 120, 68])
end_vram - start_vram: 2275304773 - 528612630 = 1746692143
#43 [HyVideoEncode]: 0.29s - vram 1746692143b
Input (height, width, video_length) = (960, 544, 97)
The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Swapping 20 double blocks and 10 single blocks
image_cond_latents shape: torch.Size([1, 16, 1, 120, 68])
image_latents shape: torch.Size([1, 16, 25, 120, 68])
Sampling 97 frames in 25 latents at 544x960 with 30 inference steps
0%| | 0/30 [00:00<?, ?it/s]
!!! Exception during processing !!! 'NoneType' object is not callable
Traceback (most recent call last):
File "/home/wangjiafeng/data/ComfyUI/execution.py", line 328, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/wangjiafeng/data/ComfyUI/execution.py", line 203, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/wangjiafeng/data/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/wangjiafeng/data/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/home/wangjiafeng/data/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/nodes.py", line 1294, in process
out_latents = model["pipe"](
File "/home/wangjiafeng/anaconda3/envs/comfyui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/wangjiafeng/data/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py", line 776, in call
noise_pred = self.transformer( # For an input image (129, 192, 336) (1, 256, 256)
File "/home/wangjiafeng/anaconda3/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wangjiafeng/anaconda3/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wangjiafeng/data/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 1047, in forward
img, txt = _process_double_blocks(img, txt, vec, block_args)
File "/home/wangjiafeng/data/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 894, in _process_double_blocks
img, txt = block(img, txt, vec, *block_args)
File "/home/wangjiafeng/anaconda3/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wangjiafeng/anaconda3/envs/comfyui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wangjiafeng/data/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/models.py", line 257, in forward
attn = attention(
File "/home/wangjiafeng/data/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoWrapper/hyvideo/modules/attention.py", line 189, in attention
x = flash_attn_varlen_func(
TypeError: 'NoneType' object is not callable
end_vram - start_vram: 15993282806 - 35916144 = 15957366662
#3 [HyVideoSampler]: 1.97s - vram 15957366662b
Prompt executed in 41.32 seconds
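The TypeError at the bottom of the traceback means `flash_attn_varlen_func` is `None` at call time, which typically happens when the flash-attn package is not installed (or fails to import for the current CUDA/PyTorch build) and the wrapper is left with a `None` placeholder for that function. Below is a minimal sketch of that failure mode and a quick environment check; the remark about choosing a different attention implementation in the model loader is an assumption about the node's options, not something shown in this log:

```python
# Minimal sketch (not code from the wrapper) of the failure mode behind
# "TypeError: 'NoneType' object is not callable" in attention.py:
# if flash-attn is missing or built for a different CUDA/PyTorch version,
# the import fails and the function name stays None.
try:
    from flash_attn import flash_attn_varlen_func
except ImportError:
    flash_attn_varlen_func = None

if flash_attn_varlen_func is None:
    # Calling this None "function" reproduces the error in the traceback above.
    print("flash-attn is not importable; install a wheel matching your "
          "CUDA/PyTorch build, or select a non-flash attention mode if the "
          "model loader node offers one")
else:
    print("flash-attn import OK:", flash_attn_varlen_func)
```

If the import fails in your environment, installing a flash-attn wheel that matches your CUDA and PyTorch versions, or switching the loader away from flash attention (if such an option exists), should avoid calling into the `None` function.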