Failed to import transformers.models.aria.configuration_aria because of the following error (look up to see its traceback): No module named 'transformers.models.aria.configuration_aria' #374

Open
liaceboy opened this issue Feb 19, 2025 · 2 comments

ComfyUI Error Report

Error Details

  • Node ID: 16
  • Node Type: DownloadAndLoadHyVideoTextEncoder
  • Exception Type: RuntimeError
  • Exception Message: Failed to import transformers.models.aria.configuration_aria because of the following error (look up to see its traceback):
    No module named 'transformers.models.aria.configuration_aria'
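This error typically means the installed transformers package registers `aria` in its auto-class mappings but the `configuration_aria` module is missing on disk — usually an outdated or partially upgraded install. A minimal diagnostic sketch, assuming it is run with the same embedded Python that ComfyUI uses (here `D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\python.exe`):

```python
# Diagnostic sketch: check whether the module the auto-mapping expects
# actually exists, without triggering the full transformers import chain.
import importlib.util


def module_available(name: str) -> bool:
    """Return True if `name` can be located without importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A parent package (e.g. transformers itself) is missing entirely.
        return False


if __name__ == "__main__":
    target = "transformers.models.aria.configuration_aria"
    if module_available(target):
        print(f"{target} is importable; the failure lies elsewhere")
    else:
        print(f"{target} is missing; the transformers install is likely "
              "too old or corrupted")
```

If the module is reported missing, upgrading or reinstalling transformers in that same environment (`python -m pip install -U transformers`) typically resolves it.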

Stack Trace

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 623, in loadmodel
    text_encoder = TextEncoder(

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 167, in __init__
    self.model, self.model_path = load_text_encoder(

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 39, in load_text_encoder
    text_encoder = AutoModel.from_pretrained(

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 543, in from_pretrained
    # Set the adapter kwargs

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 780, in keys
    if key in self._config_mapping.keys()

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 781, in <listcomp>
    ]

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 777, in _load_attr_from_module
    self._load_attr_from_module(key, self._model_mapping[key]),

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 693, in getattribute_from_module
    # object at the top level.

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1805, in __getattr__

  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1819, in _get_module

System Information

  • ComfyUI Version: v0.3.5
  • Arguments: D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\main.py --auto-launch --preview-method auto --disable-cuda-malloc
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 25756696576
    • VRAM Free: 23780383616
    • Torch VRAM Total: 503316480
    • Torch VRAM Free: 281795456

Logs

2025-02-19T11:25:43.796411 - got prompt
2025-02-19T11:25:44.066050 - Failed to validate prompt for output 64:
2025-02-19T11:25:44.066050 - * LoadImage 44:
2025-02-19T11:25:44.066050 -   - Custom validation failed for node: image - Invalid image file: 微信截图_20241114093542.png
2025-02-19T11:25:44.067050 - Output will be ignored
2025-02-19T11:25:44.120106 - Failed to validate prompt for output 34:
2025-02-19T11:25:44.120106 - * HyVideoVAELoader 7:
2025-02-19T11:25:44.120106 -   - Value not in list: model_name: 'hunyuan_video_vae_bf16.safetensors' not in (list of length 28)
2025-02-19T11:25:44.120106 - * HyVideoSampler 3:
2025-02-19T11:25:44.120106 -   - Return type mismatch between linked nodes: stg_args, LATENT != STGARGS
2025-02-19T11:25:44.120106 - Output will be ignored
2025-02-19T11:25:44.120106 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-02-19T11:25:52.909864 - got prompt
2025-02-19T11:25:53.008333 - Failed to validate prompt for output 34:
2025-02-19T11:25:53.008333 - * HyVideoVAELoader 7:
2025-02-19T11:25:53.008333 -   - Value not in list: model_name: 'hunyuan_video_vae_bf16.safetensors' not in (list of length 28)
2025-02-19T11:25:53.008333 - * HyVideoSampler 3:
2025-02-19T11:25:53.008333 -   - Return type mismatch between linked nodes: stg_args, LATENT != STGARGS
2025-02-19T11:25:53.008333 - Output will be ignored
2025-02-19T11:25:54.723230 - Prompt executed in 1.71 seconds
2025-02-19T11:26:06.349677 - got prompt
2025-02-19T11:26:06.445146 - Failed to validate prompt for output 34:
2025-02-19T11:26:06.445146 - * HyVideoSampler 3:
2025-02-19T11:26:06.445146 -   - Return type mismatch between linked nodes: stg_args, LATENT != STGARGS
2025-02-19T11:26:06.445146 - Output will be ignored
2025-02-19T11:26:06.864815 - Prompt executed in 0.42 seconds
2025-02-19T11:26:51.026387 - got prompt
2025-02-19T11:26:51.118897 - Failed to validate prompt for output 64:
2025-02-19T11:26:51.118897 - * LoadImage 44:
2025-02-19T11:26:51.118897 -   - Custom validation failed for node: image - Invalid image file: 微信截图_20241114093542.png
2025-02-19T11:26:51.118897 - Output will be ignored
2025-02-19T11:26:51.163406 - Failed to validate prompt for output 34:
2025-02-19T11:26:51.163406 - * HyVideoVAELoader 7:
2025-02-19T11:26:51.163406 -   - Value not in list: model_name: 'hunyuan_video_vae_bf16.safetensors' not in (list of length 28)
2025-02-19T11:26:51.163406 - * HyVideoModelLoader 1:
2025-02-19T11:26:51.163406 -   - Value not in list: model: 'skyreels_hunyuan_i2v_bf16.safetensors' not in ['F.1基础算法模型-_F.1-dev-fp8.safetensors', 'FLUX_言灵_极致优秀的动漫大模型_F.1_V1.safetensors', 'animeproFLUX_fp8E4m3fn.safetensors', 'flux1-dev.safetensors', 'flux1-dev.sft', 'hyvideo\\hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors', 'iclight_sd15_fbc.safetensors', 'iclight_sd15_fbc_unet_ldm.safetensors', 'iclight_sd15_fc.safetensors', 'iclight_sd15_fc_unet_ldm.safetensors', 'kolors\\diffusion_pytorch_model.safetensors', 'mochi\\.cache\\huggingface\\download\\mochi_preview_dit_fp8_e4m3fn.safetensors', 'mochi\\mochi_preview_dit_GGUF_Q8_0.safetensors']
2025-02-19T11:26:51.163406 - Output will be ignored
2025-02-19T11:26:51.164407 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-02-19T11:27:09.617414 - got prompt
2025-02-19T11:27:12.827779 - encoded latents shape torch.Size([1, 16, 1, 68, 120])
2025-02-19T11:27:12.827779 - Loading text encoder model (clipL) from: D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\models\clip\clip-vit-large-patch14
2025-02-19T11:27:13.422957 - Text encoder to dtype: torch.float16
2025-02-19T11:27:13.450468 - Loading tokenizer (clipL) from: D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\models\clip\clip-vit-large-patch14
2025-02-19T11:27:13.520472 - Loading text encoder model (llm) from: D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
2025-02-19T11:27:13.523469 - !!! Exception during processing !!! Failed to import transformers.models.aria.configuration_aria because of the following error (look up to see its traceback):
No module named 'transformers.models.aria.configuration_aria'
2025-02-19T11:27:13.525471 - Traceback (most recent call last):
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers.models.aria.configuration_aria'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 623, in loadmodel
    text_encoder = TextEncoder(
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 167, in __init__
    self.model, self.model_path = load_text_encoder(
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 39, in load_text_encoder
    text_encoder = AutoModel.from_pretrained(
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 543, in from_pretrained
    # Set the adapter kwargs
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 780, in keys
    if key in self._config_mapping.keys()
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 781, in <listcomp>
    ]
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 777, in _load_attr_from_module
    self._load_attr_from_module(key, self._model_mapping[key]),
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 693, in getattribute_from_module
    # object at the top level.
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1805, in __getattr__
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1819, in _get_module
RuntimeError: Failed to import transformers.models.aria.configuration_aria because of the following error (look up to see its traceback):
No module named 'transformers.models.aria.configuration_aria'

2025-02-19T11:27:13.525471 - Prompt executed in 3.56 seconds
2025-02-19T11:28:49.704368 - DownloadAndLoadHyVideoTextEncoder
2025-02-19T11:29:13.073009 - got prompt
2025-02-19T11:29:13.299549 - Loading text encoder model (clipL) from: D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\models\clip\clip-vit-large-patch14
2025-02-19T11:29:13.575570 - Text encoder to dtype: torch.float16
2025-02-19T11:29:13.618090 - Loading tokenizer (clipL) from: D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\models\clip\clip-vit-large-patch14
2025-02-19T11:29:13.674090 - Loading text encoder model (llm) from: D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
2025-02-19T11:29:13.675091 - !!! Exception during processing !!! Failed to import transformers.models.aria.configuration_aria because of the following error (look up to see its traceback):
No module named 'transformers.models.aria.configuration_aria'
2025-02-19T11:29:13.675091 - Traceback (most recent call last):
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers.models.aria.configuration_aria'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 623, in loadmodel
    text_encoder = TextEncoder(
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 167, in __init__
    self.model, self.model_path = load_text_encoder(
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 39, in load_text_encoder
    text_encoder = AutoModel.from_pretrained(
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 543, in from_pretrained
    # Set the adapter kwargs
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 780, in keys
    if key in self._config_mapping.keys()
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 781, in <listcomp>
    ]
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 777, in _load_attr_from_module
    self._load_attr_from_module(key, self._model_mapping[key]),
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 693, in getattribute_from_module
    # object at the top level.
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1805, in __getattr__
  File "D:\Comfyui\ComfyUI-aki-v1.2\ComfyUI-aki-v1.2\python\lib\site-packages\transformers\utils\import_utils.py", line 1819, in _get_module
RuntimeError: Failed to import transformers.models.aria.configuration_aria because of the following error (look up to see its traceback):
No module named 'transformers.models.aria.configuration_aria'

2025-02-19T11:29:13.676090 - Prompt executed in 0.44 seconds
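Both failed runs die at the same point: DownloadAndLoadHyVideoTextEncoder loading the llm text encoder via `AutoModel.from_pretrained`, before any weights are read, so the workflow graph itself is not the cause. A small version-gate sketch for the embedded environment; the minimum version below is an assumption, not confirmed — check the transformers release notes for the release that added `models/aria`:

```python
# Version-gate sketch. MIN_ASSUMED is a hypothetical lower bound, not
# confirmed against the transformers changelog.
from importlib.metadata import PackageNotFoundError, version

MIN_ASSUMED = (4, 48)  # assumption


def parse_major_minor(v: str) -> tuple:
    """'4.48.1' -> (4, 48); non-digit characters in a part are ignored."""
    parts = []
    for p in v.split(".")[:2]:
        digits = "".join(ch for ch in p if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)


def transformers_new_enough() -> bool:
    """True if an installed transformers meets the assumed minimum."""
    try:
        return parse_major_minor(version("transformers")) >= MIN_ASSUMED
    except PackageNotFoundError:
        return False


if __name__ == "__main__":
    print("transformers recent enough:", transformers_new_enough())
```

Running this with the bundled interpreter (not the system Python) shows whether the portable ComfyUI package shipped an older transformers than the HunyuanVideoWrapper node expects.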

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":64,"last_link_id":89,"nodes":[{"id":57,"type":"HyVideoTorchCompileSettings","pos":[-697.438720703125,-518.0609130859375],"size":[441,274],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"torch_compile_args","type":"COMPILEARGS","links":[79],"label":"torch_compile_args"}],"properties":{"Node name for S&R":"HyVideoTorchCompileSettings","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["inductor",false,"default",false,64,true,true,false,false,false]},{"id":58,"type":"HyVideoCFG","pos":[332.0693664550781,322.355224609375],"size":[400,200],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_cfg","type":"HYVID_CFG","links":[86],"slot_index":0,"label":"hyvid_cfg"}],"properties":{"Node name for S&R":"HyVideoCFG","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion",6,0,0.1]},{"id":60,"type":"ColorMatch","pos":[1731.7098388671875,-148.0238494873047],"size":[315,102],"flags":{},"order":13,"mode":0,"inputs":[{"name":"image_ref","type":"IMAGE","link":84,"label":"参考图像"},{"name":"image_target","type":"IMAGE","link":83,"label":"目标图像"}],"outputs":[{"name":"image","type":"IMAGE","links":[85],"slot_index":0,"label":"图像"}],"properties":{"Node name for S&R":"ColorMatch","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send 
Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["mkl",1]},{"id":52,"type":"ImageConcatMulti","pos":[2075.693115234375,-122.45597076416016],"size":[210,150],"flags":{},"order":14,"mode":0,"inputs":[{"name":"image_1","type":"IMAGE","link":71,"label":"图像_1"},{"name":"image_2","type":"IMAGE","link":85,"label":"图像_2"}],"outputs":[{"name":"images","type":"IMAGE","links":[73],"slot_index":0,"label":"图像"}],"properties":{},"widgets_values":[2,"right",false,null]},{"id":43,"type":"HyVideoEncode","pos":[449.0269775390625,648.575439453125],"size":[315,198],"flags":{},"order":9,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":54,"label":"vae"},{"name":"image","type":"IMAGE","link":63,"label":"image"}],"outputs":[{"name":"samples","type":"LATENT","links":[89],"slot_index":0,"label":"samples"}],"properties":{"Node name for S&R":"HyVideoEncode","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":[false,64,256,true]},{"id":5,"type":"HyVideoDecode","pos":[1347.7462158203125,-192.7362823486328],"size":[345.4285888671875,150],"flags":{},"order":12,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":6,"label":"vae"},{"name":"samples","type":"LATENT","link":4,"label":"samples"}],"outputs":[{"name":"images","type":"IMAGE","links":[83],"slot_index":0,"label":"images"}],"properties":{"Node name for S&R":"HyVideoDecode","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send 
Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":[true,64,192,false]},{"id":34,"type":"VHS_VideoCombine","pos":[2308.818603515625,-119.01728820800781],"size":[1156.6568603515625,334],"flags":{},"order":15,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":73,"label":"图像"},{"name":"audio","type":"AUDIO","link":null,"label":"音频","shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"label":"批次管理","shape":7},{"name":"vae","type":"VAE","link":null,"label":"vae","shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null,"label":"文件名"}],"properties":{"Node name for S&R":"VHS_VideoCombine","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HunyuanVideo_skyreel_I2V","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_skyreel_I2V_00004.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"HunyuanVideo_skyreel_I2V_00004.png","fullpath":"E:\\ComfyUI-aki-e1205\\output\\HunyuanVideo_skyreel_I2V_00004.mp4"},"muted":false}}},{"id":45,"type":"ImageResizeKJ","pos":[-220.4620819091797,732.4263916015625],"size":[315,266],"flags":{},"order":7,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":56,"label":"图像"},{"name":"get_image_size","type":"IMAGE","link":null,"label":"参考图像","shape":7},{"name":"width_input","type":"INT","link":null,"label":"宽度","shape":7,"widget":{"name":"width_input"}},{"name":"height_input","type":"INT","link":null,"label":"高度","shape":7,"widget":{"name":"height_input"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[63,71,84,88],"slot_index":0,"label":"图像"},{"name":"width","type":"INT","links":[69],"slot_index":1,"label":"宽度"},{"name":"height","typ
e":"INT","links":[70],"slot_index":2,"label":"高度"}],"properties":{"Node name for S&R":"ImageResizeKJ","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":[960,544,"lanczos",false,2,0,0,"center"]},{"id":64,"type":"PreviewImage","pos":[164.6226348876953,751.6864624023438],"size":[210,246],"flags":{},"order":10,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":88,"label":"图像"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":[]},{"id":59,"type":"HyVideoBlockSwap","pos":[-588.6229248046875,-192.00633239746094],"size":[315,130],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"block_swap_args","type":"BLOCKSWAPARGS","links":[87],"slot_index":0,"label":"block_swap_args"}],"properties":{"Node name for S&R":"HyVideoBlockSwap","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":[20,0,false,false]},{"id":30,"type":"HyVideoTextEncode","pos":[329.66632080078125,15.596169471740723],"size":[437.3100280761719,240.78936767578125],"flags":{},"order":8,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35,"label":"text_encoders"},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":null,"label":"custom_prompt_template","shape":7},{"name":"clip_l","type":"CLIP","link":null,"label":"clip_l","shape":7},{"name":"hyvid_cfg","type":"HYVID_CFG","link":86,"label":"hyvid_cfg","shape":7}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[74],"slot_index":0,"label":"hyvid_embeds"}],"properties":{"Node name for S&R":"HyVideoTextEncode","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send 
Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["A beautiful woman is standing in the pool.","bad quality video","video"]},{"id":3,"type":"HyVideoSampler","pos":[958.2782592773438,-209.86329650878906],"size":[315,739],"flags":{},"order":11,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":2,"label":"model"},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":74,"label":"hyvid_embeds"},{"name":"samples","type":"LATENT","link":89,"label":"samples","shape":7},{"name":"stg_args","type":"STGARGS","link":null,"label":"stg_args","shape":7},{"name":"context_options","type":"COGCONTEXT","link":null,"label":"context_options","shape":7},{"name":"feta_args","type":"FETAARGS","link":null,"label":"feta_args","shape":7},{"name":"feta_args","type":"FETAARGS","link":null,"label":"feta_args","shape":7},{"name":"width","type":"INT","link":69,"label":"width","widget":{"name":"width"}},{"name":"height","type":"INT","link":70,"label":"height","widget":{"name":"height"}},{"name":"teacache_args","type":"TEACACHEARGS","link":null,"label":"teacache_args","shape":7}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0,"label":"samples"}],"properties":{"Node name for S&R":"HyVideoSampler","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":[512,320,49,30,1,7,14,"fixed",1,1]},{"id":7,"type":"HyVideoVAELoader","pos":[-684.8287963867188,342.1769104003906],"size":[379.166748046875,82],"flags":{},"order":3,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"label":"compile_args","shape":7}],"outputs":[{"name":"vae","type":"VAE","links":[6,54],"slot_index":0,"label":"vae"}],"properties":{"Node name for S&R":"HyVideoVAELoader","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send 
Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["hyvid\\hunyuan_video_vae_bf16.safetensors","bf16"]},{"id":44,"type":"LoadImage","pos":[-628,699],"size":[315,314.0000305175781],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[56],"slot_index":0,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["Liblib_00104_.png","image"]},{"id":1,"type":"HyVideoModelLoader","pos":[-212.74745178222656,-470.8774108886719],"size":[426.1773986816406,242],"flags":{},"order":6,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":79,"label":"compile_args","shape":7},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":87,"label":"block_swap_args","shape":7},{"name":"lora","type":"HYVIDLORA","link":null,"label":"lora","shape":7}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[2],"slot_index":0,"label":"model"}],"properties":{"Node name for S&R":"HyVideoModelLoader","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["hyvideo\\hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors","bf16","fp8_e4m3fn_fast","offload_device","sageattn_varlen",false]},{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-685,53],"size":[441,202],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35],"label":"hyvid_text_encoder"}],"properties":{"Node name for S&R":"DownloadAndLoadHyVideoTextEncoder","enableTabs":false,"tabWidth":65,"tabXOffset":10,"hasSecondTab":false,"secondTabText":"Send 
Back","secondTabOffset":80,"secondTabWidth":65},"widgets_values":["Kijai/llava-llama-3-8b-text-encoder-tokenizer","openai/clip-vit-large-patch14","fp16",false,2,"disabled"]}],"links":[[2,1,0,3,0,"HYVIDEOMODEL"],[4,3,0,5,1,"LATENT"],[6,7,0,5,0,"VAE"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[54,7,0,43,0,"VAE"],[56,44,0,45,0,"IMAGE"],[63,45,0,43,1,"IMAGE"],[69,45,1,3,7,"INT"],[70,45,2,3,8,"INT"],[71,45,0,52,0,"IMAGE"],[73,52,0,34,0,"IMAGE"],[74,30,0,3,1,"HYVIDEMBEDS"],[79,57,0,1,0,"COMPILEARGS"],[83,5,0,60,1,"IMAGE"],[84,45,0,60,0,"IMAGE"],[85,60,0,52,1,"IMAGE"],[86,58,0,30,3,"HYVID_CFG"],[87,59,0,1,1,"BLOCKSWAPARGS"],[88,45,0,64,0,"IMAGE"],[89,43,0,3,2,"LATENT"]],"groups":[{"id":1,"title":"UNET模型加载","bounding":[-707.438720703125,-591.660888671875,936.8897094726562,545.675537109375],"color":"#3f789e","font_size":24,"flags":{}},{"id":2,"title":"Clip模型","bounding":[-694.7820434570312,-20.75260353088379,461,285.6000061035156],"color":"#3f789e","font_size":24,"flags":{}},{"id":3,"title":"提示词","bounding":[319.66632080078125,-58.00385284423828,457.31005859375,590.3590698242188],"color":"#3f789e","font_size":24,"flags":{}},{"id":4,"title":"VAE模型","bounding":[-694.8287963867188,268.576904296875,399.166748046875,165.60000610351562],"color":"#3f789e","font_size":24,"flags":{}},{"id":5,"title":"采样器","bounding":[948.2782592773438,-283.4632873535156,335.00006103515625,822.6000366210938],"color":"#3f789e","font_size":24,"flags":{}},{"id":6,"title":"skyreel 混元 图生视频    up 
楚门的AI世界","bounding":[-690.8129272460938,-760.4976806640625,690.1055297851562,107.3684310913086],"color":"#3f789e","font_size":66,"flags":{}},{"id":7,"title":"VAE编码","bounding":[439.0269775390625,574.9754638671875,335.0000305175781,281.6000061035156],"color":"#3f789e","font_size":24,"flags":{}},{"id":8,"title":"VAE解码","bounding":[1337.7462158203125,-266.3362731933594,365.4285888671875,233.60000610351562],"color":"#3f789e","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":1.0610764609500014,"offset":[1149.8851070971846,119.19675555700128]},"node_versions":{"ComfyUI-HunyuanVideoWrapper":"4fad5e349d84e80f53b4aeb51679f5105fd6598b","comfy-core":"0.3.14","ComfyUI-KJNodes":"095c8d4b526ba3c1f12fd9dd1d7f3540c6a11358","ComfyUI-VideoHelperSuite":"c47b10ca1798b4925ff5a5f07d80c51ca80a837d"},"VHS_latentpreview":true,"VHS_latentpreviewrate":0,"ue_links":[]},"version":0.4}

Additional Context

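The original error usually means the installed transformers package is too old to contain the aria model (aria only appears in relatively recent transformers releases). A quick way to check what the embedded Python actually has — a diagnostic sketch using only the standard library; the helper name is mine, not part of any project:

```python
import importlib.metadata
import importlib.util


def check_aria_support():
    """Return (transformers_version_or_None, has_aria_module)."""
    # find_spec returns None when the package is not installed at all.
    if importlib.util.find_spec("transformers") is None:
        return (None, False)
    version = importlib.metadata.version("transformers")
    try:
        has_aria = importlib.util.find_spec("transformers.models.aria") is not None
    except Exception:  # a broken install can raise while probing submodules
        has_aria = False
    return (version, has_aria)


print(check_aria_support())
```

If the second value is False, updating transformers with the embedded interpreter's pip is the usual remedy; confirm the minimum required version against the transformers release notes rather than guessing.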

@hooknick1979

hooknick1979 commented Feb 19, 2025

I am having issues with the same node. I updated ComfyUI, and the text-to-video version was running fine prior to the update.

RuntimeError: Failed to import transformers.models.timm_wrapper.configuration_timm_wrapper because of the following error (look up to see its traceback):
cannot import name 'ImageNetInfo' from 'timm.data' (D:\ComfyUI_windows_portable_312\python_embeded\Lib\site-packages\timm\data\__init__.py)

Do I need to update Python? Python version: 3.12.8

@hooknick1979

I found a solution, thanks to some posts and Gemini: I had to edit the Python file named in the error with the following code.

"python_embeded\Lib\site-packages\timm\data\__init__.py"

from .auto_augment import (RandAugment, AutoAugment, rand_augment_ops, auto_augment_policy,
                           rand_augment_transform, auto_augment_transform)
from .config import resolve_data_config
from .constants import *
from .dataset import ImageDataset, IterableImageDataset, AugMixDataset
from .dataset_factory import create_dataset
from .loader import create_loader
from .mixup import Mixup, FastCollateMixup
from .parsers import (create_parser,
                      get_img_extensions, is_img_extension, set_img_extensions, add_img_extensions, del_img_extensions)
from .real_labels import RealLabelsImagenet
from .transforms import *
from .transforms_factory import create_transform


class ImageNetInfo:
    """Stub: minimal ImageNet dataset information."""
    def __init__(self):  # note: the method must be the dunder __init__, not init
        self.label_descriptions = {}
        self.labels = {}
        self.label_to_name = {}
        self.classes = []
        self.class_to_idx = {}


def infer_imagenet_subset(dataset_name):
    """Stub: infer ImageNet subset from dataset name (always returns None)."""
    return None
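Before (or instead of) hand-patching site-packages, it is worth checking whether the installed timm already exports ImageNetInfo — newer timm releases ship it from timm.data, so upgrading timm is usually the cleaner fix. A small check (a sketch; the function name is mine, it uses only the standard library, and it works whether or not timm is installed):

```python
import importlib.util


def timm_has_imagenet_info():
    """True if timm is installed and timm.data exports ImageNetInfo."""
    if importlib.util.find_spec("timm") is None:
        return False
    try:
        from timm.data import ImageNetInfo  # noqa: F401
    except Exception:  # ImportError, or anything a broken install raises
        return False
    return True


print(timm_has_imagenet_info())
```

If this prints False, running `python -m pip install -U timm` with the embedded interpreter is a less invasive alternative to editing `__init__.py` by hand, since hand edits are lost on the next reinstall.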
