
No module named 'transformers.models.ijepa.configuration_ijepa' #202

Open
sidorowicz-aleksandra opened this issue Feb 12, 2025 · 0 comments


sidorowicz-aleksandra commented Feb 12, 2025

I installed transformers 4.46.1 and parler-tts from the GitHub repository. I run parler-tts in Kaggle notebooks.

!pip install transformers==4.46.1 --q
!pip install git+https://github.com/huggingface/parler-tts.git --q
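Since errors like this often come from a mixed or partially overwritten install (e.g. one of the two `pip install` commands replacing the other's transformers), a quick sanity check is to confirm which transformers version the environment actually ended up with. This is a hypothetical diagnostic, not part of the original report:

```python
# Hypothetical diagnostic: report which transformers version pip left installed.
# A mismatch between this and the version the code expects can leave the lazy
# import mapping pointing at submodules that do not exist on disk.
from importlib import metadata

try:
    version = metadata.version("transformers")
except metadata.PackageNotFoundError:
    # Package is not installed at all in this environment.
    version = None

print("transformers version:", version)
```

If the printed version differs from the one you just installed, restarting the notebook kernel after installation is usually needed so Python picks up the new package.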

After running

import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)

I receive the following error:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)

/usr/lib/python3.10/importlib/__init__.py in import_module(name, package)
    125             level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)
    127 

/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)

/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)

/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)

ModuleNotFoundError: No module named 'transformers.models.ijepa.configuration_ijepa'

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
<ipython-input-41-b2a5dcf73a0e> in <cell line: 5>()
      3 
      4 # Load model and tokenizer
----> 5 model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
      6 tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
      7 

/usr/local/lib/python3.10/dist-packages/parler_tts/modeling_parler_tts.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2486         kwargs["_fast_init"] = False
   2487 
-> 2488         return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
   2489 
   2490     @classmethod

/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, weights_only, *model_args, **kwargs)
   4128                 missing_keys = [k for k in missing_keys if k not in missing_in_group]
   4129 
-> 4130         # Some models may have keys that are not in the state by design, removing them before needlessly warning
   4131         # the user.
   4132         if cls._keys_to_ignore_on_load_missing is not None:

/usr/local/lib/python3.10/dist-packages/parler_tts/modeling_parler_tts.py in __init__(self, config, text_encoder, audio_encoder, decoder)
   2351             from transformers.models.auto.modeling_auto import AutoModel
   2352 
-> 2353             audio_encoder = AutoModel.from_config(config.audio_encoder)
   2354 
   2355         if decoder is None:

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in from_config(cls, config, **kwargs)
    420             trust_remote_code, config._name_or_path, has_local_code, has_remote_code
    421         )
--> 422 
    423         if has_remote_code and trust_remote_code:
    424             class_ref = config.auto_map[cls.__name__]

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in keys(self)
    778             (
    779                 self._load_attr_from_module(key, self._config_mapping[key]),
--> 780                 self._load_attr_from_module(key, self._model_mapping[key]),
    781             )
    782             for key in self._model_mapping.keys()

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in <listcomp>(.0)
    779                 self._load_attr_from_module(key, self._config_mapping[key]),
    780                 self._load_attr_from_module(key, self._model_mapping[key]),
--> 781             )
    782             for key in self._model_mapping.keys()
    783             if key in self._config_mapping.keys()

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in _load_attr_from_module(self, model_type, attr)
    775 
    776     def items(self):
--> 777         mapping_items = [
    778             (
    779                 self._load_attr_from_module(key, self._config_mapping[key]),

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in getattribute_from_module(module, attr)
    691     if isinstance(attr, tuple):
    692         return tuple(getattribute_from_module(module, a) for a in attr)
--> 693     if hasattr(module, attr):
    694         return getattr(module, attr)
    695     # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the

/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in __getattr__(self, name)

/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)

RuntimeError: Failed to import transformers.models.ijepa.configuration_ijepa because of the following error (look up to see its traceback):
No module named 'transformers.models.ijepa.configuration_ijepa'
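For context on why the traceback shows a `ModuleNotFoundError` wrapped in a `RuntimeError`: transformers resolves model classes lazily, importing `transformers.models.<name>.configuration_<name>` only when the auto-mapping is first accessed, and re-raises any import failure as a `RuntimeError`. Below is a minimal sketch of that mechanism (not the real transformers code; the class and the deliberately missing module name are stand-ins):

```python
import importlib


class LazyModule:
    """Minimal sketch of a transformers-style lazy attribute loader (hypothetical)."""

    def __init__(self, attr_to_module):
        # Maps an attribute name to the dotted module path it should import.
        self._attr_to_module = attr_to_module

    def __getattr__(self, name):
        module_name = self._attr_to_module[name]
        try:
            return importlib.import_module(module_name)
        except ModuleNotFoundError as exc:
            # Like transformers, re-raise lazy-import failures as RuntimeError;
            # this is why the traceback above shows both exception types.
            raise RuntimeError(
                f"Failed to import {module_name} because of the following error "
                f"(look up to see its traceback):\n{exc}"
            ) from exc


# "no_such_pkg.configuration_x" stands in for a submodule missing from disk.
registry = LazyModule({"IJepaConfig": "no_such_pkg.configuration_x"})

try:
    registry.IJepaConfig
except RuntimeError as err:
    print(type(err).__name__)
```

Under this reading, the installed transformers package advertises an `ijepa` entry in its auto-mapping but the corresponding module file is absent, which points at a version/install mismatch rather than a bug in the loading code itself.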