For --model mae_vit_base_patch8_128, KeyError #7

Open · robmarkcole opened this issue Apr 8, 2024 · 5 comments

@robmarkcole commented Apr 8, 2024

Running eurosat_finetune, I hit the following error:

    model = models_vit_tensor.__dict__[args.model](drop_path_rate=args.drop_path,
KeyError: 'mae_vit_base_patch8_128'

Adding print(list(models_vit_tensor.__dict__.keys())) I see:

    ['__name__', '__doc__', '__package__', '__loader__', '__spec__', '__file__', '__cached__', '__builtins__', 'partial', 'torch', 'nn',
    'Attention', 'Block', 'PatchEmbed', 'Linear_Block', 'Linear_Attention', 'VisionTransformer', 'vit_huge_patch14', 'vit_base_patch16',
    'vit_base_patch8', 'vit_base_patch8_128', 'vit_base_patch8_channel10', 'vit_base_patch16_128', 'vit_large_patch16',
    'vit_large_patch8_128', 'vit_huge_patch8_128', 'vit_base_patch8_120']

Possibly missing from the script: import models_mae_spectral, and at line 273: model = models_mae_spectral.__dict__[args.model]().
However, I then get AttributeError: 'MaskedAutoencoderViT' object has no attribute 'head'
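For what it's worth, the KeyError itself is just models_vit_tensor not registering any mae_* constructors (only the vit_* names listed above). A minimal sketch of a guard that would surface this more clearly, assuming only the drop_path_rate argument shown in the traceback (any other constructor arguments are omitted here):

    import models_vit_tensor

    # Fail with a readable message instead of a bare KeyError when the
    # requested model name is not registered in models_vit_tensor.
    if args.model not in models_vit_tensor.__dict__:
        available = sorted(k for k in models_vit_tensor.__dict__ if k.startswith('vit_'))
        raise ValueError(f"Unknown model '{args.model}'; available: {available}")
    model = models_vit_tensor.__dict__[args.model](drop_path_rate=args.drop_path)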

@moonboy12138

Thank you for your kind reminder. Please use the correct flag --model vit_base_patch8_128 during finetuning. We will correct this as soon as possible.

@robmarkcole (Author)

I then get

[11:43:49.174991] Load pre-trained checkpoint from: /teamspace/studios/this_studio/ieee_tpami_spectralgpt/weights/SpectralGPT+.pth
Traceback (most recent call last):
  File "/teamspace/studios/this_studio/ieee_tpami_spectralgpt/main_finetune.py", line 455, in <module>
    main(args)
  File "/teamspace/studios/this_studio/ieee_tpami_spectralgpt/main_finetune.py", line 293, in main
    if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:
KeyError: 'pos_embed_spatial'

@moonboy12138 commented Apr 8, 2024

You can modify lines 291-292 of main_finetune.py by deleting 'pos_embed_spatial' to get the code to run. If the same error then appears for another key, apply the same fix to that key.
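For reference, the block around lines 291-293 presumably follows the usual MAE finetuning recipe sketched below; only 'pos_embed_spatial' and 'pos_embed_temporal' are confirmed by this thread, the rest of the key list is an assumption:

    # Keys listed here are dropped from the checkpoint when their shape
    # disagrees with the finetuning model. Deleting 'pos_embed_spatial'
    # (and 'pos_embed_temporal') from the list avoids indexing state_dict
    # with keys the finetuning model does not have.
    for k in ['pos_embed_spatial', 'pos_embed_temporal', 'head.weight', 'head.bias']:
        if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:
            print(f"Removing key {k} from pretrained checkpoint")
            del checkpoint_model[k]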

@robmarkcole (Author)

OK, after removing 'pos_embed_spatial' and 'pos_embed_temporal' I can proceed.

@Ahuiforever

> OK, after removing 'pos_embed_spatial' and 'pos_embed_temporal' I can proceed.

I suggest modifying the source code as follows to guard against other potential problems.

from

    if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:

to

    if k in checkpoint_model and k in state_dict and checkpoint_model[k].shape != state_dict[k].shape:
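In context, the guarded check would look roughly like this (again a sketch; the surrounding loop and key list are assumed rather than copied from the repo):

    # With the additional `k in state_dict` guard, keys that exist only in
    # the pretrained checkpoint are skipped instead of raising a KeyError,
    # regardless of which names appear in the list below.
    for k in ['pos_embed_spatial', 'pos_embed_temporal', 'head.weight', 'head.bias']:
        if k in checkpoint_model and k in state_dict and checkpoint_model[k].shape != state_dict[k].shape:
            print(f"Removing key {k} from pretrained checkpoint")
            del checkpoint_model[k]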
