
EncoderDecoderModel not compatible with generate() method #478

Closed
@ZeguanXiao

Description


Environment info

  • adapter-transformers version: 3.1.0
  • Platform: Ubuntu 18.04 (Linux-5.4.0-87-generic-x86_64-with-glibc2.27)
  • Python version: Python 3.9.13
  • PyTorch version (GPU?): 1.13.1 (GPU)
  • Tensorflow version (GPU?): False
  • Using GPU in script?: True
  • Using distributed or parallel set-up in script?: Yes

Information

Model I am using (Bert, XLNet ...): EncoderDecoderModel

Language I am using the model on (English, Chinese ...): English

Adapter setup I am using (if any): AdapterConfig

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQUaD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

```python
from transformers import EncoderDecoderModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
model.add_adapter("pfeiffer")
model.set_active_adapters("pfeiffer")

text = "This is a test sentence."
inputs = tokenizer(text, return_tensors="pt")

model.generate(inputs.input_ids, bos_token_id=tokenizer.bos_token_id)
```

Error message:

```
Traceback (most recent call last):
  File "/nfsshare/home/xiaozeguan/anaconda3/envs/tnmt/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-16-61f95235ade8>", line 1, in <module>
    model.generate(inputs.input_ids, bos_token_id=tokenizer.bos_token_id)
  File "/nfsshare/home/xiaozeguan/anaconda3/envs/tnmt/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/nfsshare/home/xiaozeguan/anaconda3/envs/tnmt/lib/python3.7/site-packages/transformers/generation_utils.py", line 1343, in generate
    inputs_tensor, model_kwargs, model_input_name
  File "/nfsshare/home/xiaozeguan/anaconda3/envs/tnmt/lib/python3.7/site-packages/transformers/generation_utils.py", line 585, in _prepare_encoder_decoder_kwargs_for_generation
    with ForwardContext(self, **encoder_kwargs):
  File "/nfsshare/home/xiaozeguan/anaconda3/envs/tnmt/lib/python3.7/site-packages/transformers/adapters/context.py", line 86, in __init__
    model.forward_context(self, *args, **kwargs)
  File "/nfsshare/home/xiaozeguan/anaconda3/envs/tnmt/lib/python3.7/site-packages/transformers/adapters/model_mixin.py", line 794, in forward_context
    context.prefix_states = self.base_model.prefix_tuning(*args, **kwargs)
  File "/nfsshare/home/xiaozeguan/anaconda3/envs/tnmt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1270, in __getattr__
    type(self).__name__, name))
AttributeError: 'EncoderDecoderModel' object has no attribute 'prefix_tuning'
```
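The traceback suggests where the incompatibility lies: during `generate()`, `forward_context` looks up `prefix_tuning` on `self.base_model`, but for the composite `EncoderDecoderModel` that attribute appears to live only on the wrapped encoder/decoder models, so the lookup falls through to `nn.Module.__getattr__` and raises. A minimal stdlib sketch of the same delegation gap (all class and function names below are illustrative stand-ins, not the real adapter-transformers code):

```python
class Submodel:
    """Stands in for a wrapped single model (e.g. BERT), which does own the pool."""
    prefix_tuning = object()  # placeholder for the per-model prefix-tuning pool


class Wrapper:
    """Stands in for EncoderDecoderModel: it composes two submodels but
    never gains their adapter attributes itself."""
    def __init__(self):
        self.encoder = Submodel()
        self.decoder = Submodel()

    @property
    def base_model(self):
        # The wrapper acts as its own base model.
        return self


def forward_context(model):
    # Mirrors the failing line in model_mixin.forward_context: it assumes
    # the base model carries a prefix_tuning pool -- true for a single
    # model, false for the composite wrapper.
    return model.base_model.prefix_tuning


wrapper = Wrapper()
try:
    forward_context(wrapper)
except AttributeError as err:
    print(err)  # 'Wrapper' object has no attribute 'prefix_tuning'
```

The submodels each carry the attribute, but the wrapper's own attribute lookup fails, which matches the `AttributeError` above.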

Labels: bug (Something isn't working)