[Feature Request] Make token indexing work for batched input and UnifiedTransformer #138
Oh wait, seems like token indexing is supposed to work only with |
Butanium changed the title from "[Bug] Token indexing seems to be broken" to "[Feature Request] Make token indexing work for batched input" on May 26, 2024
Ok, but with UnifiedTransformer token indexing doesn't work, as the padding side is right by default:

```python
from nnsight import LanguageModel
from nnsight.models.UnifiedTransformer import UnifiedTransformer

l = ["ab dfez zd", "a", "b"]

model = LanguageModel("gpt2", device_map="cpu")

# Batched trace: index token 0 across the whole batch
with model.trace(l):
    inp = model.input[1]['input_ids'].token[0].save()

# One invoke per prompt on LanguageModel
with model.trace() as tracer:
    inp_l = []
    for s in l:
        with tracer.invoke(s):
            inp_l.append(model.input[1]['input_ids'].token[0].save())

# Same pattern on UnifiedTransformer
umodel = UnifiedTransformer("gpt2", device="cpu")
with umodel.trace() as tracer:
    inp_l2 = []
    for s in l:
        with tracer.invoke(s):
            inp_l2.append(umodel.input[1]['input'].token[0].save())

print(inp)
print([i.item() for i in inp_l])
print([i.item() for i in inp_l2])
```
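To illustrate the underlying problem without loading a model, here is a minimal dependency-free sketch of how padding side interacts with fixed-position token indexing. The token ids and the `pad_batch` helper are hypothetical, for illustration only; `50256` is GPT-2's eos token, which is commonly reused as the pad token.

```python
# Hypothetical sketch: why a fixed index like token[0] can hit padding.
PAD = 50256  # GPT-2 eos token id, commonly reused for padding

def pad_batch(seqs, side):
    """Pad variable-length id sequences to a rectangle on the given side."""
    width = max(len(s) for s in seqs)
    padded = []
    for s in seqs:
        pad = [PAD] * (width - len(s))
        padded.append(pad + s if side == "left" else s + pad)
    return padded

# Illustrative token ids for a batch like ["ab dfez zd", "a", "b"]
batch = [[321, 288, 9], [64], [65]]

left = pad_batch(batch, "left")    # [[321, 288, 9], [PAD, PAD, 64], [PAD, PAD, 65]]
right = pad_batch(batch, "right")  # [[321, 288, 9], [64, PAD, PAD], [65, PAD, PAD]]

# With left padding, position 0 of a short row is PAD, so indexing must
# offset by the pad width to reach the first real token. With right padding,
# position 0 is always real, but any later fixed position can hit PAD.
```

This is why `.token[0]` behaves differently depending on the tokenizer's `padding_side`: the same positional index points at real tokens in one layout and at padding in the other.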
Butanium changed the title from "[Feature Request] Make token indexing work for batched input" to "[Feature Request] Make token indexing work for batched input and UnifiedTransformer" on May 26, 2024
Using `.token[0]` returns the padding token.

out: