Issues: huggingface/transformers
Assistant Decoding for Llava-Onevision Does Not Work (bug)
#37471 opened Apr 13, 2025 by Brianzhengca
modelling_llama -> sdpa_attention; ValueError: too many values to unpack (expected 4) (bug)
#37470 opened Apr 12, 2025 by hpcpony
apply_chat_template() function, in particular with the chat_template = "rag"
#37469 opened Apr 12, 2025 by willxxy
support flash-attn feature in llama4 (Feature request)
#37465 opened Apr 12, 2025 by gxm651182644
Convnext image preprocessor raises an AssertionError when comparing logits (bug)
#37461 opened Apr 12, 2025 by chandrusuresh
RuntimeError: Failed to import transformers.models.bert.modeling_bert (bug)
#37459 opened Apr 11, 2025 by JaehyunsLee
[Llama 4] offloaded_hybrid fails on main w/ torch._dynamo.exc.BackendCompilerFailed (bug)
#37451 opened Apr 11, 2025 by Vaibhavs10
facebook/opt-30b Cuda Allocation Error with version >= 4.50.0 code (bug)
#37436 opened Apr 11, 2025 by inf3rnus
example with no trainer uses accelerator.end_training() in a wrong way (bug)
#37434 opened Apr 10, 2025 by we1559
ImportError: cannot import name '_flash_supports_window_size' from 'transformers.modeling_flash_attention_utils' (bug)
#37428 opened Apr 10, 2025 by mv2731
pytorch_utils.py > isin_mps_friendly > RuntimeError: Expected elements.dtype() == test_elements.dtype() to be true, but got false. (bug)
#37423 opened Apr 10, 2025 by f2janyway
How to solve the error of converting Qwen onnx_model to tensorRT_model?
#37408 opened Apr 10, 2025 by dearwind153
how to reduce original model's tokenizer vocabulary (Feature request)
#37390 opened Apr 9, 2025 by masterwang22327
Issue: Unexpected Shape of logits When Using generate() with num_return_sequences > 1 (bug)
#37378 opened Apr 8, 2025 by athmanar
Can't load Llama4 Processor (bug, Processing, Vision)
#37375 opened Apr 8, 2025 by pb-sameereddy
How to find a specific func doc when using transformers doc? (Feature request)
#37364 opened Apr 8, 2025 by habaohaba
Improve auxiliary_in_channels default behavior in UperNet (Feature request, Vision)
#37345 opened Apr 7, 2025 by simonreise
Llama4TextExperts module implementation (bug, Usage)
#37325 opened Apr 6, 2025 by Godofnothing
Lama4 scout. Any chance it could ever be in the browser? (Feature request)
#37316 opened Apr 6, 2025 by hpssjellis