Pinned
- text-generation-webui (Public): A Gradio web UI for Large Language Models with support for multiple inference backends.
873 contributions in the last year
Contribution activity
April 2025
Created 3 repositories
- oobabooga/llama-cpp-binaries (Python), created on Apr 16
- oobabooga/llama-cpp-wrapper, created on Apr 15
- oobabooga/exllamav3 (Python), created on Apr 7
Created a pull request in turboderp-org/exllamav3 that received 2 comments
Fix a tokenizer issue while converting models without a bos_token_id defined
I had to make this change to be able to convert arcee-ai/Virtuoso-Small, which is of architecture Qwen2ForCausalLM and has this in its tokenizer_co…
+2 −2 lines changed • 2 comments
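The pull request above concerns checkpoints whose config leaves bos_token_id undefined. As a rough illustration of the kind of guard involved (this is not the actual exllamav3 change; resolve_bos_token_id is a hypothetical helper), a conversion script might fall back like this:

```python
# Minimal sketch, assuming the transformers library: guard against a missing
# bos_token_id when reading a model's config during conversion.
from transformers import AutoConfig, AutoTokenizer

def resolve_bos_token_id(model_dir: str):
    """Return a BOS token id, or None if the model defines none
    (e.g. some Qwen2ForCausalLM checkpoints such as arcee-ai/Virtuoso-Small)."""
    config = AutoConfig.from_pretrained(model_dir)
    bos_id = getattr(config, "bos_token_id", None)
    if bos_id is None:
        # Fall back to the tokenizer, which may also leave BOS undefined.
        tokenizer = AutoTokenizer.from_pretrained(model_dir)
        bos_id = tokenizer.bos_token_id  # may still be None
    return bos_id
```

Downstream code would then treat a None result as "do not prepend a BOS token" rather than crashing on a missing attribute.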
Opened 7 other pull requests in 1 repository
oobabooga/text-generation-webui: 6 merged, 1 closed
- Merge dev branch, opened on Apr 18
- Merge dev branch, opened on Apr 18
- New llama.cpp loader, opened on Apr 16
- New llama.cpp backend, opened on Apr 15
- Merge dev branch, opened on Apr 9
- Set context lengths to at most 8192 by default (to prevent out of memory errors), opened on Apr 8 (see the sketch after this list)
- Add ExLlamaV3 support, opened on Apr 6
Created an issue in turboderp-org/exllamav3 that received 2 comments
cogito-v1-preview-qwen-32B generates incoherent output for bpw <= 2.5
I have quantized deepcogito/cogito-v1-preview-qwen-32B for tests, and found that the following bpw worked perfectly, with no noticeable performance…
2 comments