QLoRA-ready Coding Models Collection. For finetuning; a GPU is needed for both quantization and inference. • 15 items • Updated about 23 hours ago
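A minimal sketch of how a model from this collection could be loaded in 4-bit for QLoRA finetuning, assuming the Hugging Face transformers/peft/bitsandbytes stack and a CUDA GPU; the model id is a placeholder, not a specific item from the collection.

```python
# QLoRA setup sketch (assumes transformers, peft, bitsandbytes installed and a CUDA GPU;
# the model id below is a placeholder, swap in a model from the collection).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "your-org/your-coding-model"  # placeholder

# 4-bit NF4 quantization config; the bitsandbytes kernels require a GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters on top of the frozen 4-bit base weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```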
DeepSeek v3 achieves a solid 7-point jump over v2.5, surpassing GPT-4o, but still trails o1 and Claude 3.5. onekq-ai/WebApp1K-models-leaderboard
The October version of Claude 3.5 lifts the SOTA (set by its June version) by 7 points. onekq-ai/WebApp1K-models-leaderboard Closed-source models are widening the gap again. Note: our frontier leaderboard now uses double test scenarios because the single-scenario test suite has been saturated.
Ollama-ready Coding Models Collection. For inference; a CPU is enough for both quantization and inference. • 14 items • Updated Oct 19, 2024
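A minimal sketch of CPU-only inference against one of the Ollama-ready models, assuming the official `ollama` Python client is installed and the model has already been pulled locally; the model name is a placeholder.

```python
# Ollama inference sketch (assumes the `ollama` Python package and a locally
# pulled model; runs on CPU, no GPU required).
import ollama

response = ollama.chat(
    model="your-coding-model",  # placeholder: use a model name from the collection
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(response["message"]["content"])
```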