🚀 Major Updates to HF Daily Paper Newsletter Bot
I'm excited to announce significant improvements to my HF Daily Paper Newsletter Bot! Here are the key updates:
🖼️ Enhanced Poster Generation
- Implemented dynamic height adjustment for daily paper posters (see the sketch below)
- Added support for displaying complete paper content without truncation
- Improved Chinese font rendering and text layout
- Integrated the Hugging Face logo for better branding
- Enhanced visual aesthetics with optimized card layouts
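A minimal sketch of the dynamic-height idea, assuming Pillow is used for rendering (function and style names here are illustrative, not the bot's actual code): measure the text first, then size the canvas to fit so nothing gets truncated.

```python
from PIL import Image, ImageDraw, ImageFont

def render_poster(lines: list[str], font: ImageFont.FreeTypeFont,
                  width: int = 1080, pad: int = 40, spacing: int = 12) -> Image.Image:
    # Measure line height with a throwaway draw context before sizing the canvas.
    probe = ImageDraw.Draw(Image.new("RGB", (1, 1)))
    line_h = probe.textbbox((0, 0), "Ag", font=font)[3]
    height = int(pad * 2 + len(lines) * (line_h + spacing))

    # Canvas height now depends on the content, so long papers are never cut off.
    img = Image.new("RGB", (width, height), "#FFF8F0")
    draw = ImageDraw.Draw(img)
    y = pad
    for line in lines:
        draw.text((pad, y), line, font=font, fill="#222222")
        y += line_h + spacing
    return img
```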
📝 Content Improvements
- Removed the paper count limitation (previously capped at 5 papers)
- Enhanced title and summary extraction algorithms
- Improved text wrapping and spacing for better readability (sketch below)
- Added proper handling of long content with automatic layout adjustments
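For the wrapping bullet above, a hedged sketch of pixel-width wrapping, which handles Chinese text where whitespace-based `textwrap` fails; it assumes Pillow and is not necessarily how the bot does it:

```python
from PIL import Image, ImageDraw, ImageFont

def wrap_text(text: str, font, max_width: int, draw) -> list[str]:
    """Greedy per-character wrapping; suits Chinese, which has no word spaces."""
    lines, current = [], ""
    for ch in text:
        if draw.textlength(current + ch, font=font) <= max_width:
            current += ch
        else:
            lines.append(current)
            current = ch
    if current:
        lines.append(current)
    return lines

# Usage: wrapped lines can feed straight into the height calculation above.
draw = ImageDraw.Draw(Image.new("RGB", (1, 1)))
font = ImageFont.load_default()
print(wrap_text("A long paper title that needs wrapping across lines", font, 200, draw))
```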
🛠️ Technical Enhancements
- Implemented a better font loading mechanism with fallback options (sketch below)
- Added support for multiple Chinese font paths
- Improved error handling and logging
- Enhanced memory management for image processing
- Added detailed debugging information
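The font fallback might look roughly like this; the candidate paths are assumptions, so substitute whatever CJK fonts exist on your runner:

```python
from pathlib import Path
from PIL import ImageFont

# Hypothetical candidate paths; the bot's actual font list may differ.
CJK_FONT_CANDIDATES = [
    "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc",
    "/usr/share/fonts/truetype/wqy/wqy-microhei.ttc",
    "fonts/NotoSansSC-Regular.otf",
]

def load_cjk_font(size: int) -> ImageFont.ImageFont:
    """Try each candidate Chinese font path, falling back to PIL's default."""
    for path in CJK_FONT_CANDIDATES:
        if Path(path).exists():
            try:
                return ImageFont.truetype(path, size)
            except OSError:
                continue  # unreadable or corrupt font file; try the next one
    return ImageFont.load_default()
```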
🖌️ Visual Design Updates
- Refined color scheme with HF brand colors
- Improved card spacing and padding
- Enhanced typography with better font sizing
- Added smooth transitions between paper cards
- Optimized overall layout for better visual hierarchy
🔧 Infrastructure Updates
- Improved GitHub Actions workflow reliability
- Enhanced error notification system
- Added automatic retries for API calls (sketch below)
- Improved logging and debugging capabilities
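And a hedged sketch of the retry logic, assuming `requests` plus simple exponential backoff; the bot's real implementation may differ:

```python
import time
import requests

def fetch_with_retries(url: str, retries: int = 3, backoff: float = 2.0) -> requests.Response:
    """GET a URL, retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            return resp
        except requests.RequestException:
            if attempt == retries - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(backoff ** attempt)
```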
The bot now generates more professional and visually appealing daily paper summaries while ensuring complete content display. These updates make the newsletter more readable and informative for our users.
Try it out and let me know what you think! Your feedback helps me make continuous improvements to better serve the AI research community.
'Can it run DeepSeek V3 671B?' is the new 'Can it run Doom?'
How minimal can on-device AI get with behemoth models? Here I'm running the DeepSeek V3 MoE on a single A6000 GPU.
Not great, not terrible for such a minimal setup. I love Mixture of Experts architectures. Typically I run my core LLM distributed across my 4 GPUs.
Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
We had a few people asking about the differences and methodologies behind our addition to the open-image-preferences dataset. So my colleague and I wrote a blog post about it using the new Hugging Face blog functionality: https://huggingface.co/blog/RapidataAI/more-image-preferences
Major update on the Talking to Chatbots dataset! Expanded the 'wrapped' dataset (one row per chat) to 2.86k records, and the 'unwrapped' version (one row per conversation turn) to 11k records. The main source is my ChatGPT archive with nearly 2 years of chats. It is still a work in progress as I incorporate chats from other sources and qualitative metrics (SCBN) for responses.
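To make the wrapped/unwrapped distinction concrete, here is a toy sketch with the `datasets` library; the column names are illustrative assumptions, not the dataset's actual schema:

```python
from datasets import Dataset

# One row per chat ("wrapped"); each chat holds a list of turns.
wrapped = Dataset.from_list([
    {"chat_id": "c1", "turns": [{"role": "user", "text": "Hi"},
                                {"role": "assistant", "text": "Hello!"}]},
])

# One row per conversation turn ("unwrapped").
unwrapped = Dataset.from_list([
    {"chat_id": row["chat_id"], "turn": i, **turn}
    for row in wrapped
    for i, turn in enumerate(row["turns"])
])
print(len(wrapped), len(unwrapped))  # 1 chat -> 2 turn rows
```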
🎯 Fine-tuning SmolLM2 on a lightweight synthetic reasoning dataset for reasoning-specific tasks. Future updates will focus on lightweight, blazing-fast reasoning models. Until then, check out the blog for fine-tuning details.
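For readers who want the gist before reading the blog, a minimal supervised fine-tuning sketch with TRL might look like the following; the dataset id is a placeholder and the hyperparameters are assumptions, not the exact recipe from the post:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset id; swap in the actual synthetic reasoning mix.
dataset = load_dataset("username/synthetic-reasoning-sft", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",  # one of the SmolLM2 checkpoints
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="smollm2-reasoning",
        per_device_train_batch_size=4,
        num_train_epochs=1,
    ),
)
trainer.train()
```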
📣 Looking for labeled, high-quality synthetic audio/TTS data 📣 Have you been (or are you currently) calling API endpoints from OpenAI, ElevenLabs, etc.? Do you have labeled audio data sitting around gathering dust? Let's talk! Join https://discord.gg/QuGxSWBfQy or comment down below.
If your data exceeds quantity & quality thresholds and is approved into the next hexgrad/Kokoro-82M training mix, and you permissively DM me the data under an effective Apache license, then I will DM back the corresponding voicepacks for YOUR data if/when the next Apache-licensed Kokoro base model drops.
What does this mean? If you've been calling closed-source TTS or audio API endpoints to:
- Build voice agents
- Make long-form audio, like audiobooks or podcasts
- Handle customer support, etc.

then YOU can contribute to the training mix and get useful artifacts in return. ❤️
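For a rough sense of what "labeled" means here, such data is usually audio clips paired with transcripts and metadata; this is an illustrative sketch, not a schema required for the Kokoro mix:

```python
# Illustrative shape for labeled synthetic TTS data; field names are
# assumptions, not a format required by hexgrad/Kokoro-82M.
samples = [
    {"audio": "clips/0001.wav", "text": "Hello there!", "voice": "narrator_f", "source": "elevenlabs"},
    {"audio": "clips/0002.wav", "text": "Chapter one begins.", "voice": "narrator_m", "source": "openai"},
]
```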
All the responses get saved in the cfahlgren1/react-code-instructions dataset. Hopefully we can build one of the biggest, highest-quality frontend datasets on the Hub 💪
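If you want to explore the collected responses, the dataset should load with the standard one-liner (split name assumed):

```python
from datasets import load_dataset

ds = load_dataset("cfahlgren1/react-code-instructions", split="train")  # split name assumed
print(ds[0])
```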
Probably most of you already know this trick, but just in case: 🤔 Unable to connect to Hugging Face Spaces Dev Mode through local Cursor? 💡 Don't worry, there's an easy trick!
- Right-click "Connect with VS Code" and copy the link in your browser: `vscode://vscode-remote/...`
- Replace `vscode` with `cursor` and go: `cursor://vscode-remote/...`
In this article, I share my latest Gen AI and LLM advances, featuring innovative approaches radically different from both standard AI and classical ML/NLP. The focus is on doing better with less, using efficient architectures, new algorithms, and new evaluation metrics. It originates from research I started long ago, which gained significant momentum in the last two years. See background and history at https://mltblog.com/4g2sKTv.
OpenAI, Perplexity, Anthropic, Llama, and others typically follow the trend, implementing solutions very similar to mine within 3 to 6 months after I publish new milestones: for instance, multi-tokens, knowledge graph tokens, multi-indexes, real-time fine-tuning, mixtures of experts, LLM routers, small enterprise sub-LLMs, prompt distillation, a relevancy scoring engine, deep contextual retrieval, optimum agentic chunking, and a modern UI instead of the basic prompt box. I keep adding new features all the time, staying ahead of the competition.