DJ Sri Vigneshwar

Sri-Vigneshwar-DJ

AI & ML interests

Currently building Hawky.ai - Creative Intelligence for Performance Marketing

Organizations

AI FILMS, GEM benchmark, MusicAI, OpenVINO Toolkit, Open-Source AI Meetup, East China Normal University, AI Zero to Hero, Stable Diffusion Dreambooth Concepts Library, Blog-explorers, AI Tamil Nadu, LocalLLaMA, MLX Community, C4AI Community, M4-ai, Chinese LLMs on Hugging Face, Paris AI Running Club, Hawky.ai - The Creative Analytics Platform, Hawky.ai - Fine-tuned Language and Creative Generation Models (MarTech), Intelligent Estate, open/acc, Data Is Better Together Contributor, Arracle AI

Sri-Vigneshwar-DJ's activity

posted an update 1 day ago
Check out phi-4 from Microsoft, which dropped a day ago. If you ❤️ the Phi series, here is the GGUF: Sri-Vigneshwar-DJ/phi-4-GGUF. phi-4 is a highly efficient 14B open LLM that beats much larger models at math and reasoning; check out its evaluations on the Open LLM Leaderboard.

Technical paper: https://arxiv.org/pdf/2412.08905. The data synthesis approach is interesting.
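If you want to try the GGUF locally, here is a minimal sketch using huggingface_hub and llama-cpp-python; the quant filename below is a placeholder, so check the repo's file list for the exact name:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the GGUF repo (filename is a placeholder).
model_path = hf_hub_download(
    repo_id="Sri-Vigneshwar-DJ/phi-4-GGUF",
    filename="phi-4-Q4_K_M.gguf",  # replace with an actual file from the repo
)

# Load the model and run a short reasoning prompt.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```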
reacted to cfahlgren1's post with 🔥 1 day ago
Wow, I just added Langfuse tracing to the Deepseek Artifacts app and it's really nice 🔥

It allows me to visualize and track more things along with the cfahlgren1/react-code-instructions dataset.

It was just added as a one click Docker Space template, so it's super easy to self host 💪
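For reference, instrumenting an app with Langfuse is roughly this light. A minimal sketch with the Langfuse Python SDK's @observe decorator; the function and model call here are illustrative, not the Artifacts app's actual code, and it assumes the LANGFUSE_* and OPENAI_API_KEY environment variables are set:

```python
from langfuse.decorators import observe, langfuse_context
from openai import OpenAI

client = OpenAI()  # any LLM client works similarly

@observe()  # records inputs, outputs, and latency as a trace in Langfuse
def generate_component(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    langfuse_context.update_current_trace(tags=["react-artifacts"])  # illustrative tag
    return response.choices[0].message.content

print(generate_component("Build a React counter component"))
```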
posted an update 5 days ago
Just sharing a thought: I started using DeepSeek V3 a lot, and an idea struck me about agents "orchestrating during inference" on a test-time compute model like DeepSeek V3 or the O1 series.

Agents (instructions + function calls + memory) execute during inference, and based on each intermediate output the orchestrator decides whether to scale up the time spent reasoning or to move on to other tasks.
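A minimal sketch of what that inference-time orchestration loop could look like; the confidence heuristic, endpoint, and model name are assumptions for illustration, not an actual DeepSeek or o1 integration:

```python
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible endpoint; key and URL are placeholders.
client = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")

def confidence(answer: str) -> float:
    """Toy heuristic: treat hedged answers as low confidence."""
    return 0.3 if any(w in answer.lower() for w in ("maybe", "not sure")) else 0.9

def solve(task: str, max_rounds: int = 3) -> str:
    answer = ""
    for round_idx in range(max_rounds):
        # Scale reasoning effort each round by asking for more explicit steps.
        prompt = f"{task}\n\nThink step by step. Reasoning depth: {round_idx + 1}."
        answer = client.chat.completions.create(
            model="deepseek-chat",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        if confidence(answer) > 0.8:
            break  # output looks solid: stop spending test-time compute
    return answer
```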
posted an update 7 days ago
Combining smolagents with Anthropic’s best practices simplifies building powerful AI agents:

1. Code-Based Agents: Write actions as Python code, reducing steps by 30%.
2. Prompt Chaining: Break tasks into sequential subtasks with validation gates.
3. Routing: Classify inputs and direct them to specialized handlers.
4. Fallback: Handle tasks even if classification fails.

https://huggingface.co/blog/Sri-Vigneshwar-DJ/building-effective-agents-with-anthropics-best-pra
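A minimal sketch of pattern 1 with smolagents, using the library's CodeAgent, HfApiModel, and built-in search tool; the model ID and task are just examples (see the blog post above for the full set of patterns):

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# A code-based agent: actions are emitted as Python snippets instead of JSON tool calls.
model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")  # any capable instruct model
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

result = agent.run("How many seconds are there in a leap year?")
print(result)
```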
reacted to as-cle-bert's post with 🔥 7 days ago
🎉𝐄𝐚𝐫𝐥𝐲 𝐍𝐞𝐰 𝐘𝐞𝐚𝐫 𝐫𝐞𝐥𝐞𝐚𝐬𝐞𝐬🎉

Hi HuggingFacers🤗, I decided to ship early this year, and here's what I came up with:

𝐏𝐝𝐟𝐈𝐭𝐃𝐨𝐰𝐧 (https://github.com/AstraBert/PdfItDown) - If you're like me and your whole RAG pipeline is optimized for PDFs but not for other data formats, here is your solution! With PdfItDown, you can convert Word documents, presentations, HTML pages, Markdown sheets and (why not?) CSVs and XMLs to PDF, for seamless integration with your RAG pipelines. Built upon MarkItDown by Microsoft.
GitHub Repo 👉 https://github.com/AstraBert/PdfItDown
PyPi Package 👉 https://pypi.org/project/pdfitdown/

𝐒𝐞𝐧𝐓𝐫𝐄𝐯 𝐯𝟏.𝟎.𝟎 (https://github.com/AstraBert/SenTrEv/tree/v1.0.0) - If you need to evaluate the 𝗿𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹 performance of your 𝘁𝗲𝘅𝘁 𝗲𝗺𝗯𝗲𝗱𝗱𝗶𝗻𝗴 models, I have good news for you🥳🥳
The new release for 𝐒𝐞𝐧𝐓𝐫𝐄𝐯 now supports 𝗱𝗲𝗻𝘀𝗲 and 𝘀𝗽𝗮𝗿𝘀𝗲 retrieval (thanks to FastEmbed by Qdrant) with 𝘁𝗲𝘅𝘁-𝗯𝗮𝘀𝗲𝗱 𝗳𝗶𝗹𝗲 𝗳𝗼𝗿𝗺𝗮𝘁𝘀 (.docx, .pptx, .csv, .html, .xml, .md, .pdf) and new 𝗿𝗲𝗹𝗲𝘃𝗮𝗻𝗰𝗲 𝗺𝗲𝘁𝗿𝗶𝗰𝘀!
GitHub repo 👉 https://github.com/AstraBert/SenTrEv
Release Notes 👉 https://github.com/AstraBert/SenTrEv/releases/tag/v1.0.0
PyPi Package 👉 https://pypi.org/project/sentrev/

Happy New Year and have fun!🥂
reacted to AdinaY's post with 👀 about 1 month ago
Sailor 2 🚢 open multilingual model for Southeast Asia by Sea AI Lab🔥
https://huggingface.co/sailor2
sail/Sailor2-20B-Chat

✨ Fully open code & ALL datasets 🙌
✨ 1B/8B/20B base & chat models, expanded from Qwen2.5
✨ Apache 2.0
✨ Supports 15 languages including English, Chinese, Burmese, Cebuano, Ilocano, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tagalog, Thai, Vietnamese, and Waray🇬🇧🇨🇳🇱🇦🇲🇾🇲🇲🇻🇳🇹🇭
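A quick way to try one of the chat variants with transformers; a minimal sketch assuming standard chat-template support, using the smaller 1B variant to keep the download light (the post itself links sail/Sailor2-20B-Chat):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sail/Sailor2-1B-Chat"  # assumed 1B chat variant; swap in sail/Sailor2-20B-Chat
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Please introduce yourself in Thai."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```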
reacted to qq8933's post with 🚀 about 1 month ago
reacted to MonsterMMORPG's post with 🔥 about 2 months ago
FLUX Redux is a hidden gem

I am still doing extensive research so I can publish a full, public (no paywall) tutorial, but this was generated via SwarmUI with the settings below.

Style Model Merge Strength : 0.5

FLUX Guidance Scale is : 6

The base model is my own FLUX fine-tune, trained on 256 images for 70 epochs via the Kohya SS GUI, as shown in this tutorial: https://youtu.be/FvpWy1x5etM

Prompt : anime ohwx man walking in a jungle <segment:yolo-face_yolov9c.pt-1,0.7,0.5> ohwx man, anime
reacted to ZennyKenny's post with 👍 about 2 months ago
I've joined the Bluesky community. Interested to see what decentralized social media looks like in action: https://bsky.app/profile/kghamilton.bsky.social

Looking forward to following other AI builders, tech enthusiasts, goth doomscrollers, and ironic meme creators.
reacted to openfree's post with 🔥 about 2 months ago
🎉 Reached HuggingFace Trending Top 100 in Just One Day! Introducing Mouse-I

First, we want to thank everyone who helped Mouse-I reach the HuggingFace Spaces Trending Top 100! We're especially excited that a game called "Jewel Pop Game," created using Mouse-I, has reached the global top 160.
With this overwhelming response, we're thrilled to introduce Mouse-I, an AI-powered code generation and automatic deployment tool by Bidraft.

✨ What is Mouse-I?
Mouse-I is an innovative tool that automatically generates and deploys working web services within 60 seconds, simply based on your prompt input.

🚀 Key Features

One-Click Real-time Deployment: Complete from prompt to deployment in just 60 seconds
Real-time Preview: Instantly check your generated code results
40+ Templates: Ready-to-use templates including MBTI tests, investment management tools, Tetris games, and more
Real-time Editing: Instantly modify and apply generated code

⚡ How to Use
Create your own web service in just 3 steps:

Enter your prompt (15 seconds)
Code generation (40 seconds)
Deploy (5 seconds)

🌟 What Makes Us Special

Ultra-fast code generation powered by NVIDIA H100 GPUs
Advanced multi-LLM complex agent technology
All generated web apps available for free viewing and use in our marketplace

🔍 Current Status

Over 3,000 web apps generated, with 160+ successfully deployed
30x faster service completion compared to competing services

🎈 Join Our Beta Test
Try Mouse-I for free right now!
👉 Experience Mouse-I
🔮 Future Plans
We're planning to launch 'Mouse-II', specialized for backend system development, within this year. When used together with Mouse-I, it will enable complete automation of full-stack development.

We look forward to your feedback and suggestions about Mouse-I!
Thank you for your interest and support 🙏
#AI #CodeGeneration #WebDevelopment #HuggingFace #MouseI #Bidraft #AICodeAssistant
https://huggingface.co/spaces/VIDraft/mouse1

reacted to etemiz's post with 🔥 about 2 months ago
If I host in HF Spaces, can I interact with the app using an API?
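For Gradio-based Spaces the short answer is generally yes. A minimal sketch with the gradio_client package; the Space ID, argument, and endpoint name are hypothetical and vary per app:

```python
from gradio_client import Client

# Public Gradio Spaces expose a programmatic API alongside the UI.
client = Client("some-user/some-space")   # hypothetical Space ID
print(client.view_api())                  # list the callable endpoints

result = client.predict("Hello!", api_name="/predict")  # endpoint name varies per app
print(result)
```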
reacted to vilarin's post with 🔥 about 2 months ago
🏄‍♂️While browsing new models, I stumbled upon Lumiere from aixonlab. After testing it, I feel it has considerable potential. Keep up the good work!

Lumiere Alpha is a model focusing on improving realism without compromising prompt coherency or changing the composition completely from the original Flux.1-Dev model.

🦄 Model: aixonlab/flux.1-lumiere-alpha

🦖 Demo: vilarin/lumiere
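A minimal sketch for trying the checkpoint locally with Diffusers, assuming the repo is published in standard FluxPipeline format; the dtype, offload, and sampling settings are illustrative:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "aixonlab/flux.1-lumiere-alpha",  # assumes a Diffusers-format repo
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps fit consumer GPUs

image = pipe(
    "photorealistic portrait of a violinist on a rainy street at dusk",
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("lumiere_sample.png")
```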
reacted to mgubri's post with 🔥 about 2 months ago
🎉 We’re excited to announce, in collaboration with @kaleidophon , the release of the models from our Apricot 🍑 paper, "Apricot: Calibrating Large Language Models Using Their Generations Only," accepted at ACL 2024! Reproducibility is essential in science, and we've worked hard to make it as seamless as possible.
parameterlab/apricot-models-673d2cae40b6ff437a86f0bf
reacted to jsulz's post with 🔥 about 2 months ago
When the XetHub crew joined Hugging Face this fall, @erinys and I started brainstorming how to share our work to replace Git LFS on the Hub. Uploading and downloading large models and datasets takes precious time. That’s where our chunk-based approach comes in.

Instead of versioning files (like Git and Git LFS), we version variable-sized chunks of data. For the Hugging Face community, this means:

⏩ Only upload the chunks that changed.
🚀 Download just the updates, not the whole file.
🧠 We store your file as deduplicated chunks

In our benchmarks, we found that using content-defined chunking (CDC) to store iterative model and dataset versions led to transfer speedups of ~2x, but this isn't just a performance boost. It's a rethinking of how we manage models and datasets on the Hub.

We're planning to roll out our new storage backend on the Hub in early 2025 - check out our blog to dive deeper, and let us know: how could this improve your workflows?

https://huggingface.co/blog/from-files-to-chunks
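To make the chunking idea concrete, here is a toy sketch of content-defined chunking in Python; the windowed hash, mask, and size thresholds are illustrative only, not the Hub's actual scheme:

```python
import hashlib

def cdc_chunks(data: bytes, window: int = 16, mask: int = 0x0FFF,
               min_size: int = 256, max_size: int = 4096):
    """Yield (sha256, chunk) pairs whose boundaries are chosen by the content itself."""
    start = 0
    for i in range(len(data)):
        size = i - start + 1
        # Hash only the last `window` bytes: boundaries depend on local content,
        # so an edit early in the file does not shift every later boundary.
        h = int.from_bytes(
            hashlib.sha256(data[max(i - window + 1, 0):i + 1]).digest()[:4], "big"
        )
        if (size >= min_size and (h & mask) == 0) or size >= max_size or i == len(data) - 1:
            chunk = data[start:i + 1]
            yield hashlib.sha256(chunk).hexdigest(), chunk
            start = i + 1

# Identical chunks are stored once, keyed by hash; edits only touch nearby chunks.
data = b"example model weights " * 5_000  # stand-in for a large file
store = {digest: chunk for digest, chunk in cdc_chunks(data)}
print(f"{len(data)} bytes stored as {len(store)} unique chunks")
```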
reacted to fdaudens's post with 🚀 about 2 months ago
🚀 @Qwen just dropped 2.5-Turbo!

1M token context (that's the entire "War and Peace"!) + 4.3x faster processing speed. Same price, way more power 🔥

Check out the demo: Qwen/Qwen2.5-Turbo-1M-Demo

#QWEN
reacted to AdinaY's post with 🚀🔥 about 2 months ago
Let’s dive into the exciting releases from the Chinese community last week 🔥🚀
More details 👉 https://huggingface.co/zh-ai-community

Code model:
✨Qwen 2.5 coder by Alibaba Qwen
Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f
✨OpenCoder by InflyAI - Fully open code model🙌
infly/opencoder-672cec44bbb86c39910fb55e

Image model:
✨Hunyuan3D-1.0 by Tencent
tencent/Hunyuan3D-1

MLLM:
✨JanusFlow by DeepSeek
deepseek-ai/JanusFlow-1.3B
✨Mono-InternVL-2B by OpenGVlab
OpenGVLab/Mono-InternVL-2B

Video model:
✨CogVideoX 1.5 by ChatGLM
THUDM/CogVideoX1.5-5B-SAT

Audio model:
✨Fish Agent by FishAudio
fishaudio/fish-agent-v0.1-3b

Dataset:
✨OPI dataset by BAAI (Beijing)
BAAI/OPI
reacted to cfahlgren1's post with 🔥 about 2 months ago
Why use Google Drive when you can have:

• Free storage with generous limits🆓
• Dataset Viewer (Sorting, Filtering, FTS) 🔍
• Third Party Library Support
• SQL Console 🟧
• Security 🔒
• Community, Reach, and Visibility 📈

It's a no brainer!

Check out our post on what you get instantly out of the box when you create a dataset.
https://huggingface.co/blog/researcher-dataset-sharing
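As one example of what you get out of the box, DuckDB can query a Hub dataset's Parquet files directly over the hf:// protocol; a minimal sketch, with the dataset path left as a placeholder:

```python
import duckdb

# Query a hosted dataset without downloading it first (requires a recent duckdb).
rows = duckdb.sql("""
    SELECT *
    FROM 'hf://datasets/some-user/some-dataset/data/train-00000-of-00001.parquet'
    LIMIT 5
""").fetchall()
print(rows)
```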
reacted to pagezyhf's post with 👍🔥 about 2 months ago
Hello Hugging Face Community,

I'd like to share a bit more about the Deep Learning Containers (DLCs) we built with Google Cloud to transform the way you build AI with open models on that platform!

With pre-configured, optimized environments for PyTorch Training (GPU) and Inference (CPU/GPU), Text Generation Inference (GPU), and Text Embeddings Inference (CPU/GPU), the Hugging Face DLCs offer:

⚡ Optimized performance on Google Cloud's infrastructure, with TGI, TEI, and PyTorch acceleration.
🛠️ Hassle-free environment setup, no more dependency issues.
🔄 Seamless updates to the latest stable versions.
💼 Streamlined workflow, reducing dev and maintenance overheads.
🔒 Robust security features of Google Cloud.
☁️ Fine-tuned for optimal performance, integrated with GKE and Vertex AI.
📦 Community examples for easy experimentation and implementation.
🔜 TPU support for PyTorch Training/Inference and Text Generation Inference is coming soon!

Find the documentation at https://huggingface.co/docs/google-cloud/en/index
If you need support, open a conversation on the forum: https://discuss.huggingface.co/c/google-cloud/69
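For a rough idea of the workflow, deploying a model with one of these containers on Vertex AI looks like the sketch below; the container URI, model ID, project, and machine settings are placeholders, so see the documentation above for the exact values:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Upload a model backed by a Hugging Face TGI DLC (image URI is a placeholder).
model = aiplatform.Model.upload(
    display_name="tgi-open-model",
    serving_container_image_uri="<hugging-face-tgi-dlc-uri>",
    serving_container_environment_variables={"MODEL_ID": "some-org/some-open-model"},
)

# Deploy to a GPU-backed endpoint.
endpoint = model.deploy(
    machine_type="g2-standard-4",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
print(endpoint.resource_name)
```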