danielhanchen posted an update 9 days ago
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
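
For a rough sense of what the notebooks do, here is a minimal LoRA sketch using Unsloth's FastLanguageModel API (the checkpoint name and hyperparameters are illustrative assumptions; the linked notebooks have the exact recipe):

```python
# Hedged sketch: 4-bit LoRA fine-tuning with Unsloth's FastLanguageModel.
from unsloth import FastLanguageModel

# Assumption: "unsloth/gpt-oss-20b" as the checkpoint name.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit weights are what keep the footprint near 12.8GB
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```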
danielhanchen posted an update 14 days ago
We created a tool-calling guide for local LLMs!

Learn how to use any open model like Qwen3-Coder-Next and GLM-4.7-Flash for function calling.

Guide: https://unsloth.ai/docs/basics/tool-calling-guide-for-local-llms

We provide hands-on examples for: story writing, Python execution, terminal tool calls, maths and more.
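
If you want a taste before the guide, here is a minimal sketch of schema-based function calling via transformers' chat templates (the model name is an assumption; any tool-capable chat model applies):

```python
# Hedged sketch: a tool schema is derived from a plain Python function.
from transformers import AutoTokenizer

def multiply(a: float, b: float) -> float:
    """Multiply two numbers.

    Args:
        a: The first number.
        b: The second number.
    """
    return a * b

# Assumption: any chat model whose template supports tools works here.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
messages = [{"role": "user", "content": "What is 12.3 times 4.56?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[multiply],           # schema comes from the signature and docstring
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # the model's reply contains a JSON tool call to parse and execute
```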
danielhanchen posted an update 28 days ago
You can now fine-tune embedding models in our free Unsloth notebook! 🤗

Fine-tuning embedding models aligns vectors to your domain-specific notion of similarity, improving retrieval & RAG along with search, clustering, and recommendations on your data.

⭐ Blog + Notebooks: https://unsloth.ai/docs/new/embedding-finetuning

Unsloth trains embedding models 1.8-3.3x faster with 20% less VRAM, 2x longer context & no accuracy loss vs. FA2 setups.

We'd like to thank Hugging Face and Unsloth contributor electroglyph for making this possible!
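
For a sense of the underlying workflow, a minimal contrastive fine-tuning loop with the sentence-transformers trainer looks like this (the base model and toy pair are assumptions; the notebook wraps this same loop with Unsloth's speedups):

```python
# Hedged sketch: contrastive fine-tuning of an embedding model.
from datasets import Dataset
from sentence_transformers import (SentenceTransformer,
                                   SentenceTransformerTrainer, losses)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Toy (anchor, positive) pairs; in practice use your domain data.
train_dataset = Dataset.from_dict({
    "anchor":   ["how do I reset my password?"],
    "positive": ["steps to change your account password"],
})

# In-batch negatives: other positives in the batch serve as negatives.
loss = losses.MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model,
                                     train_dataset=train_dataset,
                                     loss=loss)
trainer.train()
```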
danielhanchen posted an update about 1 month ago
You can now do reinforcement learning training with 7× longer context and no accuracy loss, via our new batching algorithms.

Long reasoning chains in RL are costly, but now we enable you to train gpt-oss with GRPO & reach 380K context on a 192GB GPU.

Blog: https://unsloth.ai/docs/new/grpo-long-context
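
For orientation, a bare-bones GRPO loop with trl looks roughly like this (the tiny model and length-based toy reward are stand-in assumptions; the blog covers the actual gpt-oss setup and the batching changes):

```python
# Hedged sketch: GRPO with a toy reward function via trl.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

dataset = Dataset.from_dict({"prompt": ["Solve: 17 * 23 = ?",
                                        "Solve: 91 / 7 = ?"]})

def reward_len(completions, **kwargs):
    # Toy reward preferring ~20-char answers; replace with real correctness.
    return [-abs(len(c) - 20) for c in completions]

config = GRPOConfig(output_dir="grpo-out", max_completion_length=64)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumption: any causal LM works
    reward_funcs=reward_len,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```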
danielhanchen posted an update 3 months ago
Mistral's new Ministral 3 models can now be run and fine-tuned locally (16GB RAM)!
Ministral 3 models have vision support and best-in-class performance for their sizes.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF

🐱 Step-by-step Guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8 and other variant uploads: https://huggingface.co/collections/unsloth/ministral-3
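
To try one of the GGUFs from Python, a minimal llama-cpp-python sketch looks like this (the quant filename pattern is an assumption; check the repo's file list for the variant you want):

```python
# Hedged sketch: run a Ministral 3 GGUF locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Ministral-3-14B-Instruct-2512-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: a ~4-bit quant that fits in 16GB RAM
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one fun fact about Paris."}]
)
print(out["choices"][0]["message"]["content"])
```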