⚡ Local LLM Chat — Transformers.js v4
Hello! Select a model and click "Initialize Model" to start chatting locally in your browser.
Send
SmolLM2-135M (~270MB - Fast)
Qwen2.5-0.5B (~950MB - Balanced)
DeepSeek-R1-1.5B (~1.2GB - Heavy)
🚀 Initialize Model
Ready. Note: the first load downloads the model weights to your browser cache (roughly 270MB to 1.2GB, depending on the model selected); later loads reuse the cache.
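For reference, the "Initialize Model" step above can be sketched with the Transformers.js `pipeline` API. This is a minimal illustration, not the app's actual source: the mapping from dropdown labels to Hugging Face model IDs is an assumption (the repo names shown are guesses), and the library is loaded lazily so nothing downloads until the user clicks Initialize.

```javascript
// Hypothetical mapping from the dropdown labels to Hugging Face model IDs.
// These repo names are illustrative assumptions, not confirmed by the app.
const MODELS = {
  'SmolLM2-135M (~270MB - Fast)': 'HuggingFaceTB/SmolLM2-135M-Instruct',
  'Qwen2.5-0.5B (~950MB - Balanced)': 'onnx-community/Qwen2.5-0.5B-Instruct',
  'DeepSeek-R1-1.5B (~1.2GB - Heavy)': 'onnx-community/DeepSeek-R1-Distill-Qwen-1.5B',
};

async function initModel(label) {
  // Lazy-load the library so nothing is fetched before "Initialize" is clicked.
  const { pipeline } = await import('@huggingface/transformers');
  // The first call streams the weights into the browser cache;
  // subsequent initializations reuse the cached files.
  return pipeline('text-generation', MODELS[label]);
}
```

A click handler would then do something like `const generator = await initModel(dropdown.value);` and pass chat messages to `generator(...)`.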