DeepSeek V3 Lite (16B MoE)
Overview
DeepSeek V3 Lite is a distillation of DeepSeek V3 into a smaller Mixture-of-Experts (MoE) model: 16B total parameters with 2.4B active per token. It retains most of V3's reasoning ability while fitting into consumer-GPU memory.
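To see why the active/total split matters, here is a back-of-the-envelope Python sketch: weight memory scales with the 16B total parameters, while per-token compute scales with the 2.4B active parameters. The ~4.5 effective bits per weight for a 4-bit quantization and the 2-FLOPs-per-active-parameter rule are rough assumptions, not measured figures.

```python
# Back-of-the-envelope MoE sizing: memory follows TOTAL params, compute follows ACTIVE params.
TOTAL_PARAMS = 16e9    # 16B total parameters (all experts), from the model card
ACTIVE_PARAMS = 2.4e9  # 2.4B parameters activated per token, from the model card

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory; ignores KV cache and runtime overhead."""
    return n_params * bits_per_weight / 8 / 1e9

def flops_per_token(n_active: float) -> float:
    """Rough decode estimate: ~2 FLOPs per active parameter per generated token (assumed)."""
    return 2 * n_active

if __name__ == "__main__":
    for bits in (16, 8, 4.5):  # FP16, 8-bit, ~Q4_K_M effective bits (assumed)
        print(f"{bits:>4} bits/weight -> ~{weight_memory_gb(TOTAL_PARAMS, bits):.1f} GB of weights")
    print(f"~{flops_per_token(ACTIVE_PARAMS) / 1e9:.1f} GFLOPs per generated token")
```

At roughly 4.5 bits per weight this lands near the 9.5 GB Q4_K_M file size listed below, while per-token compute stays at the level of a ~2.4B dense model.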
Family & lineage
DeepSeek V3 Lite descends directly from DeepSeek V3 via distillation. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- MoE efficiency at consumer-tier VRAM
- DeepSeek V3 reasoning lineage
Weaknesses
- Active params (2.4B) limit reasoning depth vs full V3
Quantization variants
Each quantization trades model quality for a smaller file size and VRAM footprint. Q4_K_M is the most popular starting point; a rough sizing sketch follows the table.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 9.5 GB | 12 GB |
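The table implies a simple sizing rule: required VRAM is roughly the GGUF file size plus headroom for the KV cache and activations. The sketch below encodes that rule; the 1.25x headroom factor is an assumption chosen to match the Q4_K_M row, not a measured figure.

```python
# Check which quantizations of DeepSeek V3 Lite should fit a given card.
QUANTS_GB = {
    "Q4_K_M": 9.5,  # file size in GB, from the table above
}

HEADROOM = 1.25  # assumed multiplier for KV cache and activations; tune for your context length

def fits(card_vram_gb: float, file_size_gb: float, headroom: float = HEADROOM) -> bool:
    """True if the quantization should fit on a card with the given VRAM."""
    return file_size_gb * headroom <= card_vram_gb

if __name__ == "__main__":
    for vram in (8, 12, 16, 24):
        ok = [name for name, size in QUANTS_GB.items() if fits(vram, size)]
        print(f"{vram:>2} GB card: {', '.join(ok) if ok else 'nothing fits'}")
```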
Get the model
HuggingFace
Original weights
Source repository hosting the original (unquantized) weights; you will need to quantize them yourself.
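A minimal download sketch using huggingface_hub, assuming the repo id from the source link at the bottom of this page; whether that repo is public or gated is not verified here, and the downloaded weights would still need to be quantized before the file size in the table above applies.

```python
# Download the original (unquantized) weights from Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3-Lite",  # repo id taken from this page's source link (assumed)
    local_dir="./deepseek-v3-lite",          # where the original weights will be stored
)
print(f"Weights downloaded to {local_dir}")
```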
Hardware that runs this
Cards with enough VRAM for at least one quantization of DeepSeek V3 Lite (16B MoE).
Models worth comparing
Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run DeepSeek V3 Lite (16B MoE)?
Going by the quantization table above, the Q4_K_M build needs about 12 GB of VRAM, which is the practical minimum listed here.
Can I use DeepSeek V3 Lite (16B MoE) commercially?
What's the context length of DeepSeek V3 Lite (16B MoE)?
Source: huggingface.co/deepseek-ai/DeepSeek-V3-Lite
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.