DeepSeek V4 Pro
Overview
A reasoning-tuned variant of DeepSeek V4. It keeps the same MoE architecture as V4 but is optimized for emitting reasoning traces, and it is currently the open-weight benchmark leader on math and reasoning.
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Strongest open-weight coder of 2026, closing in on Claude Opus 4.6
- 1M-token context window with CSA+HCA attention
- Roughly 27% of V3.2's per-token FLOPs and about 10% of its KV-cache footprint (see the sketch after this list)
- MIT license with fully open weights
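To make the KV-cache claim concrete, here is a minimal back-of-the-envelope sketch. The layer count, head count, and head dimension below are hypothetical placeholders, not published V4 Pro figures; only the 1M-token window and the roughly-10%-of-V3.2 cache ratio come from this page, and V3.2's cache is treated as the uncompressed baseline.

```python
# Back-of-the-envelope KV-cache sizing. All architecture numbers below are
# HYPOTHETICAL placeholders; only the 1M-token window and the ~10% ratio
# vs V3.2 come from this page.

def kv_cache_bytes(seq_len: int, n_layers: int, kv_width: int,
                   bytes_per_elem: int = 2) -> int:
    """Bytes needed to cache K and V for one sequence.

    kv_width is the per-token, per-layer width of the cached state:
    2 * n_kv_heads * head_dim for plain attention (K and V), or a much
    narrower latent for a compressed scheme.
    """
    return seq_len * n_layers * kv_width * bytes_per_elem

CTX = 1_000_000                # 1M-token window (from this page)
LAYERS = 64                    # hypothetical layer count
FULL_KV = 2 * 8 * 128          # hypothetical: 8 KV heads x 128 head dim, K+V
COMPRESSED_KV = FULL_KV // 10  # ~10% footprint claimed vs V3.2

full = kv_cache_bytes(CTX, LAYERS, FULL_KV)
small = kv_cache_bytes(CTX, LAYERS, COMPRESSED_KV)
print(f"uncompressed: {full / 1e9:.0f} GB")   # ~262 GB at fp16
print(f"compressed:   {small / 1e9:.0f} GB")  # ~26 GB at fp16
```

Under these assumptions a full 1M-token cache shrinks from hundreds of gigabytes to a few tens, which is what makes the long window usable at all.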
Weaknesses
- 1.6T total parameters; runs only on a workstation cluster or cloud GPUs
- Q4_K_M is still ~920 GB on disk
- Local deployment remains research-tier only
Quantization variants
Each quantization trades model quality for a smaller file size and VRAM footprint. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 920.0 GB | 1024 GB |
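The file size follows from simple bits-per-weight arithmetic. A hedged sketch: Q4_K_M in llama.cpp averages roughly 4.8 bits per weight (a common ballpark, not a figure from this page), and at 1.6T parameters that lands in the same range as the ~920 GB listed above.

```python
# Rough file-size estimate from bits per weight. The ~4.8 bpw figure for
# Q4_K_M is a common llama.cpp ballpark, not a number from this page.
TOTAL_PARAMS = 1.6e12  # 1.6T total parameters (from this page)
BPW_Q4_K_M = 4.8       # approximate average bits per weight for Q4_K_M

size_bytes = TOTAL_PARAMS * BPW_Q4_K_M / 8
print(f"~{size_bytes / 1e9:.0f} GB on disk")  # ~960 GB, near the listed 920 GB
```

The listed 920 GB implies an effective rate closer to 4.6 bits per weight, which is plausible since not all tensors are quantized at the same precision.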
Get the model
HuggingFace
Original weights
Source repository; the weights ship unquantized, so you'll need to quantize them yourself.
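As a minimal sketch of that workflow: the snippet below downloads the repo with huggingface_hub, then shells out to llama.cpp's converter and quantizer. It assumes llama.cpp supports this architecture and that the standard convert_hf_to_gguf.py and llama-quantize tools apply; treat it as an outline, not a verified recipe, and note that the ~920 GB output needs commensurate disk space.

```python
# Outline: download original weights, convert to GGUF, quantize to Q4_K_M.
# ASSUMES llama.cpp supports this architecture; file paths are illustrative.
import subprocess
from huggingface_hub import snapshot_download

# Pull the source repo listed on this page (multi-TB of full-precision shards).
local_dir = snapshot_download(repo_id="deepseek-ai/DeepSeek-V4-Pro")

# Convert HF shards to a full-precision GGUF (script ships with llama.cpp).
subprocess.run(
    ["python", "convert_hf_to_gguf.py", local_dir,
     "--outfile", "v4-pro-f16.gguf"],
    check=True,
)

# Quantize down to Q4_K_M (~920 GB per the table above).
subprocess.run(
    ["llama-quantize", "v4-pro-f16.gguf", "v4-pro-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```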
Hardware that runs this
Cards with enough VRAM for at least one quantization of DeepSeek V4 Pro.
Models worth comparing
Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run DeepSeek V4 Pro?
The only listed quantization, Q4_K_M, needs about 1024 GB of VRAM, which puts it beyond single-card setups and into cluster or cloud territory.
Can I use DeepSeek V4 Pro commercially?
Yes. The weights are released under the MIT license, which permits commercial use.
What's the context length of DeepSeek V4 Pro?
1M tokens, using CSA+HCA attention.
Source: huggingface.co/deepseek-ai/DeepSeek-V4-Pro
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.