DeepSeek R1 Distill Mistral 24B
Overview
A community-made distillation of DeepSeek R1 onto the Mistral Small 3 24B base. Released under Apache 2.0, it pairs R1-style chain-of-thought reasoning with Mistral's instruction-following polish.
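For a first smoke test, here is a minimal inference sketch using the standard transformers API. The repo id is taken from the source link at the bottom of this page and is unverified; adjust it to whichever mirror you actually pull from.

```python
# Minimal inference sketch. The repo id below comes from this page's source
# link and is an unverified assumption; swap in the repo you actually use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "community/DeepSeek-R1-Distill-Mistral-24B"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # bf16 needs ~48 GB; see the quantizations below
    device_map="auto",
)

messages = [{"role": "user", "content": "How many primes are there below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# R1-style distills emit a <think>...</think> trace before the final answer,
# so leave generous headroom in max_new_tokens.
out = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.6)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```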
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tuning relationships.
Strengths
- Apache 2.0 reasoning model
- Mistral instruction-following base
Weaknesses
- Community distill, so less widely validated than DeepSeek's official Qwen and Llama distills
Quantization variants
Each quantization trades output quality for a smaller file and lower VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 14.0 GB | 18 GB |
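The table's figures line up with a back-of-envelope estimate: Q4_K_M averages roughly 4.8 bits per weight, and a loaded model needs a few extra gigabytes for the KV cache and runtime buffers. The overhead constants in the sketch below are assumptions for illustration, not measured values.

```python
# Back-of-envelope check of the table above. Q4_K_M averages roughly
# 4.8 bits per weight (mixed 4/6-bit blocks); the overhead terms are
# rough assumptions, not measured values.
def quant_footprint_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate quantized file size in GB."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

file_gb = quant_footprint_gb(24, 4.8)  # ~14.4 GB, close to the ~14.0 GB row
vram_gb = file_gb + 2.0 + 1.5          # + KV cache + runtime buffers (assumed)
print(f"file ~{file_gb:.1f} GB, VRAM ~{vram_gb:.1f} GB")  # ~14.4 / ~17.9 GB
```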
Get the model
HuggingFace
Original weights
Source repository; it hosts only the original weights, so you must quantize them yourself for local use (see the sketch below).
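One plausible download-and-quantize pipeline is sketched below, assuming the repo id from the source link and llama.cpp's standard conversion tools (convert_hf_to_gguf.py and llama-quantize). Treat the paths and output filenames as placeholders.

```python
# Download-then-quantize sketch using huggingface_hub and llama.cpp's
# conversion tools. Repo id and local paths are illustrative assumptions.
import subprocess
from huggingface_hub import snapshot_download

# 1. Pull the original safetensors weights (~48 GB in bf16).
local_dir = snapshot_download("community/DeepSeek-R1-Distill-Mistral-24B")

# 2. Convert to a GGUF file at f16 with llama.cpp's converter script.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", local_dir,
     "--outfile", "r1-mistral-24b-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 3. Quantize down to Q4_K_M, the starting point recommended above.
subprocess.run(
    ["llama-quantize", "r1-mistral-24b-f16.gguf",
     "r1-mistral-24b-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```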
Hardware that runs this
Cards with enough VRAM for at least one quantization of DeepSeek R1 Distill Mistral 24B.
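To check your own card against the 18 GB Q4_K_M figure from the table above, a quick sketch using PyTorch's CUDA introspection; the threshold is the table value, everything else is generic.

```python
# Quick check against the 18 GB Q4_K_M requirement from the table above.
import torch

REQUIRED_GB = 18  # Q4_K_M row above

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    fits = total_gb >= REQUIRED_GB
    print(f"{torch.cuda.get_device_name(0)}: {total_gb:.0f} GB "
          f"-> {'fits' if fits else 'does not fit'} Q4_K_M")
else:
    print("No CUDA device detected; consider CPU offload or a smaller quant.")
```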
Models worth comparing
Models in the same parameter band, plus one tier above and below, so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run DeepSeek R1 Distill Mistral 24B?
Roughly 18 GB, the requirement for the Q4_K_M quantization listed above. Offloading layers to CPU can reduce that at the cost of speed.
Can I use DeepSeek R1 Distill Mistral 24B commercially?
Yes. It's released under Apache 2.0, a permissive license that allows commercial use, modification, and redistribution.
What's the context length of DeepSeek R1 Distill Mistral 24B?
The card doesn't state one directly, but the Mistral Small 3 base supports a 32K-token context window, which the distill should inherit.
Source: huggingface.co/community/DeepSeek-R1-Distill-Mistral-24B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.