deepseek · 24B parameters · Commercial OK

DeepSeek R1 Distill Mistral 24B

Community R1 distill onto a Mistral Small 3 base. Apache 2.0; combines R1 reasoning with Mistral instruction polish.

License: Apache 2.0 · Released Mar 18, 2025 · Context: 32,768 tokens

Overview

A community distillation that transfers DeepSeek R1's reasoning onto a Mistral Small 3 base. The result pairs R1-style reasoning with Mistral's instruction-following polish, ships under Apache 2.0, and supports a 32,768-token context window.

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.

Strengths

  • Apache 2.0 reasoning model
  • Mistral instruction-following base

Weaknesses

  • Community distillation, so less validated than DeepSeek's official Qwen and Llama R1 distills

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization   File size   VRAM required
Q4_K_M         14.0 GB     18 GB
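
As a rough sketch of that trade-off, the Python below estimates total VRAM as the GGUF file size plus runtime overhead (KV cache, activations, framework buffers). The ~4 GB overhead is an assumption back-solved from the 14.0 GB file / 18 GB VRAM pair above, not a published formula, and it grows with context length and batch size.

```python
# Rough VRAM estimate for a fully GPU-offloaded GGUF model.
# Assumption: the ~4 GB overhead (KV cache, activations, CUDA buffers)
# is back-solved from the table above (18 GB - 14.0 GB) and will vary
# with context length and batch size.

def estimate_vram_gb(file_size_gb: float, overhead_gb: float = 4.0) -> float:
    """Return an approximate VRAM requirement in GB."""
    return file_size_gb + overhead_gb

if __name__ == "__main__":
    quants = {"Q4_K_M": 14.0}  # file sizes in GB, from the table above
    for name, size in quants.items():
        print(f"{name}: ~{estimate_vram_gb(size):.1f} GB VRAM")
        # Q4_K_M: ~18.0 GB VRAM
```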

Get the model

Hugging Face

Original weights

huggingface.co/community/DeepSeek-R1-Distill-Mistral-24B

Source repository; you will need to quantize the weights yourself (see the sketch below).
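
A minimal sketch of that workflow, assuming a local llama.cpp checkout built at ./llama.cpp and that the repo id from the link above resolves (it may be a placeholder): download the original weights with huggingface_hub, convert to GGUF, then quantize to Q4_K_M. Script and binary names follow current llama.cpp conventions and can differ between versions.

```python
# Sketch: fetch the original weights and produce a Q4_K_M GGUF yourself.
# Assumptions: huggingface_hub is installed, llama.cpp is checked out and
# built at ./llama.cpp, and the repo id below (from the source link) exists.
import subprocess
from huggingface_hub import snapshot_download

repo_id = "community/DeepSeek-R1-Distill-Mistral-24B"  # from the source link
local_dir = snapshot_download(repo_id)  # downloads weights + tokenizer files

# Convert the HF checkpoint to a 16-bit GGUF with llama.cpp's helper script.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", local_dir,
     "--outtype", "f16", "--outfile", "model-f16.gguf"],
    check=True,
)

# Quantize down to Q4_K_M, the starting point recommended above.
subprocess.run(
    ["llama.cpp/llama-quantize", "model-f16.gguf",
     "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```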

Hardware that runs this

Cards with enough VRAM for at least one quantization of DeepSeek R1 Distill Mistral 24B.

Compare alternatives

Models worth comparing

Same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run DeepSeek R1 Distill Mistral 24B?

18 GB of VRAM is enough to run DeepSeek R1 Distill Mistral 24B at the Q4_K_M quantization (file size 14.0 GB). Higher-quality quantizations need more.

Can I use DeepSeek R1 Distill Mistral 24B commercially?

Yes. DeepSeek R1 Distill Mistral 24B ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.

What's the context length of DeepSeek R1 Distill Mistral 24B?

DeepSeek R1 Distill Mistral 24B supports a context window of 32,768 tokens (commonly written as 32K).
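
To check that a prompt actually fits that window, here is a minimal sketch using the Hugging Face tokenizer, assuming the repo id from the source link resolves and ships tokenizer files:

```python
# Count prompt tokens against the 32,768-token context window.
# Assumption: the repo id (taken from the source link) exists on the Hub
# and includes tokenizer files.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 32_768  # from the model card above

tokenizer = AutoTokenizer.from_pretrained(
    "community/DeepSeek-R1-Distill-Mistral-24B"
)
prompt = "Summarize the trade-offs of Q4_K_M quantization for a 24B model."
n_tokens = len(tokenizer.encode(prompt))
print(f"{n_tokens} prompt tokens; fits in window: {n_tokens <= CONTEXT_WINDOW}")
```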

Source: huggingface.co/community/DeepSeek-R1-Distill-Mistral-24B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.