
DeepSeek R1 Distill Qwen 7B

Smallest practical R1 distill. Reasoning on a 6GB GPU.

License: MIT · Released Jan 20, 2025 · Context: 131,072 tokens

Overview

DeepSeek R1 Distill Qwen 7B is the smallest practical distillation of DeepSeek's R1 reasoning model, fine-tuned from a Qwen 2.5 7B base on reasoning traces generated by the full R1. At the Q4_K_M quantization it fits on a 6 GB GPU, bringing step-by-step reasoning to modest consumer hardware.

Strengths

  • MIT license, so commercial use is unrestricted
  • Chain-of-thought reasoning on a 6 GB GPU at Q4_K_M

Weaknesses

  • Limited reasoning depth compared with the larger 14B and 32B R1 distills; expect weaker results on hard multi-step problems

Quantization variants

Each quantization trades model quality for a smaller file size and lower VRAM use. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          4.7 GB       6 GB
Q8_0            8.1 GB       10 GB
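
As a rough way to apply the table, here is a minimal Python sketch that reads the VRAM of the first CUDA device with PyTorch and reports which quantization should fit. The thresholds come from the table above; the helper name is just for illustration.

```python
import torch

# (quantization name, minimum VRAM in GiB) per the table above
QUANT_REQUIREMENTS = [
    ("Q8_0", 10.0),    # 8.1 GB file, ~10 GB VRAM
    ("Q4_K_M", 6.0),   # 4.7 GB file, ~6 GB VRAM
]

def best_quant_for_gpu(device: int = 0) -> str | None:
    """Return the highest-quality quantization that fits on the given GPU."""
    if not torch.cuda.is_available():
        return None
    total_bytes = torch.cuda.get_device_properties(device).total_memory
    total_gib = total_bytes / (1024 ** 3)
    for name, required_gib in QUANT_REQUIREMENTS:
        if total_gib >= required_gib:
            return name
    return None

if __name__ == "__main__":
    print(best_quant_for_gpu() or "No quantization of this model fits on this GPU")
```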

Get the model

Ollama

One-line install

ollama run deepseek-r1:7b

Read our Ollama review →
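
Once the model is pulled, you can also drive it programmatically through Ollama's local REST API. The sketch below assumes a default Ollama server on localhost:11434 and uses the /api/chat endpoint; R1 distills emit their chain of thought inside a <think>...</think> block, which the snippet separates from the final answer.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

def ask(prompt: str, model: str = "deepseek-r1:7b") -> tuple[str, str]:
    """Send one chat turn to Ollama and split the reply into (reasoning, answer)."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # return a single JSON object instead of a stream
        },
        timeout=300,
    )
    resp.raise_for_status()
    content = resp.json()["message"]["content"]

    # R1 distills wrap their reasoning in <think>...</think>
    reasoning, sep, answer = content.partition("</think>")
    if not sep:  # no think block found; everything is the answer
        return "", content.strip()
    return reasoning.replace("<think>", "").strip(), answer.strip()

if __name__ == "__main__":
    thinking, answer = ask("What is 17 * 24? Show your reasoning.")
    print("Reasoning:\n", thinking)
    print("\nAnswer:\n", answer)
```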

HuggingFace

Original weights

huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

Source repository with the original full-precision weights. You will need to quantize them yourself (for example, convert to GGUF) to run them in llama.cpp-based tools like Ollama, or load them as-is with a GPU framework such as Transformers.
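
If you want to run the original weights directly rather than a quantized build, a minimal Hugging Face Transformers sketch looks like the following. It assumes a GPU with enough memory for the full-precision 7B weights (on the order of 15 GB in bfloat16) and uses only the standard Transformers API; the prompt is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # unquantized load; needs roughly 15 GB of VRAM
    device_map="auto",           # place layers on available GPU(s)
)

# Build a chat-formatted prompt using the model's own chat template.
messages = [{"role": "user", "content": "Is 1009 a prime number? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```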

Hardware that runs this

Cards with enough VRAM for at least one quantization of DeepSeek R1 Distill Qwen 7B. In practice that means any GPU with 6 GB or more of VRAM for the Q4_K_M build, and 10 GB or more for Q8_0.

Compare alternatives

Models worth comparing

Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run DeepSeek R1 Distill Qwen 7B?

6GB of VRAM is enough to run DeepSeek R1 Distill Qwen 7B at the Q4_K_M quantization (file size 4.7 GB). Higher-quality quantizations need more.

Can I use DeepSeek R1 Distill Qwen 7B commercially?

Yes. DeepSeek R1 Distill Qwen 7B ships under the MIT license, which permits commercial use, modification, and redistribution. Always read the license text before deployment.

What's the context length of DeepSeek R1 Distill Qwen 7B?

DeepSeek R1 Distill Qwen 7B supports a context window of 131,072 tokens (128K).
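
Note that local runners rarely allocate the full window by default; Ollama, for instance, starts with a much shorter context unless you ask for more. Below is a minimal sketch of requesting a larger window through the API's num_ctx option, assuming a local Ollama server; 32K is an illustrative value, and larger windows need substantially more memory for the KV cache.

```python
import requests

# Request a 32K context window instead of Ollama's default.
# The model supports up to 131,072 tokens, but the KV cache must fit in memory.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:7b",
        "prompt": "Summarize the following document: ...",
        "stream": False,
        "options": {"num_ctx": 32768},
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["response"])
```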

How do I install DeepSeek R1 Distill Qwen 7B with Ollama?

Run `ollama pull deepseek-r1:7b` to download, then `ollama run deepseek-r1:7b` to start a chat session. The default quantization is Q4_K_M.

Source: huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.