RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP·Fredoline Eruo

Text & Reasoning · Open-weight · DeepSeek License (permissive commercial)

DeepSeek

by DeepSeek AI

DeepSeek's frontier reasoning + code MoE family. DeepSeek V3 (671B MoE) and V4 are the leading open-weight reasoning models in 2026; DeepSeek Coder V3 is the canonical open-weight code model. All ship under DeepSeek's permissive commercial-friendly license.

Best entry point for local use

Start with DeepSeek R1-Distill-Qwen-32B at Q4_K_M via Ollama — it runs on a single RTX 4090 24 GB, delivers MMLU 86.1% and MATH ~89%, and captures ~75% of full DeepSeek V3 reasoning quality at 1/20th the VRAM. For lighter hardware (<16 GB VRAM), use R1-Distill-Llama-8B at Q5_K_M — it needs ~6 GB and runs on a MacBook Pro M4 Max at 25+ tok/s. Skip the full-scale DeepSeek V3/V4 MoE (671B–1T params) for local deployment — Q4 requires ~380 GB VRAM minimum, and decode drops to ~3–4 tok/s even on a Mac Studio M3 Ultra. The distilled variants are the pragmatic entry point for 90% of users. Skip DeepSeek Coder V3 unless you specifically need FIM (fill-in-the-middle) code completion — the base V3 and distilled variants handle code generation competitively.
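
A minimal command sequence for the two recommended entry points, assuming Ollama is installed and the deepseek-r1:32b and deepseek-r1:8b library tags still map to the Qwen-32B and Llama-8B distills at their default quants (tags do get repointed, so check the model's tag list on the Ollama library first; the Q5_K_M variant mentioned above needs an explicit quant tag from that list):

  # 32B distill: ~20 GB download, fits a single 24 GB GPU at the default Q4_K_M quant
  ollama pull deepseek-r1:32b
  ollama run deepseek-r1:32b "Summarize the trade-offs of 4-bit quantization."

  # Lighter hardware (<16 GB VRAM, Apple silicon): the 8B distill
  ollama pull deepseek-r1:8b
  ollama run deepseek-r1:8b

  # Confirm the loaded model is resident on GPU / unified memory rather than CPU
  ollama ps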

Deployment guidance

  • Single-user local: Ollama + deepseek-r1:32b Q4_K_M on an RTX 4090 24 GB, or Apple M3 Ultra via MLX-LM. The distilled variants are standard dense architectures — use the same deployment stack as their base models (Llama or Qwen).
  • Multi-user MoE serving: vLLM 0.6.3+ with the FP8 DeepSeek V3 MLA kernel on 4× H100 SXM — ~6,000 tok/s at batch 32 with expert parallelism (command sketch below). Enable multi-token prediction (MTP) for single-user throughput (+1.8×).
  • Datacenter MoE: TensorRT-LLM 0.12.0+ FP8 on 8× H100 SXM — ~18,000 tok/s at batch 128.
  • Caveats: never quantize MoE router weights below FP16. ExLlamaV2 does not support DeepSeek MoE — use vLLM or SGLang. See the GPU buyer guide.
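
A sketch of the multi-user serving path above, assuming a 4× H100 node and a recent vLLM build with DeepSeek V3 FP8 support; flag names shift between vLLM releases, so confirm against vllm serve --help for your version:

  # Serve the native-FP8 DeepSeek V3 checkpoint across 4 GPUs with tensor parallelism.
  # Expert-parallel and MTP/speculative-decoding options exist in newer vLLM releases;
  # their exact flag names vary by version, so verify them before relying on them.
  vllm serve deepseek-ai/DeepSeek-V3 \
    --tensor-parallel-size 4 \
    --max-model-len 8192 \
    --trust-remote-code

  # Smoke-test the OpenAI-compatible endpoint (default port 8000)
  curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "deepseek-ai/DeepSeek-V3", "messages": [{"role": "user", "content": "Say hi"}]}'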

Featured models

Models in this family with our verdicts

  • DeepSeek Coder V3
  • DeepSeek V3 (671B MoE)
  • DeepSeek V4
  • DeepSeek V3 Lite (16B MoE)
  • DeepSeek R1 Distill Llama 8B

Recommended runtimes

  • SGLang
  • vLLM

Related families

  • Llama
  • Qwen

Related — keep moving

Compare hardware
  • RTX 3090 vs RTX 4090 →
  • RTX 4090 vs RTX 5090 →
Buyer guides
  • Best GPU for DeepSeek models →
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Runtimes that fit
  • SGLang →
  • vLLM →
Alternatives
  • Llama →
  • Qwen →
Before you buy

Verify DeepSeek runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →