
What can NVIDIA GB200 NVL72 run for reasoning?

Build: NVIDIA GB200 NVL72 + — + 32 GB RAM (Windows)

Memory: 13,824 GB VRAM + 32 GB system RAM
Runner: llama.cpp / Ollama (CUDA)

Runs comfortably
147 models

Ranked by fit for the reasoning use case plus predicted speed. Click a row for the VRAM breakdown (worked through after the list).
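The exact scoring behind this ordering isn't published (speed alone can't explain it: the 582 tok/s card at #3 outranks the 615 tok/s card at #4). One plausible shape, with entirely made-up fit scores and weights, just to show the idea:

# Hypothetical ranking sketch; the real fit scores and weights are not public.
def rank(models, fit_w=0.7, speed_w=0.3):
    # models: dicts with a 0-1 "reasoning_fit" score and a predicted "tok_s"
    top_speed = max(m["tok_s"] for m in models)
    return sorted(models, key=lambda m: fit_w * m["reasoning_fit"]
                  + speed_w * m["tok_s"] / top_speed, reverse=True)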

#1 Llama 3.1 Nemotron Nano 8B
8B · llama · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 19.1 GB · Headroom: 13,804.9 GB
1077 tok/s (E)
Weights 4.83 GB · KV cache 4.00 GB · Activations 8.43 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#2 DeepSeek R1 Distill Qwen 7B
7B · deepseek · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 21.3 GB · Headroom: 13,802.7 GB
ollama run deepseek-r1:7b
699 tok/s (E)
Weights 7.44 GB · KV cache 3.50 GB · Activations 8.56 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#3 DeepSeek V4 Flash (284B MoE)
284B · deepseek · Commercial OK
Quant: Q5_K_M · Context: 8,192 · VRAM: 357.0 GB · Headroom: 13,467.0 GB
582 tok/s (E)
Weights 195.25 GB · KV cache 142.00 GB · Activations 17.95 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#4 Phi-4 Reasoning 14B
14B · phi · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 25.9 GB · Headroom: 13,798.1 GB
ollama run phi4-reasoning:14b
615 tok/s (E)
Weights 8.45 GB · KV cache 7.00 GB · Activations 8.61 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#5 DeepSeek R1 Distill Qwen 14B
14B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 25.9 GB · Headroom: 13,798.1 GB
ollama run deepseek-r1:14b
615 tok/s (E)
Weights 8.45 GB · KV cache 7.00 GB · Activations 8.61 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#6 Qwen 3.5 235B-A17B (MoE)
397B · qwen · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 460.2 GB · Headroom: 13,363.8 GB
507 tok/s (E)
Weights 239.69 GB · KV cache 198.50 GB · Activations 20.18 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#7 QwQ 32B Preview
32B · qwen · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 46.3 GB · Headroom: 13,777.7 GB
ollama run qwq:32b
269 tok/s (E)
Weights 19.32 GB · KV cache 16.00 GB · Activations 9.16 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#8 DeepSeek V4 Pro (1.6T MoE)
1600B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 1,824.3 GB · Headroom: 11,999.7 GB
176 tok/s (E)
Weights 966.00 GB · KV cache 800.00 GB · Activations 56.49 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#9 DeepSeek R1 Distill Qwen 32B
32B · deepseek · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 61.7 GB · Headroom: 13,762.3 GB
ollama run deepseek-r1:32b
153 tok/s (E)
Weights 34.00 GB · KV cache 16.00 GB · Activations 9.89 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#10 DeepSeek R1 Distill Llama 8B
8B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 19.1 GB · Headroom: 13,804.9 GB
1077 tok/s (E)
Weights 4.83 GB · KV cache 4.00 GB · Activations 8.43 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#11 DeepSeek R1 Distill Mistral 24B
24B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 37.2 GB · Headroom: 13,786.8 GB
359 tok/s (E)
Weights 14.49 GB · KV cache 12.00 GB · Activations 8.92 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
#12 DeepSeek V4
745B · deepseek · Commercial OK
Quant: AWQ-INT4 · Context: 8,192 · VRAM: 1,164.7 GB · Headroom: 12,659.3 GB
137 tok/s (E)
Weights 745.00 GB · KV cache 372.50 GB · Activations 45.44 GB · Runtime 1.80 GB
Model details → · Run on benchmark page →
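Two checks on the numbers above. First, each card's VRAM total is just its four components summed, and headroom is the 13,824 GB pool minus that total. Second, at 8,192 context every KV cache figure on this page, MoE included, equals half the parameter count in GB (8B → 4.00, 14B → 7.00, 32B → 16.00, 1600B → 800.00), which suggests a simple per-parameter heuristic rather than architecture-exact math. The sketch below reproduces both; it is our reconstruction from the cards, not a published formula.

# Reconstruction of the card math (inferred from this page, not an official formula).

def kv_cache_gb(params_b: float, context: int = 8192) -> float:
    # Every card here matches ~0.5 GB per billion params at 8,192 tokens.
    return 0.5 * params_b * (context / 8192)

def vram_total_gb(weights, kv, activations, runtime=1.80):
    return weights + kv + activations + runtime

# Card #1 (Llama 3.1 Nemotron Nano 8B, Q4_K_M):
total = vram_total_gb(4.83, kv_cache_gb(8), 8.43)
print(round(total, 1))          # 19.1 GB, matching the card
print(round(13824 - total, 1))  # 13804.9 GB headroom, matching the card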

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM

~$80–150

Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM; the budget math is sketched after this list.

Unlocks: 36 newly comfortable models

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • Gemma 4 E2B (Effective 2B)
  • Llama 3.2 3B Instruct
Shop this upgrade ↗
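Why +32 GB "doubles" the working set: per the won't-run notes below, the engine counts VRAM plus 60% of system RAM as the offload ceiling, so the RAM share goes from 19.2 GB to 38.4 GB. A minimal sketch, assuming that 60% rule is the whole formula:

# Offload budget per the 60%-of-RAM rule quoted in the won't-run section below.
def offload_budget_gb(vram_gb: float, system_ram_gb: float) -> float:
    return vram_gb + 0.6 * system_ram_gb

print(offload_budget_gb(13824, 32))  # 13843.2 GB with the current build
print(offload_budget_gb(13824, 64))  # 13862.4 GB after the +32 GB upgrade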

Add a second NVIDIA GB200 NVL72

see current pricing

Tensor parallelism splits the model across both units, effectively doubling VRAM. Bandwidth doesn't double, though, so expect roughly 1.5× single-unit speed in practice; a back-of-envelope sketch follows this list.

Unlocks: 36 newly comfortable models

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • Gemma 4 E2B (Effective 2B)
  • Llama 3.2 3B Instruct
Shop this upgrade ↗
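For the second scenario, the page's own rule of thumb is: memory pools add, but throughput only reaches about 1.5× a single unit. A sketch of that estimate (the 1.5 factor is the page's heuristic, not a measurement of ours):

# Two-way tensor parallelism using the page's ~1.5x throughput rule of thumb.
def two_unit_estimate(vram_gb: float, tok_s: float) -> tuple[float, float]:
    return 2 * vram_gb, 1.5 * tok_s

pooled_vram, est_speed = two_unit_estimate(13824, 1077)  # card #1's predicted tok/s
print(pooled_vram, est_speed)  # 27648 GB pooled, ~1615.5 tok/s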

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

Won't run
Top 3 popular models

These need more memory than you have. Shown for orientation.

Qwen 3.6 35B-A3B (MTP) · 35B · qwen · Commercial OK

Even with CPU offload, it needs more memory than your VRAM (13,824 GB) and 60% of your system RAM (19 GB) combined.

—
Qwen 3.6 27B (MTP) · 27B · qwen · Commercial OK

Even with CPU offload, it needs more memory than your VRAM (13,824 GB) and 60% of your system RAM (19 GB) combined.

—
Ring-2.6-1T · 1000B · other · Commercial OK

Even with CPU offload, it needs more memory than your VRAM (13,824 GB) and 60% of your system RAM (19 GB) combined.

—

How to read these numbers

M · Measured — we ran this exact combo on owner hardware.

~ · Extrapolated — predicted from a measured benchmark on similar-bandwidth hardware.

E · Estimated — pure formula based on VRAM bandwidth and model architecture.
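A note on the "E" formula: decode on memory-bound hardware runs at roughly effective bandwidth divided by the bytes read per token (the active weights). Every estimated tok/s figure above back-solves to a single effective-bandwidth constant of about 5,200 GB/s, so that is presumably the shape of the formula; the constant is our inference from the cards, not a documented number.

# Back-solved speed estimate: tok/s ~ effective bandwidth / active weight size.
# The 5200 GB/s constant is inferred from the cards above, not documented.
EFFECTIVE_BW_GBPS = 5200

def estimated_tok_s(active_weights_gb: float) -> float:
    return EFFECTIVE_BW_GBPS / active_weights_gb

print(round(estimated_tok_s(4.83)))   # ~1077, card #1
print(round(estimated_tok_s(34.00)))  # ~153, card #9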

Full methodology →

Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.