
What can NVIDIA GB200 NVL72 run for long context?

Build: NVIDIA GB200 NVL72 + — + 32 GB RAM (Windows)

Memory: 13,824 GB VRAM + 32 GB system RAM
Runner: llama.cpp / Ollama (CUDA)

Runs comfortably: 166 models

Ranked by fit for the long-context use case plus predicted speed. Click a row for the VRAM breakdown; a sizing sketch follows the list.

#1 · DeepSeek V4 Flash (284B MoE) · 284B · deepseek · Commercial OK
Quant: Q5_K_M · Context: 8,192 · VRAM: 357.0 GB · Headroom: 13,467.0 GB
582 tok/s (E)
Weights: 195.25 GB · KV cache: 142.00 GB · Activations: 17.95 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#2 · DeepSeek V4 Pro (1.6T MoE) · 1600B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 1,824.3 GB · Headroom: 11,999.7 GB
176 tok/s (E)
Weights: 966.00 GB · KV cache: 800.00 GB · Activations: 56.49 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#3 · Nemotron 3 Nano (30B-A3B) · 30B · other · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 58.5 GB · Headroom: 13,765.5 GB
ollama run nemotron3:nano
163 tok/s (E)
Weights: 31.88 GB · KV cache: 15.00 GB · Activations: 9.79 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#4 · Nemotron 3 Super (120B-A12B) · 120B · other · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 146.1 GB · Headroom: 13,677.9 GB
ollama run nemotron3:super
72 tok/s (E)
Weights: 72.45 GB · KV cache: 60.00 GB · Activations: 11.81 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#5 · Phi-3.5 Mini Instruct · 3.8B · phi · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.1 GB · Headroom: 13,807.9 GB
ollama run phi3.5:3.8b
1288 tok/s (E)
Weights: 4.04 GB · KV cache: 1.90 GB · Activations: 8.39 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#6 · Gemma 4 E4B (Effective 4B) · 4B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 13,807.5 GB
ollama run gemma4:e4b
1224 tok/s (E)
Weights: 4.25 GB · KV cache: 2.00 GB · Activations: 8.40 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#7 · Qwen 2.5 7B Instruct · 7B · qwen · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 18.3 GB · Headroom: 13,805.7 GB
ollama run qwen2.5:7b
699 tok/s (E)
Weights: 7.44 GB · KV cache: 0.47 GB · Activations: 8.56 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#8 · Qwen 3 8B · 8B · qwen · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 22.9 GB · Headroom: 13,801.1 GB
ollama run qwen3:8b
612 tok/s (E)
Weights: 8.50 GB · KV cache: 4.00 GB · Activations: 8.62 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#9 · Qwen 3.5 235B-A17B (MoE) · 397B · qwen · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 460.2 GB · Headroom: 13,363.8 GB
507 tok/s (E)
Weights: 239.69 GB · KV cache: 198.50 GB · Activations: 20.18 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#10 · Mistral Nemo 12B Instruct · 12B · mistral · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 29.4 GB · Headroom: 13,794.6 GB
ollama run mistral-nemo:12b
408 tok/s (E)
Weights: 12.75 GB · KV cache: 6.00 GB · Activations: 8.83 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#11 · Llama 3.1 8B Instruct · 8B · llama · Commercial OK
Quant: FP16 · Context: 8,192 · VRAM: 27.9 GB · Headroom: 13,796.1 GB
ollama run llama3.1:8b
325 tok/s (E)
Weights: 16.00 GB · KV cache: 1.07 GB · Activations: 8.99 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →

#12 · Qwen 3 14B · 14B · qwen · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 32.6 GB · Headroom: 13,791.4 GB
ollama run qwen3:14b
350 tok/s (E)
Weights: 14.88 GB · KV cache: 7.00 GB · Activations: 8.94 GB · Runtime: 1.80 GB
Model details → · Run on benchmark page →
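
Each VRAM figure above is the sum of its four breakdown rows, and headroom is total VRAM minus that sum. A minimal sketch of the arithmetic, assuming the components add linearly (the function is ours, not the site's engine; the 1.80 GB runtime overhead is constant across every card here):

    # Sum a card's VRAM components and compute headroom against total VRAM.
    def total_vram_gb(weights: float, kv_cache: float, activations: float,
                      runtime: float = 1.80) -> float:
        return weights + kv_cache + activations + runtime

    TOTAL_VRAM_GB = 13_824  # this build's VRAM

    # Check against card #1 (DeepSeek V4 Flash, Q5_K_M, 8,192 context):
    vram = total_vram_gb(195.25, 142.00, 17.95)  # 357.0 GB, as listed
    headroom = TOTAL_VRAM_GB - vram              # 13,467.0 GB, as listed
    print(vram, headroom)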

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM

~$80–150

Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.

Unlocks: 17 newly comfortable models, including:

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • Gemma 4 E2B (Effective 2B)
  • Whisper Large v3
Shop this upgrade ↗

Add a second NVIDIA GB200 NVL72

see current pricing

Tensor parallelism splits the model across both cards, effectively doubling VRAM. Bandwidth doesn't double, though; expect roughly 1.5× single-card speed in practice (see the arithmetic sketch after this card).

Unlocks: 17 newly comfortable models, including:

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • Gemma 4 E2B (Effective 2B)
  • Whisper Large v3
Shop this upgrade ↗
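
A quick arithmetic sketch of the dual-unit scenario, taking the ~1.5× scaling claim above at face value (the function is ours, for illustration; 582 tok/s is card #1's single-unit estimate):

    # Tensor parallelism doubles the VRAM budget, while the observed
    # scaling caps throughput at roughly 1.5x a single unit.
    def dual_gpu_estimate(single_tok_s: float, scaling: float = 1.5) -> float:
        return single_tok_s * scaling

    effective_vram_gb = 2 * 13_824  # 27,648 GB across both units
    print(dual_gpu_estimate(582))   # ~873 tok/s for card #1's model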

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

Won't run: top 3 popular models

These need more memory than you have, even with offload. Shown for orientation; the budget rule is sketched after the list.

Qwen 3.6 35B-A3B (MTP) · 35B · qwen · Commercial OK
Even with CPU offload, it needs more memory than your VRAM (13,824 GB) + 60% of system RAM (19 GB) combined.
—

Qwen 3.6 27B (MTP) · 27B · qwen · Commercial OK
Even with CPU offload, it needs more memory than your VRAM (13,824 GB) + 60% of system RAM (19 GB) combined.
—

Ring-2.6-1T · 1000B · other · Commercial OK
Even with CPU offload, it needs more memory than your VRAM (13,824 GB) + 60% of system RAM (19 GB) combined.
—
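
The cutoff these models hit is the offload budget quoted above. A minimal sketch of that rule, assuming it is applied exactly as stated (the function name is ours; only the 60% figure comes from this page):

    # A model "won't run" when its memory requirement exceeds
    # VRAM plus 60% of system RAM, even with CPU offload.
    def offload_budget_gb(vram_gb: float, system_ram_gb: float) -> float:
        return vram_gb + 0.6 * system_ram_gb

    # This build: 13,824 GB VRAM + 32 GB RAM -> 13,824 + 19.2 = 13,843.2 GB.
    print(offload_budget_gb(13_824, 32))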

How to read these numbers

M · Measured — we ran this exact combo on owner hardware.

~ · Extrapolated — predicted from a measured benchmark on similar-bandwidth hardware.

E · Estimated — a pure formula based on VRAM bandwidth and model architecture.

Full methodology →
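
For "E" rows, a common first-order model (our reconstruction, not necessarily the exact formula behind these numbers) treats decode speed as memory-bandwidth-bound: each generated token streams the active weights once, so bandwidth divided by bytes read per token gives a ceiling.

    # Ceiling on decode speed for a bandwidth-bound model: tokens/s is
    # roughly memory bandwidth divided by bytes streamed per token.
    def estimate_tok_s(bandwidth_gb_s: float, active_weights_gb: float) -> float:
        return bandwidth_gb_s / active_weights_gb

    # Illustration only: Phi-3.5 Mini's 4.04 GB of Q8_0 weights on a
    # hypothetical 8,000 GB/s of memory bandwidth.
    print(estimate_tok_s(8000, 4.04))  # ~1980 tok/s ceiling

Real figures land lower once activations, KV-cache reads, and kernel overheads are counted, which is one reason the listed estimates sit below this ceiling.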

Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.