
What can NVIDIA GB200 NVL72 run for creative?

Build: NVIDIA GB200 NVL72 + — + 32 GB RAM (Windows)

Memory: 13,824 GB VRAM + 32 GB system RAM
Runner: llama.cpp / Ollama (CUDA)
Use case: Creative (other tabs: Any · Chat · Coding · Agents · Reasoning · Vision · Long context)

Runs comfortably: 176 models

Ranked by fit for the creative use case plus predicted speed. Click a row for a VRAM breakdown. A sketch of the ranking rule follows.
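
To make the ordering concrete, here is a minimal sketch of a fit-then-speed sort in Python. The fit scores, their scale, and the tie-breaking are illustrative assumptions, not the engine's actual weights; only the tok/s figures come from the table below.

# Hypothetical ranking sketch: sort by per-use-case fit score, then by
# predicted decode speed. Fit values here are made up for illustration.

def rank_models(models, use_case="creative"):
    return sorted(
        models,
        key=lambda m: (m["fit"][use_case], m["tok_s"]),
        reverse=True,
    )

models = [
    {"name": "Gemma 2 9B Instruct",   "fit": {"creative": 0.90}, "tok_s": 544},
    {"name": "Hermes 3 Llama 3.1 8B", "fit": {"creative": 0.95}, "tok_s": 612},
]
print([m["name"] for m in rank_models(models)])
# ['Hermes 3 Llama 3.1 8B', 'Gemma 2 9B Instruct']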

#1 Hermes 3 Llama 3.1 8B · 8B · hermes · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 22.9 GB · Headroom: 13,801.1 GB
ollama run hermes3:8b
612 tok/s (E)
Weights: 8.50 GB · KV cache: 4.00 GB · Activations: 8.62 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
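
For reference, each card's VRAM and headroom figures are just sums and differences of the four components it lists. A minimal sketch using card #1's numbers (how the engine derives each component from the quant and architecture is not reproduced here):

# VRAM arithmetic as shown on each card: components sum to the VRAM
# figure; headroom is device VRAM minus that total.

def vram_breakdown(weights_gb, kv_gb, activations_gb, runtime_gb,
                   device_vram_gb=13824):
    total = weights_gb + kv_gb + activations_gb + runtime_gb
    return total, device_vram_gb - total

total, headroom = vram_breakdown(8.50, 4.00, 8.62, 1.80)
print(f"VRAM: {total:.1f} GB · Headroom: {headroom:.1f} GB")
# VRAM: 22.9 GB · Headroom: 13801.1 GB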
#2 Hermes 3 Llama 3.1 70B · 70B · hermes · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 89.4 GB · Headroom: 13,734.6 GB
ollama run hermes3:70b
123 tok/s (E)
Weights: 42.26 GB · KV cache: 35.00 GB · Activations: 10.31 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#3 Gemma 2 9B Instruct · 9B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 24.5 GB · Headroom: 13,799.5 GB
ollama run gemma2:9b
544 tok/s (E)
Weights: 9.56 GB · KV cache: 4.50 GB · Activations: 8.67 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#4 Dolphin 3.0 Mistral 24B · 24B · dolphin · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 37.2 GB · Headroom: 13,786.8 GB
ollama run dolphin-mistral:24b
359 tok/s (E)
Weights: 14.49 GB · KV cache: 12.00 GB · Activations: 8.92 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#5 Dolphin 3.0 Llama 3.2 3B · 3B · dolphin · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.4 GB · Headroom: 13,810.6 GB
2871 tok/s (E)
Weights: 1.81 GB · KV cache: 1.50 GB · Activations: 8.28 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#6 Hermes 3 Llama 3.2 3B · 3B · hermes · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.4 GB · Headroom: 13,810.6 GB
2871 tok/s (E)
Weights: 1.81 GB · KV cache: 1.50 GB · Activations: 8.28 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#7 Hermes 4 Llama 3.3 70B · 70B · hermes · Commercial OK
Quant: AWQ-INT4 · Context: 8,192 · VRAM: 118.5 GB · Headroom: 13,705.5 GB
74 tok/s (E)
Weights: 70.00 GB · KV cache: 35.00 GB · Activations: 11.69 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#8 Dolphin 3 Llama 3.3 70B · 70B · dolphin · Commercial OK
Quant: AWQ-INT4 · Context: 8,192 · VRAM: 118.5 GB · Headroom: 13,705.5 GB
74 tok/s (E)
Weights: 70.00 GB · KV cache: 35.00 GB · Activations: 11.69 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#9 Gemma 4 E2B (Effective 2B) · 2B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 13.2 GB · Headroom: 13,810.8 GB
ollama run gemma4:e2b
2447 tok/s (E)
Weights: 2.13 GB · KV cache: 1.00 GB · Activations: 8.30 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#10 Gemma 4 E4B (Effective 4B) · 4B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 13,807.5 GB
ollama run gemma4:e4b
1224 tok/s (E)
Weights: 4.25 GB · KV cache: 2.00 GB · Activations: 8.40 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#11 Gemma 3 4B · 4B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 13,807.5 GB
ollama run gemma3:4b
1224 tok/s (E)
Weights: 4.25 GB · KV cache: 2.00 GB · Activations: 8.40 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#12 CodeGemma 7B · 7B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 17.9 GB · Headroom: 13,806.1 GB
ollama run codegemma:7b
1230 tok/s (E)
Weights: 4.23 GB · KV cache: 3.50 GB · Activations: 8.40 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM

~$80–150

Doubles your CPU-offload working set; helps when models don't quite fit in VRAM. A minimal offload sketch follows this card.

Unlocks 7 newly comfortable models, including:

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • Whisper Large v3 Turbo
  • SmolLM 2 360M Instruct
Shop this upgrade↗
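
As a concrete picture of what the CPU-offload working set means with the runners listed for this build, here is a minimal llama-cpp-python sketch; the model path and layer split are illustrative, not a tested configuration:

# Partial GPU offload with llama-cpp-python: layers beyond n_gpu_layers
# stay in system RAM, so more RAM means more layers can spill over.

from llama_cpp import Llama

llm = Llama(
    model_path="hermes-3-llama-3.1-70b.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=60,  # keep 60 layers in VRAM; the rest run from RAM
    n_ctx=8192,       # matches the 8,192-token context used above
)
out = llm("Write a two-line poem about headroom.", max_tokens=64)
print(out["choices"][0]["text"])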

Add a second NVIDIA GB200 NVL72

see current pricing

Tensor parallelism splits the model across both units, effectively doubling VRAM. Bandwidth doesn't double, though: expect roughly 1.5× single-unit speed in practice. A tensor-parallel sketch follows the disclosure below.

Unlocks 7 newly comfortable models, including:

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • Whisper Large v3 Turbo
  • SmolLM 2 360M Instruct
Shop this upgrade↗

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
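
For the curious, here is what a two-device tensor-parallel launch looks like in vLLM (a different runner from the llama.cpp/Ollama setup above, used here only because its API makes the split explicit). The checkpoint name is an example:

# Tensor parallelism in vLLM: tensor_parallel_size=2 shards each weight
# matrix across two devices, doubling usable VRAM. The inter-device
# communication it adds is why scaling lands near 1.5x, not 2x.

from vllm import LLM, SamplingParams

llm = LLM(
    model="NousResearch/Hermes-3-Llama-3.1-70B",  # example checkpoint
    tensor_parallel_size=2,
)
params = SamplingParams(max_tokens=64)
outputs = llm.generate(["Tell a one-sentence story."], params)
print(outputs[0].outputs[0].text)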

Won't run · top 3 popular models

These need more memory than this build has, even with CPU offload. Shown for orientation; the fit rule is sketched after the list.

Qwen 3.6 35B-A3B (MTP) · 35B · qwen · Commercial OK
Even with CPU offload, needs more memory than your VRAM (13,824 GB) + 60% of system RAM (19 GB) combined.
Qwen 3.6 27B (MTP) · 27B · qwen · Commercial OK
Even with CPU offload, needs more memory than your VRAM (13,824 GB) + 60% of system RAM (19 GB) combined.
Ring-2.6-1T · 1000B · other · Commercial OK
Even with CPU offload, needs more memory than your VRAM (13,824 GB) + 60% of system RAM (19 GB) combined.
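
The rejection rule quoted in each entry above is a single comparison. A minimal sketch with this build's numbers (the 60% offload budget is the engine's stated assumption):

# "Won't run" check: a model is rejected when its memory requirement
# exceeds VRAM plus 60% of system RAM (the assumed offload budget).

def fits_with_offload(required_gb, vram_gb=13824, system_ram_gb=32,
                      offload_fraction=0.60):
    return required_gb <= vram_gb + offload_fraction * system_ram_gb

print(fits_with_offload(118.5))    # True: Hermes 4 70B fits easily
print(fits_with_offload(14000.0))  # False: beyond VRAM + 19 GB of RAM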

How to read these numbers

M · Measured: we ran this exact combo on owner hardware.
~ · Extrapolated: predicted from a measured benchmark on similar-bandwidth hardware.
E · Estimated: pure formula based on VRAM bandwidth and model architecture.

Full methodology →
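
For "E" rows, a back-of-envelope version of the formula is worth seeing: decode is memory-bandwidth-bound, so every weight byte is read once per token and tok/s ≈ effective bandwidth ÷ weight size. Back-solving from the table above gives an effective bandwidth near 5.2 TB/s for this build; treat that constant, and the omission of other architecture terms, as our simplification rather than the engine's exact formula.

# Bandwidth-bound decode estimate: tokens/s ~= effective memory
# bandwidth / bytes of weights streamed per token. 5200 GB/s is
# back-solved from this page's "E" rows, not a published spec.

def estimate_tok_s(weights_gb, effective_bw_gb_s=5200):
    return effective_bw_gb_s / weights_gb

print(round(estimate_tok_s(8.50)))   # ~612, matches card #1
print(round(estimate_tok_s(42.26)))  # ~123, matches card #2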

Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.