Build: NVIDIA GB200 NVL72, 32 GB RAM (Windows)
Ranked by fit for the chat use case and predicted speed. Click a row for a VRAM breakdown.
- ollama run llama3.2:3b
- ollama run mistral:7b
- ollama run qwen3:8b
- ollama run gemma2:9b
- ollama run mistral-nemo:12b
- ollama run gemma3:12b
- ollama run llama3.1:8b
- ollama run qwen3:14b
- ollama run gemma4:26b-moe
- ollama run mistral-small:24b

Hypothetical scenarios. We re-ran the compatibility engine for each.
~$80–150
Doubles your CPU-offload working set; helps when models don't quite fit in VRAM.
Unlocks: 1 new comfortable model
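The "doubles your working set" claim follows directly from the 60%-of-RAM offload rule used elsewhere on this page. A minimal sketch, assuming the upgrade takes the build from 32 GB to 64 GB of system RAM (the exact target capacity is not stated above):

```python
RAM_OFFLOAD_FRACTION = 0.6  # share of system RAM counted toward CPU offload, per this page


def offload_budget_gb(ram_gb: float) -> float:
    """Portion of system RAM available as CPU-offload working set."""
    return RAM_OFFLOAD_FRACTION * ram_gb


# Current build: 32 GB RAM; assumed upgrade: 64 GB RAM.
before = offload_budget_gb(32)  # ~19.2 GB
after = offload_budget_gb(64)   # ~38.4 GB, i.e. double the working set
```

Doubling RAM doubles the offload budget because the 60% factor is applied linearly.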
Needs more memory than you have. Shown for orientation.
Even with CPU offload, needs more memory than your VRAM (13824 GB) + 60% of system RAM (19 GB) combined.
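The fit check above compares a model's memory need against total VRAM plus 60% of system RAM. A minimal sketch of that rule, with the function name as my own illustration:

```python
RAM_OFFLOAD_FRACTION = 0.6  # share of system RAM counted toward CPU offload, per this page


def fits_with_offload(model_gb: float, vram_gb: float, ram_gb: float) -> bool:
    """True if the model fits in VRAM plus the CPU-offload share of system RAM."""
    capacity_gb = vram_gb + RAM_OFFLOAD_FRACTION * ram_gb
    return model_gb <= capacity_gb


# Example with the build above: 13824 GB VRAM, 32 GB system RAM.
fits_with_offload(8.0, 13824, 32)  # an 8 GB model fits with room to spare
```

A model is flagged "needs more memory" only when it exceeds that combined capacity.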
Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.