What can NVIDIA GeForce RTX 5090 run?

Build: RTX 5090 + Ryzen 9 9950X + 64GB DDR5

Memory: 32 GB VRAM + 64 GB system RAM
Runner: llama.cpp / Ollama (CUDA)

Runs comfortably
34 models

Full-VRAM resident, with room for context. No compromises.

#1 Gemma 3 1B
1B
gemma
Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 11.1 GB · Headroom: 20.9 GB · TTFT: instant
ollama run gemma3:1b
1929
tok/s
E
Weights
0.60 GB
KV cache
0.50 GB
Activations
8.22 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~20 ms (instant)
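The TTFT figures on these cards are prefill time: the whole prompt must be processed before the first output token appears, and that time grows roughly linearly with prompt length at a fixed prefill rate. A minimal sketch of the relationship (the 25,000 tok/s prefill rate below is an illustrative assumption, not a measured figure for this card):

```python
def ttft_ms(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Rough time-to-first-token: how long prefill takes to chew
    through the prompt before the first output token is generated."""
    return prompt_tokens / prefill_tok_per_s * 1000.0

# Illustrative: at an assumed 25,000 tok/s prefill rate, a 512-token
# prompt takes ~20 ms, which this page labels "instant".
print(round(ttft_ms(512, 25_000), 1))  # 20.5
```

Doubling the prompt doubles the wait, which is why the longer-prompt cards further down drift from "instant" to "fast" to "noticeable".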
#2 Llama 3.2 1B Instruct
1B
llama
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 11.6 GB · Headroom: 20.4 GB · TTFT: instant
ollama run llama3.2:1b
1096
tok/s
E
Weights
1.06 GB
KV cache
0.50 GB
Activations
8.25 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~20 ms (instant)
#3 Gemma 3n E2B (Effective 2B)
2B
gemma
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 13.2 GB · Headroom: 18.8 GB · TTFT: instant
ollama run gemma3n:e2b
548
tok/s
E
Weights
2.13 GB
KV cache
1.00 GB
Activations
8.30 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~41 ms (instant)
#4 Llama 3.2 3B Instruct
3B
llama
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 14.8 GB · Headroom: 17.2 GB · TTFT: instant
ollama run llama3.2:3b
365
tok/s
E
Weights
3.19 GB
KV cache
1.50 GB
Activations
8.35 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~61 ms (instant)
#5 Phi-3.5 Vision
4.2B
phi
Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 14.8 GB · Headroom: 17.2 GB · TTFT: instant
459
tok/s
E
Weights
2.54 GB
KV cache
2.10 GB
Activations
8.32 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~86 ms (instant)
#6 Phi-3.5 Mini Instruct
3.8B
phi
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.1 GB · Headroom: 15.9 GB · TTFT: instant
ollama run phi3.5:3.8b
288
tok/s
E
Weights
4.04 GB
KV cache
1.90 GB
Activations
8.39 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~78 ms (instant)
#7 Gemma 3n E4B (Effective 4B)
4B
gemma
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 15.5 GB · TTFT: instant
ollama run gemma3n:e4b
274
tok/s
E
Weights
4.25 GB
KV cache
2.00 GB
Activations
8.40 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~82 ms (instant)
#8 Qwen 3 4B
4B
qwen
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 15.5 GB · TTFT: instant
ollama run qwen3:4b
274
tok/s
E
Weights
4.25 GB
KV cache
2.00 GB
Activations
8.40 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~82 ms (instant)
#9 Gemma 3 4B
4B
gemma
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 15.5 GB · TTFT: instant
ollama run gemma3:4b
274
tok/s
E
Weights
4.25 GB
KV cache
2.00 GB
Activations
8.40 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~82 ms (instant)
#10 Llama 3.1 Nemotron Nano 8B
8B
llama
Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 19.1 GB · Headroom: 12.9 GB · TTFT: fast
241
tok/s
E
Weights
4.83 GB
KV cache
4.00 GB
Activations
8.43 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~164 ms (fast)
#11 Mistral 7B Instruct v0.3
7B
mistral
Commercial OK
Quant: Q5_K_M · Context: 8,192 · VRAM: 18.5 GB · Headroom: 13.5 GB · TTFT: fast
ollama run mistral:7b
242
tok/s
E
Weights
4.81 GB
KV cache
3.50 GB
Activations
8.43 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~143 ms (fast)
#12 CodeGemma 7B
7B
gemma
Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 17.9 GB · Headroom: 14.1 GB · TTFT: fast
ollama run codegemma:7b
276
tok/s
E
Weights
4.23 GB
KV cache
3.50 GB
Activations
8.40 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~143 ms (fast)

Runs with tradeoffs
18 models

Tight VRAM, partial CPU offload, or context-limited.

Llama 3.3 70B Instruct
70B
llama
Commercial OK
Quant: Q5_K_M · Context: 8,192 · VRAM: 63.2 GB · Headroom: 7.2 GB · TTFT: noticeable
  • Partial CPU offload: ~49% of layers run on CPU
  • CPU is the bottleneck — upgrading RAM bandwidth helps more than VRAM here
ollama run llama3.3:70b
1
tok/s
E
Weights
48.13 GB
KV cache
2.68 GB
Activations
10.60 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~1434 ms (noticeable)
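The offload notes above map onto llama.cpp's `--n-gpu-layers` (`-ngl`) flag, which Ollama exposes as the `num_gpu` parameter: it sets how many transformer layers stay in VRAM while the rest run on the CPU. A rough way to size it, assuming weight memory splits evenly across layers (the layer count and sizes below are illustrative, not this model's exact figures):

```python
def gpu_layers(total_layers: int, weights_gb: float, vram_budget_gb: float) -> int:
    """Estimate how many layers fit on the GPU, assuming weight
    memory is spread evenly across the transformer layers."""
    per_layer_gb = weights_gb / total_layers
    return min(total_layers, int(vram_budget_gb / per_layer_gb))

# Illustrative: an 80-layer 70B model with ~48 GB of quantized
# weights, with ~24 GB of a 32 GB card left for weights after KV
# cache, activations, and runtime overhead (assumed figures).
ngl = gpu_layers(80, 48.0, 24.0)
print(ngl)           # layers kept on the GPU
print(1 - ngl / 80)  # fraction offloaded to CPU
```

A value like this would then be passed as, e.g., `llama-cli -m model.gguf -ngl 40`, or set with `PARAMETER num_gpu 40` in an Ollama Modelfile.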
Qwen 3 32B
32B
qwen
Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 28.1 GB · Headroom: 3.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 3.9 GB headroom left for context growth
ollama run qwen3:32b
60
tok/s
E
Weights
19.32 GB
KV cache
4.00 GB
Activations
3.01 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~655 ms (noticeable)
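The "headroom left for context growth" warnings come down to the KV cache, which grows linearly with context length. A sketch of the standard grouped-query-attention KV-cache size formula (the architecture numbers below are illustrative assumptions, not Qwen 3 32B's exact configuration):

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elt: int = 2) -> float:
    """KV cache size: keys + values (factor of 2) for every layer,
    KV head, head dimension, and context position, at the given
    element width (2 bytes = fp16)."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elt / 1024**3

# Illustrative 64-layer model with 8 KV heads of dimension 128:
print(round(kv_cache_gb(64, 8, 128, 2_048), 2))   # at 2K context
print(round(kv_cache_gb(64, 8, 128, 32_768), 2))  # 16x larger at 32K
```

Stretching the context 16x multiplies the cache by 16, which is what eats the few GB of headroom these tight-fit cards have left.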
DeepSeek R1 Distill Llama 70B
70B
deepseek
Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 57.0 GB · Headroom: 13.4 GB · TTFT: noticeable
  • Partial CPU offload: ~44% of layers run on CPU
  • CPU is the bottleneck — upgrading RAM bandwidth helps more than VRAM here
ollama run deepseek-r1:70b
2
tok/s
E
Weights
42.26 GB
KV cache
8.75 GB
Activations
4.16 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~1434 ms (noticeable)
DeepSeek R1 Distill Qwen 32B
32B
deepseek
Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 28.1 GB · Headroom: 3.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 3.9 GB headroom left for context growth
ollama run deepseek-r1:32b
60
tok/s
E
Weights
19.32 GB
KV cache
4.00 GB
Activations
3.01 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~655 ms (noticeable)
Llama 3.1 70B Instruct
70B
llama
Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 57.0 GB · Headroom: 13.4 GB · TTFT: noticeable
  • Partial CPU offload: ~44% of layers run on CPU
  • CPU is the bottleneck — upgrading RAM bandwidth helps more than VRAM here
ollama run llama3.1:70b
2
tok/s
E
Weights
42.26 GB
KV cache
8.75 GB
Activations
4.16 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~1434 ms (noticeable)
Mistral Nemo 12B Instruct
12B
mistral
Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 29.4 GB · Headroom: 2.6 GB · TTFT: fast
  • Tight VRAM fit — only 2.6 GB headroom left for context growth
ollama run mistral-nemo:12b
91
tok/s
E
Weights
12.75 GB
KV cache
6.00 GB
Activations
8.83 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~246 ms (fast)
Qwen 2.5 32B Instruct
32B
qwen
Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 28.1 GB · Headroom: 3.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 3.9 GB headroom left for context growth
ollama run qwen2.5:32b
60
tok/s
E
Weights
19.32 GB
KV cache
4.00 GB
Activations
3.01 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~655 ms (noticeable)
QwQ 32B Preview
32B
qwen
Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 28.1 GB · Headroom: 3.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 3.9 GB headroom left for context growth
ollama run qwq:32b
60
tok/s
E
Weights
19.32 GB
KV cache
4.00 GB
Activations
3.01 GB
Runtime
1.80 GB
Time to first token (prefill, 512-token prompt): ~655 ms (noticeable)

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM

~$80–150

Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.

Unlocks: 21 new tradeoff-tier models

  • Llama 4 Scout
  • Llama 3.3 70B Instruct
  • Qwen 3 32B
  • DeepSeek R1 Distill Llama 70B

Upgrade to NVIDIA A100 40GB

see current pricing

40 GB VRAM (vs your 32 GB). Note that memory bandwidth is actually slightly lower: ~1555 GB/s on the A100 40GB vs ~1792 GB/s on the RTX 5090, so the win here is capacity, not speed.

Unlocks: 11 new comfortable-tier models

  • DeepSeek Coder V2 Lite (16B)
  • Mistral Nemo 12B Instruct
  • Gemma 3 12B
  • Pixtral 12B

Add a second NVIDIA GeForce RTX 5090

~$2499

Tensor parallelism splits the model across both cards, roughly doubling usable VRAM. Throughput doesn't double, though: inter-GPU synchronization overhead means you see ~1.5× the single-card speed in practice.

Unlocks: 13 new comfortable-tier models

  • DeepSeek Coder V2 Lite (16B)
  • Mistral Nemo 12B Instruct
  • Gemma 3 12B
  • Pixtral 12B

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

Won't run
top 5 popular models

Need more memory than you have. Shown for orientation.

Qwen 3 235B-A22B
235B
qwen
Commercial OK

Even with CPU offload, needs more memory than your VRAM (32 GB) + 60% of system RAM (38 GB) combined.

Llama 4 Scout
109B
llama
Commercial OK

Even with CPU offload, needs more memory than your VRAM (32 GB) + 60% of system RAM (38 GB) combined.

DeepSeek R1 (671B reasoning)
671B
deepseek
Commercial OK

Even with CPU offload, needs more memory than your VRAM (32 GB) + 60% of system RAM (38 GB) combined.

GLM-5
200B
other
Commercial OK

Even with CPU offload, needs more memory than your VRAM (32 GB) + 60% of system RAM (38 GB) combined.

DeepSeek V3 (671B MoE)
671B
deepseek
Commercial OK

Even with CPU offload, needs more memory than your VRAM (32 GB) + 60% of system RAM (38 GB) combined.
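The cutoff in this section is easy to reproduce: the engine's ceiling is all of your VRAM plus 60% of system RAM (the cards above round 38.4 GB to 38 GB). A minimal sketch of that check; the bits-per-weight figure is an assumed value for a Q4-class quant, not this site's exact number:

```python
def offload_budget_gb(vram_gb: float, ram_gb: float, ram_fraction: float = 0.6) -> float:
    """Memory ceiling with CPU offload: all of VRAM plus the
    fraction of system RAM the runner is willing to use."""
    return vram_gb + ram_fraction * ram_gb

# This build: 32 GB VRAM + 60% of 64 GB system RAM.
budget = offload_budget_gb(32, 64)
print(round(budget, 1))  # 70.4

# Weights alone for a 235B model at an assumed ~4.8 bits/weight
# (roughly Q4_K_M-class) already exceed that ceiling:
weights_gb = 235e9 * 4.8 / 8 / 1024**3
print(weights_gb > budget)  # True
```

The same arithmetic explains the +32 GB RAM upgrade scenario: more system RAM raises the ceiling, which is why it moves models into the tradeoff tier rather than the comfortable one.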

How to read these numbers

M
Measured — we ran this exact combo on owner hardware.

~
Extrapolated — predicted from a measured benchmark on similar-bandwidth hardware.

E
Estimated — pure formula based on VRAM bandwidth and model architecture.
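For the "E" rows, the standard estimation logic is a memory roofline: during decode, every weight byte streams from VRAM once per generated token, so throughput is bounded by bandwidth divided by weight size. A simplified sketch (the 0.65 efficiency factor is an illustrative assumption, not this site's published calibration):

```python
def est_decode_tok_s(bandwidth_gb_s: float, weights_gb: float,
                     efficiency: float = 0.65) -> float:
    """Bandwidth-bound decode speed: each token reads all weight
    bytes from VRAM once, so tok/s <= bandwidth / weight size,
    scaled down by a real-world efficiency factor."""
    return efficiency * bandwidth_gb_s / weights_gb

# Illustrative: a ~1792 GB/s card on 4.25 GB of quantized weights.
print(round(est_decode_tok_s(1792, 4.25)))  # 274
```

With the assumed 0.65 factor this lands close to several of the 4B cards above, but treat it as a back-of-envelope bound, not the engine's exact formula.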

Full methodology →

Want a specific benchmark we don't have? Email benchmarks@runlocalai.co and we'll prioritize it.