RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP·Fredoline Eruo


What can NVIDIA GeForce RTX 4080 Super run?

Build: NVIDIA GeForce RTX 4080 Super + — + 32 GB RAM (Windows)

Memory: 16 GB VRAM + 32 GB system RAM
Runner: llama.cpp / Ollama (CUDA)

Runs comfortably
53 models

Full-VRAM resident, with room for context. No compromises.

#1 · Gemma 3 1B · 1B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 11.1 GB · Headroom: 4.9 GB · TTFT: instant
ollama run gemma3:1b
792 tok/s (E)
Weights 0.60 GB · KV cache 0.50 GB · Activations 8.22 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~49 ms (instant)
Model details → · Run on benchmark page →
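The VRAM figure on each card is just the sum of the four components listed beneath it, and headroom is what remains of the card's 16 GB. A minimal sketch of that arithmetic (function names are ours, not the site's engine):

```python
def vram_total_gb(weights: float, kv_cache: float,
                  activations: float, runtime: float = 1.80) -> float:
    """Sum the four memory components shown on each card, in GB."""
    return weights + kv_cache + activations + runtime

def headroom_gb(total: float, vram_gb: float = 16.0) -> float:
    """Free VRAM left after loading, against a 16 GB card."""
    return vram_gb - total

# Gemma 3 1B card above: 0.60 + 0.50 + 8.22 + 1.80
total = vram_total_gb(0.60, 0.50, 8.22)
print(round(total, 1))               # → 11.1, matching the card
print(round(headroom_gb(total), 1))  # → 4.9 GB of headroom
```

The same sum reproduces every card on this page, which is why a bigger context window (larger KV cache) eats directly into headroom.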
#2 · Llama 3.2 1B Instruct · 1B · llama · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 11.6 GB · Headroom: 4.4 GB · TTFT: instant
ollama run llama3.2:1b
450 tok/s (E)
Weights 1.06 GB · KV cache 0.50 GB · Activations 8.25 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~49 ms (instant)
Model details → · Run on benchmark page →
#3 · DeepSeek R1 Distill Qwen 7B · 7B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.2 GB · Headroom: 6.8 GB · TTFT: fast
ollama run deepseek-r1:7b
113 tok/s (E)
Weights 4.23 GB · KV cache 0.88 GB · Activations 2.26 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~343 ms (fast)
Model details → · Run on benchmark page →
#4 · Llama 3.1 8B Instruct · 8B · llama · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.2 GB · Headroom: 6.8 GB · TTFT: fast
ollama run llama3.1:8b
99 tok/s (E)
Weights 4.83 GB · KV cache 0.27 GB · Activations 2.29 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~392 ms (fast)
Model details → · Run on benchmark page →
#5 · Qwen 3 8B · 8B · qwen · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.9 GB · Headroom: 6.1 GB · TTFT: fast
ollama run qwen3:8b
99 tok/s (E)
Weights 4.83 GB · KV cache 1.00 GB · Activations 2.29 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~392 ms (fast)
Model details → · Run on benchmark page →
#6 · Mistral 7B Instruct v0.3 · 7B · mistral · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.2 GB · Headroom: 6.8 GB · TTFT: fast
ollama run mistral:7b
113 tok/s (E)
Weights 4.23 GB · KV cache 0.88 GB · Activations 2.26 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~343 ms (fast)
Model details → · Run on benchmark page →
#7 · Hermes 3 Llama 3.1 8B · 8B · hermes · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.9 GB · Headroom: 6.1 GB · TTFT: fast
ollama run hermes3:8b
99 tok/s (E)
Weights 4.83 GB · KV cache 1.00 GB · Activations 2.29 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~392 ms (fast)
Model details → · Run on benchmark page →
#8 · Llama 3.1 Nemotron Nano 8B · 8B · llama · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.9 GB · Headroom: 6.1 GB · TTFT: fast
99 tok/s (E)
Weights 4.83 GB · KV cache 1.00 GB · Activations 2.29 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~392 ms (fast)
Model details → · Run on benchmark page →
#9 · CodeGemma 7B · 7B · gemma · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.2 GB · Headroom: 6.8 GB · TTFT: fast
ollama run codegemma:7b
113 tok/s (E)
Weights 4.23 GB · KV cache 0.88 GB · Activations 2.26 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~343 ms (fast)
Model details → · Run on benchmark page →
#10 · Gemma 2 9B Instruct · 9B · gemma · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 10.7 GB · Headroom: 5.3 GB · TTFT: fast
ollama run gemma2:9b
88 tok/s (E)
Weights 5.43 GB · KV cache 1.13 GB · Activations 2.32 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~441 ms (fast)
Model details → · Run on benchmark page →
#11 · Moondream 2 · 1.9B · other · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 5.3 GB · Headroom: 10.7 GB · TTFT: instant
417 tok/s (E)
Weights 1.15 GB · KV cache 0.24 GB · Activations 2.11 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~93 ms (instant)
Model details → · Run on benchmark page →
#12 · StarCoder 2 7B · 7B · other · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 9.2 GB · Headroom: 6.8 GB · TTFT: fast
113 tok/s (E)
Weights 4.23 GB · KV cache 0.88 GB · Activations 2.26 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~343 ms (fast)
Model details → · Run on benchmark page →
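A useful cross-check on the KV-cache column: for grouped-query-attention transformers it follows the textbook formula 2 (K and V) × layers × KV heads × head dim × context length × bytes per element. With Llama 3.1 8B's published architecture (32 layers, 8 KV heads, head dim 128) at FP16, that reproduces the 0.27 GB on its card; other cards may rest on different per-model assumptions, so treat this as a sketch rather than the site's engine:

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx_len: int, bytes_per_elem: int = 2) -> float:
    """FP16 KV cache: K and V per layer, each n_kv_heads * head_dim wide,
    stored for every context token."""
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
    return total_bytes / 1e9  # decimal GB, as the cards appear to use

# Llama 3.1 8B: 32 layers, 8 KV heads (GQA), head_dim 128, at 2,048 context
print(round(kv_cache_gb(32, 8, 128, 2048), 2))  # → 0.27, matching the card
```

The same formula explains why GQA models (few KV heads) have far smaller caches than their parameter counts suggest, and why doubling context doubles this component alone.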

Runs with tradeoffs
73 models

Tight VRAM, partial CPU offload, or context-limited.

Qwen 3 30B-A3B · 30B · qwen · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 26.6 GB · Headroom: 8.6 GB · TTFT: noticeable
  • Partial CPU offload: ~40% of layers run on CPU
ollama run qwen3:30b
26 tok/s (E)
Weights 18.11 GB · KV cache 3.75 GB · Activations 2.95 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1,471 ms (noticeable)
Model details → · Run on benchmark page →
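The offload percentage and the headroom figure in this section follow from two simple ratios: the share of the model's footprint that exceeds 16 GB of VRAM, and the distance to the combined budget of VRAM plus 60% of system RAM. A sketch of that reading (our reconstruction from the cards, not the engine itself):

```python
def cpu_offload_fraction(required_gb: float, vram_gb: float = 16.0) -> float:
    """Share of layers pushed to CPU when the model exceeds VRAM."""
    return max(0.0, (required_gb - vram_gb) / required_gb)

def offload_headroom_gb(required_gb: float, vram_gb: float = 16.0,
                        ram_gb: float = 32.0, ram_share: float = 0.6) -> float:
    """Headroom against the combined budget: all VRAM plus 60% of system RAM."""
    return vram_gb + ram_share * ram_gb - required_gb

# Qwen 3 30B-A3B card above: 26.6 GB required
print(round(100 * cpu_offload_fraction(26.6)))  # → 40 (% of layers on CPU)
print(round(offload_headroom_gb(26.6), 1))      # → 8.6 GB of headroom
```

The same two formulas reproduce the ~51%/2.8 GB pair on the Qwen 2.5 Coder 32B card, which is why tokens-per-second collapses as the offloaded share grows: every offloaded layer runs at system-RAM bandwidth.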
Qwen 2.5 Coder 32B Instruct · 32B · qwen · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 32.4 GB · Headroom: 2.8 GB · TTFT: noticeable
  • Partial CPU offload: ~51% of layers run on CPU
ollama run qwen2.5-coder:32b
25 tok/s (E)
Weights 19.32 GB · KV cache 2.15 GB · Activations 9.16 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1,569 ms (noticeable)
Model details → · Run on benchmark page →
Gemma 4 31B Dense · 31B · gemma · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 27.4 GB · Headroom: 7.8 GB · TTFT: noticeable
  • Partial CPU offload: ~42% of layers run on CPU
ollama run gemma4:31b
26 tok/s (E)
Weights 18.72 GB · KV cache 3.88 GB · Activations 2.98 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1,520 ms (noticeable)
Model details → · Run on benchmark page →
Qwen 3 32B · 32B · qwen · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 28.1 GB · Headroom: 7.1 GB · TTFT: noticeable
  • Partial CPU offload: ~43% of layers run on CPU
ollama run qwen3:32b
25 tok/s (E)
Weights 19.32 GB · KV cache 4.00 GB · Activations 3.01 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1,569 ms (noticeable)
Model details → · Run on benchmark page →
DeepSeek R1 Distill Qwen 32B · 32B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 28.1 GB · Headroom: 7.1 GB · TTFT: noticeable
  • Partial CPU offload: ~43% of layers run on CPU
ollama run deepseek-r1:32b
25 tok/s (E)
Weights 19.32 GB · KV cache 4.00 GB · Activations 3.01 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1,569 ms (noticeable)
Model details → · Run on benchmark page →
Llama 3.2 3B Instruct · 3B · llama · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 14.8 GB · Headroom: 1.2 GB · TTFT: fast
  • Tight VRAM fit: only 1.2 GB of headroom left for context growth
ollama run llama3.2:3b
150 tok/s (E)
Weights 3.19 GB · KV cache 1.50 GB · Activations 8.35 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~147 ms (fast)
Model details → · Run on benchmark page →
Gemma 4 26B MoE · 26B · gemma · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 23.6 GB · Headroom: 11.6 GB · TTFT: noticeable
  • Partial CPU offload: ~32% of layers run on CPU
ollama run gemma4:26b-moe
30 tok/s (E)
Weights 15.70 GB · KV cache 3.25 GB · Activations 2.83 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1,275 ms (noticeable)
Model details → · Run on benchmark page →
Qwen 3 14B · 14B · qwen · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 14.5 GB · Headroom: 1.5 GB · TTFT: noticeable
  • Tight VRAM fit: only 1.5 GB of headroom left for context growth
ollama run qwen3:14b
57 tok/s (E)
Weights 8.45 GB · KV cache 1.75 GB · Activations 2.47 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~687 ms (noticeable)
Model details → · Run on benchmark page →

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM

~$80–150

Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.

Unlocks: 82 new tradeoff models

  • Qwen 3 30B-A3B
  • Qwen 2.5 Coder 32B Instruct
  • Llama 3.3 70B Instruct
  • Gemma 4 31B Dense
Shop this upgrade↗

Upgrade to NVIDIA GeForce RTX 3090 Ti

~$1199

24 GB VRAM (vs your 16 GB) plus a bandwidth jump from ~736 GB/s to ~? GB/s.

Unlocks: 39 new comfortable models

  • Gemma 4 E2B (Effective 2B)
  • Llama 3.2 3B Instruct
  • Phi-3.5 Vision
  • Phi-3.5 Mini Instruct
Shop this upgrade↗

Add a second NVIDIA GeForce RTX 4080 Super

~$1099

Tensor parallelism splits the model across both cards, effectively doubling usable VRAM. Bandwidth doesn't double, though; expect roughly 1.5× single-card speed in practice.

Unlocks: 56 new comfortable models

  • Gemma 4 E2B (Effective 2B)
  • Llama 3.2 3B Instruct
  • Phi-3.5 Vision
  • Phi-3.5 Mini Instruct
Shop this upgrade↗

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

Won't run
top 5 popular models

Need more memory than you have. Shown for orientation.

DeepSeek V4 Pro (1.6T MoE)
1600B
deepseek
Commercial OK

Even with CPU offload, needs more memory than your VRAM (16 GB) + 60% of system RAM (19 GB) combined.

—
Qwen 3.5 235B-A17B (MoE)
397B
qwen
Commercial OK

Even with CPU offload, needs more memory than your VRAM (16 GB) + 60% of system RAM (19 GB) combined.

—
Qwen 3 235B-A22B
235B
qwen
Commercial OK

Even with CPU offload, needs more memory than your VRAM (16 GB) + 60% of system RAM (19 GB) combined.

—
DeepSeek V4 Flash (284B MoE)
284B
deepseek
Commercial OK

Even with CPU offload, needs more memory than your VRAM (16 GB) + 60% of system RAM (19 GB) combined.

—
Llama 4 Scout
109B
llama
Commercial OK

Even with CPU offload, needs more memory than your VRAM (16 GB) + 60% of system RAM (19 GB) combined.

—
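Every verdict on this page reduces to one threshold: total footprint versus VRAM plus 60% of system RAM (16 + 19.2 ≈ 35 GB for this build). A simplified classifier in that spirit (the real engine also weighs headroom, context, and speed, so treat this as an approximation):

```python
def memory_budget_gb(vram_gb: float = 16.0, ram_gb: float = 32.0,
                     ram_share: float = 0.6) -> float:
    """Usable memory with CPU offload: all VRAM plus 60% of system RAM."""
    return vram_gb + ram_share * ram_gb

def verdict(required_gb: float, vram_gb: float = 16.0,
            ram_gb: float = 32.0) -> str:
    """Rough three-way classification mirroring the page's sections."""
    if required_gb <= vram_gb:
        return "runs comfortably"     # fully VRAM-resident
    if required_gb <= memory_budget_gb(vram_gb, ram_gb):
        return "runs with tradeoffs"  # partial CPU offload
    return "won't run"                # exceeds VRAM + 60% RAM

print(round(memory_budget_gb(), 1))  # → 35.2
print(verdict(11.1))                 # Gemma 3 1B: runs comfortably
print(verdict(26.6))                 # Qwen 3 30B-A3B: runs with tradeoffs
```

The same budget function explains the upgrade scenarios: adding 32 GB of RAM raises the cutoff to 16 + 0.6 × 64 ≈ 54 GB without touching the comfortable tier, while more VRAM moves models into the comfortable tier directly.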

How to read these numbers

M
Measured — we ran this exact combo on owner hardware.

~
Extrapolated — predicted from a measured benchmark on similar-bandwidth hardware.

E
Estimated — pure formula based on VRAM bandwidth and model architecture.

Full methodology →
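For the E rows on this page, the tok/s and TTFT columns are consistent with a simple bandwidth model: decode streams the full weights from VRAM once per token, and prefill time grows roughly linearly with parameter count. The constants below (≈0.65 efficiency against the 4080 Super's ~736 GB/s, and ≈49 ms per billion parameters for a 512-token prompt) are back-fitted from the cards above, not taken from the published methodology:

```python
def est_decode_tok_s(weights_gb: float, bandwidth_gb_s: float = 736.0,
                     efficiency: float = 0.65) -> float:
    """Memory-bound decode: each generated token streams the full weights once."""
    return efficiency * bandwidth_gb_s / weights_gb

def est_ttft_ms(params_b: float, ms_per_b: float = 49.0) -> float:
    """Prefill for a 512-token prompt, roughly linear in parameter count."""
    return params_b * ms_per_b

# Mistral 7B Instruct Q4_K_M: 4.23 GB of weights, 7B parameters
print(round(est_decode_tok_s(4.23)))  # → 113 tok/s, matching the card
print(round(est_ttft_ms(7)))          # → 343 ms, matching the card
```

This is also why a heavier quant (Q8_0 vs Q4_K_M) roughly halves decode speed at the same parameter count: the weights to stream per token double while bandwidth stays fixed.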

Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.