
What can NVIDIA RTX 2080 Ti 22GB (China-mod) run for chat?

Build: NVIDIA RTX 2080 Ti 22GB (China-mod) + — + 32 GB RAM (Windows)

Memory: 22 GB VRAM + 32 GB system RAM
Runner: llama.cpp / Ollama (CUDA)

Runs comfortably (71 models; top 12 shown below)

Ranked by fit for the chat use case plus predicted speed. Each entry includes a VRAM breakdown.
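The headline VRAM number on each card is the sum of the four breakdown components. A minimal sketch of that arithmetic in Python, using the #1 card's figures (the 22 GB budget is this build's VRAM; treating "fits" as total ≤ VRAM is our reading of the page, not a quoted spec):

# Reproduce a card's headline VRAM figure from its breakdown:
# total = weights + KV cache + activations + runtime.
weights_gb     = 3.19   # Q8_0 weights, Llama 3.2 3B Instruct (#1 card)
kv_cache_gb    = 1.50   # KV cache at 8,192-token context
activations_gb = 8.35   # activation buffers
runtime_gb     = 1.80   # runner/CUDA overhead

total_gb = weights_gb + kv_cache_gb + activations_gb + runtime_gb
headroom_gb = 22.0 - total_gb        # 22 GB card
print(f"total {total_gb:.1f} GB, headroom {headroom_gb:.1f} GB")
# -> total 14.8 GB, headroom 7.2 GB (matches the card)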

#1 Llama 3.2 3B Instruct · 3B · llama · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 14.8 GB · Headroom: 7.2 GB · TTFT: fast
ollama run llama3.2:3b
126 tok/s (E)
Weights 3.19 GB · KV cache 1.50 GB · Activations 8.35 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~286 ms (fast)
Model details → · Run on benchmark page →

#2 Dolphin 3.0 Llama 3.2 3B · 3B · dolphin · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.4 GB · Headroom: 8.6 GB · TTFT: fast
221 tok/s (E)
Weights 1.81 GB · KV cache 1.50 GB · Activations 8.28 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~286 ms (fast)
Model details → · Run on benchmark page →

#3 Hermes 3 Llama 3.2 3B · 3B · hermes · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.4 GB · Headroom: 8.6 GB · TTFT: fast
221 tok/s (E)
Weights 1.81 GB · KV cache 1.50 GB · Activations 8.28 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~286 ms (fast)
Model details → · Run on benchmark page →

#4 Granite 3.0 2B Instruct · 2B · granite · Commercial OK
Quant: Q4_K_M · Context: 4,096 · VRAM: 7.7 GB · Headroom: 14.3 GB · TTFT: fast
332 tok/s (E)
Weights 1.21 GB · KV cache 0.50 GB · Activations 4.16 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~190 ms (fast)
Model details → · Run on benchmark page →

#5 Qwen 3 7B · 7B · qwen · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 17.9 GB · Headroom: 4.1 GB · TTFT: noticeable
95 tok/s (E)
Weights 4.23 GB · KV cache 3.50 GB · Activations 8.40 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~666 ms (noticeable)
Model details → · Run on benchmark page →

#6 InternLM 2.5 7B Chat · 7B · internlm · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 17.9 GB · Headroom: 4.1 GB · TTFT: noticeable
95 tok/s (E)
Weights 4.23 GB · KV cache 3.50 GB · Activations 8.40 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~666 ms (noticeable)
Model details → · Run on benchmark page →

#7 Falcon 3 7B Instruct · 7B · falcon · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 17.9 GB · Headroom: 4.1 GB · TTFT: noticeable
95 tok/s (E)
Weights 4.23 GB · KV cache 3.50 GB · Activations 8.40 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~666 ms (noticeable)
Model details → · Run on benchmark page →

#8 Granite 3.0 8B Instruct · 8B · granite · Commercial OK
Quant: Q4_K_M · Context: 4,096 · VRAM: 13.0 GB · Headroom: 9.0 GB · TTFT: noticeable
83 tok/s (E)
Weights 4.83 GB · KV cache 2.00 GB · Activations 4.34 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~761 ms (noticeable)
Model details → · Run on benchmark page →

#9 Mistral Nemo 12B Instruct · 12B · mistral · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 13.0 GB · Headroom: 9.0 GB · TTFT: noticeable
ollama run mistral-nemo:12b
55 tok/s (E)
Weights 7.25 GB · KV cache 1.50 GB · Activations 2.41 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1142 ms (noticeable)
Model details → · Run on benchmark page →

#10 Gemma 3 12B · 12B · gemma · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 13.0 GB · Headroom: 9.0 GB · TTFT: noticeable
ollama run gemma3:12b
55 tok/s (E)
Weights 7.25 GB · KV cache 1.50 GB · Activations 2.41 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1142 ms (noticeable)
Model details → · Run on benchmark page →

#11 Stable LM 2 12B · 12B · other
Quant: Q4_K_M · Context: 4,096 · VRAM: 16.5 GB · Headroom: 5.5 GB · TTFT: noticeable
55 tok/s (E)
Weights 7.25 GB · KV cache 3.00 GB · Activations 4.46 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1142 ms (noticeable)
Model details → · Run on benchmark page →

#12 Qwen 3 14B · 14B · qwen · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 14.5 GB · Headroom: 7.5 GB · TTFT: noticeable
ollama run qwen3:14b
47 tok/s (E)
Weights 8.45 GB · KV cache 1.75 GB · Activations 2.47 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~1332 ms (noticeable)
Model details → · Run on benchmark page →
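The TTFT figures above are consistent with a simple linear model, TTFT ≈ prompt_tokens / prefill_rate. A back-of-envelope sketch that backs the implied prefill rate out of the #1 card and projects longer prompts (linear scaling is an assumption; real prefill throughput drops somewhat at long context):

# Back out the prefill rate implied by the #1 card's TTFT, then project
# TTFT for longer prompts.
prompt_tokens = 512
ttft_s = 0.286                          # ~286 ms from the card
prefill_tok_s = prompt_tokens / ttft_s  # ~1790 tok/s implied
for n in (512, 2048, 8192):
    print(f"{n:>5}-token prompt -> ~{n / prefill_tok_s * 1000:.0f} ms")
# 512 -> ~286 ms, 2048 -> ~1144 ms, 8192 -> ~4576 ms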

Runs with tradeoffs (55 models; first 8 shown below)

Tight VRAM, partial CPU offload, or context-limited. (A sketch of controlling CPU offload follows this list.)

Mistral 7B Instruct v0.3 · 7B · mistral · Commercial OK
Quant: Q5_K_M · Context: 8,192 · VRAM: 18.5 GB · Headroom: 3.5 GB · TTFT: noticeable
  • Tight VRAM fit — only 3.5 GB headroom left for context growth
ollama run mistral:7b
83 tok/s (E)
Weights 4.81 GB · KV cache 3.50 GB · Activations 8.43 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~666 ms (noticeable)
Model details → · Run on benchmark page →

Qwen 3 8B · 8B · qwen · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 19.1 GB · Headroom: 2.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 2.9 GB headroom left for context growth
ollama run qwen3:8b
83 tok/s (E)
Weights 4.83 GB · KV cache 4.00 GB · Activations 8.43 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~761 ms (noticeable)
Model details → · Run on benchmark page →

Llama 3.3 8B Instruct · 8B · llama · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 19.1 GB · Headroom: 2.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 2.9 GB headroom left for context growth
83 tok/s (E)
Weights 4.83 GB · KV cache 4.00 GB · Activations 8.43 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~761 ms (noticeable)
Model details → · Run on benchmark page →

Tulu 3 8B · 8B · other · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 19.1 GB · Headroom: 2.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 2.9 GB headroom left for context growth
83 tok/s (E)
Weights 4.83 GB · KV cache 4.00 GB · Activations 8.43 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~761 ms (noticeable)
Model details → · Run on benchmark page →

Ministral 8B Instruct · 8B · mistral
Quant: Q4_K_M · Context: 8,192 · VRAM: 19.1 GB · Headroom: 2.9 GB · TTFT: noticeable
  • Tight VRAM fit — only 2.9 GB headroom left for context growth
83 tok/s (E)
Weights 4.83 GB · KV cache 4.00 GB · Activations 8.43 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~761 ms (noticeable)
Model details → · Run on benchmark page →

Gemma 2 9B Instruct · 9B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 20.2 GB · Headroom: 1.8 GB · TTFT: noticeable
  • Tight VRAM fit — only 1.8 GB headroom left for context growth
ollama run gemma2:9b
74 tok/s (E)
Weights 5.43 GB · KV cache 4.50 GB · Activations 8.46 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~857 ms (noticeable)
Model details → · Run on benchmark page →

Llama 3.1 8B Instruct · 8B · llama · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 20.0 GB · Headroom: 2.0 GB · TTFT: noticeable
  • Tight VRAM fit — only 2.0 GB headroom left for context growth
ollama run llama3.1:8b
47 tok/s (E)
Weights 8.50 GB · KV cache 1.07 GB · Activations 8.62 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~761 ms (noticeable)
Model details → · Run on benchmark page →

Mistral Small 3 24B · 24B · mistral · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 37.2 GB · Headroom: 4.0 GB · TTFT: slow
  • Partial CPU offload: ~41% of layers run on CPU
ollama run mistral-small:24b
28 tok/s (E)
Weights 14.49 GB · KV cache 12.00 GB · Activations 8.92 GB · Runtime 1.80 GB
Time to first token (prefill, 512-token prompt): ~2284 ms (slow)
Model details → · Run on benchmark page →
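For entries like Mistral Small 3 24B, the runner keeps only some layers on the GPU and runs the rest on the CPU from system RAM. A minimal sketch of steering that split with llama-cpp-python (the model path is hypothetical and the layer count illustrative; pick it so the on-GPU share fits in 22 GB). The equivalent knobs are --n-gpu-layers in the llama.cpp CLI and the num_gpu option in Ollama.

# Partial CPU offload: n_gpu_layers caps how many transformer layers live
# in VRAM; the rest run on the CPU (slower, but it fits).
# Assumes llama-cpp-python installed with a CUDA build.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-small-24b-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=24,  # illustrative; raise/lower until VRAM use fits 22 GB
    n_ctx=8192,       # same context as the card above
)
out = llm("Explain the KV cache in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])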

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM

~$80–150

Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.

Unlocks: 1 new comfortable fit, 72 new tradeoff fits

  • SmolLM 2 360M Instruct
  • Llama 3.1 8B Instruct
  • Qwen 3 30B-A3B
  • Qwen 2.5 Coder 32B Instruct
Shop this upgrade ↗

Upgrade to NVIDIA GeForce RTX 5090 Mobile

see current pricing

24 GB VRAM (vs your 22 GB) plus a bandwidth jump from ~616 GB/s to ~896 GB/s.

Unlocks: 18 new comfortable fits

  • Llama 3.1 Nemotron Nano 8B
  • Mistral 7B Instruct v0.3
  • Qwen 2.5 7B Instruct
  • Llama 3.1 8B Instruct
Shop this upgrade ↗

Add a second NVIDIA RTX 2080 Ti 22GB (China-mod)

~$350

Tensor parallelism splits the model across both cards, effectively doubling usable VRAM. Bandwidth doesn't double — expect roughly 1.5× the single-card speed in practice. (See the sketch after this block.)

Unlocks: 54 new comfortable fits

  • Llama 3.1 Nemotron Nano 8B
  • Mistral 7B Instruct v0.3
  • Qwen 2.5 7B Instruct
  • DeepSeek R1 Distill Qwen 7B
Shop this upgrade ↗
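A minimal sketch of the two-card split with llama-cpp-python, assuming a CUDA build with both cards visible (model path hypothetical). Note that llama.cpp splits by layer by default; its row split mode is closer to true tensor parallelism.

# Two-card split: tensor_split gives each GPU's share of the model.
# With two identical 22 GB cards, an even split roughly doubles usable VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-7b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=-1,          # -1 = offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # even share across GPU 0 and GPU 1
    n_ctx=8192,
)
print(llm("Hi!", max_tokens=16)["choices"][0]["text"])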

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

Won't run (top 5 popular models)

These need more memory than you have. Shown for orientation.

Each of these, even with CPU offload, needs more memory than your VRAM (22 GB) + 60% of system RAM (19 GB) combined:

  • DeepSeek V4 Pro (1.6T MoE) · 1600B · deepseek · Commercial OK
  • Qwen 3.5 235B-A17B (MoE) · 397B · qwen · Commercial OK
  • Qwen 3 235B-A22B · 235B · qwen · Commercial OK
  • Llama 4 Scout · 109B · llama · Commercial OK
  • DeepSeek R1 (671B reasoning) · 671B · deepseek · Commercial OK
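The cutoff above is a simple budget rule quoted on each card: VRAM plus 60% of system RAM. A sketch of that rule, including what the +32 GB RAM upgrade does to the budget (treating the budget as a hard cutoff is our reading of the page):

# Fit rule quoted on the cards: usable offload budget = VRAM + 60% of RAM.
def offload_budget_gb(vram_gb: float, ram_gb: float) -> float:
    return vram_gb + 0.60 * ram_gb

print(offload_budget_gb(22, 32))  # 41.2 GB with the current build (22 + 19.2)
print(offload_budget_gb(22, 64))  # 60.4 GB after the +32 GB RAM upgrade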

How to read these numbers

M — Measured: we ran this exact combo on owner hardware.
~ — Extrapolated: predicted from a measured benchmark on similar-bandwidth hardware.
E — Estimated: pure formula based on VRAM bandwidth and model architecture.

Full methodology →
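Per the legend, an "E" speed comes from a pure formula over bandwidth and architecture. A rough decode-speed model in that spirit, tok/s ≈ efficiency × bandwidth ÷ bytes read per token, where the 0.65 efficiency factor is our illustrative assumption, not the site's published constant:

# Memory-bandwidth bound on decode speed: each new token re-reads the
# weights, so tok/s <= bandwidth / bytes_read_per_token.
def est_tok_s(bw_gb_s: float, bytes_read_gb: float,
              efficiency: float = 0.65) -> float:  # efficiency is assumed
    return efficiency * bw_gb_s / bytes_read_gb

# ~616 GB/s (this card, per the upgrade comparison above) over 3.19 GB of
# Q8_0 weights for Llama 3.2 3B:
print(round(est_tok_s(616, 3.19)))  # -> 126, in line with the #1 card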

Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.