RUNLOCALAI

Independently operated catalog for local-AI hardware and software. Hand-written verdicts. Source-cited claims. Reproducible commands when we have them.

OP·Fredoline Eruo
DISCLOSURE

Some links on this site are affiliate links (Amazon Associates and other first-class retailers). When you buy through them, we earn a small commission at no extra cost to you. Affiliate links do not influence our verdicts — there are cards we rate highly that we don't have affiliate relationships with, and cards that sell well that we refuse to recommend. Read more →

© 2026 runlocalai.co · Independently operated

What can the NVIDIA RTX A5000 run for creative work?

Build: NVIDIA RTX A5000 + — + 32 GB RAM (Windows)

Memory: 24 GB VRAM + 32 GB system RAM
Runner: llama.cpp / Ollama (CUDA)

Runs comfortably
77 models

Ranked by fit for creative use case + predicted speed. Click a row for VRAM breakdown.
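Each card's VRAM estimate is simply the sum of the four components listed beneath it (weights, KV cache, activations, runtime), and headroom is what remains of the A5000's 24 GB. A minimal sketch of that arithmetic, using the #1 card's numbers (the function names are ours):

```python
# Each card's VRAM estimate is the sum of its four components; headroom is
# what's left of the A5000's 24 GB. Numbers below are from the #1 card
# (Dolphin 3.0 Llama 3.2 3B at Q4_K_M, 8,192 context).

def vram_estimate_gb(weights, kv_cache, activations, runtime):
    """Total VRAM needed, in GB, as the sum of the card's four components."""
    return weights + kv_cache + activations + runtime

def headroom_gb(total_vram, estimate):
    """VRAM left over once the model is loaded."""
    return total_vram - estimate

est = vram_estimate_gb(weights=1.81, kv_cache=1.50, activations=8.28, runtime=1.80)
print(f"estimate: {est:.1f} GB")                     # 13.4 GB, matching the card
print(f"headroom: {headroom_gb(24.0, est):.1f} GB")  # 10.6 GB
```

The same sum reproduces every card on this page, so a quick way to sanity-check a quant choice is to add its four components yourself.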

#1 Dolphin 3.0 Llama 3.2 3B
3B · dolphin · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.4 GB · Headroom: 10.6 GB
276 tok/s (E)
Weights: 1.81 GB · KV cache: 1.50 GB · Activations: 8.28 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#2 Hermes 3 Llama 3.2 3B
3B · hermes · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.4 GB · Headroom: 10.6 GB
276 tok/s (E)
Weights: 1.81 GB · KV cache: 1.50 GB · Activations: 8.28 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#3 Gemma 4 E2B (Effective 2B)
2B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 13.2 GB · Headroom: 10.8 GB
ollama run gemma4:e2b
235 tok/s (E)
Weights: 2.13 GB · KV cache: 1.00 GB · Activations: 8.30 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#4 CodeGemma 7B
7B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 17.9 GB · Headroom: 6.1 GB
ollama run codegemma:7b
118 tok/s (E)
Weights: 4.23 GB · KV cache: 3.50 GB · Activations: 8.40 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#5 Gemma 4 E4B (Effective 4B)
4B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 7.5 GB
ollama run gemma4:e4b
117 tok/s (E)
Weights: 4.25 GB · KV cache: 2.00 GB · Activations: 8.40 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#6 Gemma 3 4B
4B · gemma · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.5 GB · Headroom: 7.5 GB
ollama run gemma3:4b
117 tok/s (E)
Weights: 4.25 GB · KV cache: 2.00 GB · Activations: 8.40 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#7 Llama 3.2 3B Instruct
3B · llama · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 14.8 GB · Headroom: 9.2 GB
ollama run llama3.2:3b
157 tok/s (E)
Weights: 3.19 GB · KV cache: 1.50 GB · Activations: 8.35 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#8 Phi-3.5 Vision
4.2B · phi · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 14.8 GB · Headroom: 9.2 GB
197 tok/s (E)
Weights: 2.54 GB · KV cache: 2.10 GB · Activations: 8.32 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#9 Phi-3.5 Mini Instruct
3.8B · phi · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 16.1 GB · Headroom: 7.9 GB
ollama run phi3.5:3.8b
124 tok/s (E)
Weights: 4.04 GB · KV cache: 1.90 GB · Activations: 8.39 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#10 StarCoder 2 3B
3B · other · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 13.4 GB · Headroom: 10.6 GB
276 tok/s (E)
Weights: 1.81 GB · KV cache: 1.50 GB · Activations: 8.28 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#11 Whisper Large v3
1.55B · other · Commercial OK
Quant: FP16 · Context: 0 · VRAM: 5.1 GB · Headroom: 18.9 GB
161 tok/s (E)
Weights: 3.10 GB · KV cache: 0.00 GB · Activations: 0.16 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
#12 Nemotron Mini 4B Instruct
4B · other · Commercial OK
Quant: Q4_K_M · Context: 4,096 · VRAM: 9.4 GB · Headroom: 14.6 GB
207 tok/s (E)
Weights: 2.42 GB · KV cache: 1.00 GB · Activations: 4.22 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →

Runs with tradeoffs
49 models

Tight VRAM, partial CPU offload, or context-limited.
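The context-limited entries follow from the KV cache growing linearly with context length. A sketch using the standard transformer KV-cache formula, with architecture numbers taken from Llama 3.2 3B's published config (28 layers, 8 KV heads, head dim 128, FP16 cache); the KV figures in the cards appear to sit above this baseline, so the engine likely adds allocator overhead:

```python
# Standard transformer KV-cache size: 2 (K and V) x layers x KV heads
# x head dim x context x bytes per element. Architecture numbers are from
# Llama 3.2 3B's published config (28 layers, 8 KV heads, head_dim 128);
# FP16 cache = 2 bytes per element. The cards above report somewhat larger
# KV figures, so the engine likely adds allocator overhead on top.

def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem / 1024**3

full = kv_cache_gb(n_layers=28, n_kv_heads=8, head_dim=128, ctx=8192)
half = kv_cache_gb(n_layers=28, n_kv_heads=8, head_dim=128, ctx=4096)
print(f"{full:.2f} GB at 8k context, {half:.2f} GB at 4k")  # 0.88 GB vs 0.44 GB
```

This is why the 24B card below ships with a 2,048-token context: halving context halves the cache, which is often the difference between fitting and spilling to CPU.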

Hermes 3 Llama 3.1 8B
8B · hermes · Commercial OK
Quant: Q8_0 · Context: 8,192 · VRAM: 22.9 GB · Headroom: 1.1 GB
  • Tight VRAM fit — only 1.1 GB headroom left for context growth
ollama run hermes3:8b
59 tok/s (E)
Weights: 8.50 GB · KV cache: 4.00 GB · Activations: 8.62 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
Gemma 2 9B Instruct
9B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 20.2 GB · Headroom: 3.8 GB
  • Tight VRAM fit — only 3.8 GB headroom left for context growth
ollama run gemma2:9b
92 tok/s (E)
Weights: 5.43 GB · KV cache: 4.50 GB · Activations: 8.46 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
DeepSeek MoE 16B Base
16B · deepseek · Commercial OK
Quant: Q4_K_M · Context: 4,096 · VRAM: 20.0 GB · Headroom: 4.0 GB
  • Tight VRAM fit — only 4.0 GB headroom left for context growth
345 tok/s (E)
Weights: 9.66 GB · KV cache: 4.00 GB · Activations: 4.58 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
Dolphin 3.0 Mistral 24B
24B · dolphin · Commercial OK
Quant: Q4_K_M · Context: 2,048 · VRAM: 22.1 GB · Headroom: 1.9 GB
  • Tight VRAM fit — only 1.9 GB headroom left for context growth
ollama run dolphin-mistral:24b
34 tok/s (E)
Weights: 14.49 GB · KV cache: 3.00 GB · Activations: 2.77 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
Gemma 3 12B
12B · gemma · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 23.6 GB · Headroom: 0.4 GB
  • Tight VRAM fit — only 0.4 GB headroom left for context growth
ollama run gemma3:12b
69 tok/s (E)
Weights: 7.25 GB · KV cache: 6.00 GB · Activations: 8.55 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
GLM-4 9B
9B · glm
Quant: Q4_K_M · Context: 8,192 · VRAM: 20.2 GB · Headroom: 3.8 GB
  • Tight VRAM fit — only 3.8 GB headroom left for context growth
92 tok/s (E)
Weights: 5.43 GB · KV cache: 4.50 GB · Activations: 8.46 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
Nemotron 3 Nano 9B
9B · other · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 20.2 GB · Headroom: 3.8 GB
  • Tight VRAM fit — only 3.8 GB headroom left for context growth
92 tok/s (E)
Weights: 5.43 GB · KV cache: 4.50 GB · Activations: 8.46 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →
Yi Coder 9B
9B · yi · Commercial OK
Quant: Q4_K_M · Context: 8,192 · VRAM: 20.2 GB · Headroom: 3.8 GB
  • Tight VRAM fit — only 3.8 GB headroom left for context growth
92 tok/s (E)
Weights: 5.43 GB · KV cache: 4.50 GB · Activations: 8.46 GB · Runtime: 1.80 GB
Model details → · Run-on benchmark page →

What if you upgraded?

Hypothetical scenarios. We re-ran the compatibility engine for each.

+32 GB system RAM

~$80–150

Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.

Unlocks: 7 new comfortable, 61 new tradeoff

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • Whisper Large v3 Turbo
  • SmolLM 2 360M Instruct
Shop this upgrade ↗

Upgrade to NVIDIA GeForce RTX 5090

~$2499

32 GB VRAM (vs your 24 GB) plus a bandwidth jump from ~768 GB/s to ~1792 GB/s.

Unlocks: 35 new comfortable

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • DeepSeek R1 Distill Qwen 7B
  • Qwen 3 8B
Shop this upgrade ↗

Add a second NVIDIA RTX A5000

see current pricing

Tensor parallelism splits the model across both cards, effectively doubling usable VRAM. Effective bandwidth does not double, though; expect roughly 1.5× single-card speed in practice.

Unlocks: 47 new comfortable

  • Gemma 3 1B
  • Llama 3.2 1B Instruct
  • DeepSeek R1 Distill Qwen 7B
  • Qwen 3 8B
Shop this upgrade ↗

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

Won't run
top 5 popular models

Need more memory than you have. Shown for orientation.

DeepSeek V4 Pro (1.6T MoE)
1600B · deepseek · Commercial OK
Even with CPU offload, needs more memory than your VRAM (24 GB) + 60% of system RAM (19 GB) combined.
—
Qwen 3.5 235B-A17B (MoE)
397B · qwen · Commercial OK
Even with CPU offload, needs more memory than your VRAM (24 GB) + 60% of system RAM (19 GB) combined.
—
Qwen 3 235B-A22B
235B · qwen · Commercial OK
Even with CPU offload, needs more memory than your VRAM (24 GB) + 60% of system RAM (19 GB) combined.
—
Llama 4 Scout
109B · llama · Commercial OK
Even with CPU offload, needs more memory than your VRAM (24 GB) + 60% of system RAM (19 GB) combined.
—
DeepSeek R1 (671B reasoning)
671B · deepseek · Commercial OK
Even with CPU offload, needs more memory than your VRAM (24 GB) + 60% of system RAM (19 GB) combined.
—
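Each verdict above applies the same ceiling: total memory need versus VRAM plus 60% of system RAM. A sketch of that rule (the 60% fraction is quoted from the entries; the function name, and the reading that the other 40% is reserved for the OS, are ours):

```python
# The verdict rule quoted in each entry above: a model is out of reach when
# it needs more than VRAM + 60% of system RAM even with CPU offload. The 40%
# held back presumably covers the OS and other processes (our reading, not
# something the page states).

def offload_ceiling_gb(vram_gb, system_ram_gb, usable_ram_fraction=0.6):
    """Largest total memory footprint the engine considers servable."""
    return vram_gb + usable_ram_fraction * system_ram_gb

print(f"{offload_ceiling_gb(24, 32):.1f} GB")  # 43.2 GB for this build (24 + 19.2)
print(f"{offload_ceiling_gb(24, 64):.1f} GB")  # 62.4 GB after the +32 GB RAM upgrade
```

The "(19 GB)" in the entries is 0.6 × 32 GB rounded down; the +32 GB RAM upgrade above raises the ceiling to 62.4 GB, which is why it unlocks so many tradeoff-tier models.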

How to read these numbers

M
Measured — we ran this exact combo on owner hardware.

~
Extrapolated — predicted from a measured benchmark on similar-bandwidth hardware.

E
Estimated — pure formula based on VRAM bandwidth and model architecture.

Full methodology →
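For E-grade figures, decode speed on a GPU is typically memory-bound: each generated token streams the full weight set from VRAM, so tok/s scales as bandwidth divided by weight bytes. A sketch of that estimate; the 0.65 efficiency factor is our back-fit to the dense-model figures on this page, not a constant the site publishes:

```python
# Memory-bound decode: each generated token streams the full weight set from
# VRAM, so tok/s scales with bandwidth / weight bytes. The 0.65 efficiency
# factor is our back-fit to this page's dense-model figures, not a published
# constant; MoE models (which read only active experts) run faster than this.

def estimated_tok_s(bandwidth_gb_s, weights_gb, efficiency=0.65):
    return efficiency * bandwidth_gb_s / weights_gb

# RTX A5000 (~768 GB/s) with 1.81 GB of Q4_K_M weights (the #1 card):
print(round(estimated_tok_s(768, 1.81)))   # 276, matching the card
# Same weights on an RTX 5090 (~1792 GB/s): speed tracks the bandwidth ratio.
print(round(estimated_tok_s(1792, 1.81)))
```

This also explains the upgrade table: the 5090's ~2.3× bandwidth jump translates almost directly into ~2.3× predicted decode speed for models that already fit.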

Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.