RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.


Custom build engine

Describe your build — any GPUs, CPU, RAM, OS, runtime, use case. We'll compute effective VRAM honestly, recommend a runtime, and tell you which models fit comfortably, which are borderline, and which aren't practical.

Total VRAM ≠ pooled VRAM. We never sum VRAM unless the silicon truly pools (Apple unified memory). We always explain why effective VRAM is lower than total VRAM.

Describe your build

Add GPUs, set CPU/RAM/OS, optionally pick a runtime + use case. URL updates as you change fields — share a build by copying the URL.

Build summary

Total VRAM: 16 GB
Effective VRAM: ~15 GB (range 13-14 GB)
Topology: single GPU (no multi-GPU interconnect)
Setup difficulty: beginner (speed penalty ~0%)
Why effective VRAM is lower than total

Single NVIDIA GeForce RTX 5070 Ti — 16 GB VRAM minus ~1.5 GB runtime overhead = ~14 GB usable for weights + KV cache + activations. The 8% headroom we reserve covers the typical OS/driver footprint and gives KV-cache room for an 8K-32K context.
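
As a sketch, that deduction can be written out directly. The flat overhead and fractional reserve below are this page's working numbers, and combining them with max() is one reading of how the two interact (an assumption, not a hardware constant):

def effective_vram_gb(total_gb: float,
                      runtime_overhead_gb: float = 1.5,  # assumed CUDA/runtime footprint
                      reserve_frac: float = 0.08) -> float:
    """Estimate VRAM usable for weights + KV cache + activations.

    Deducts the larger of a flat runtime overhead and a fractional
    reserve covering the OS/driver footprint and KV-cache growth.
    """
    return total_gb - max(runtime_overhead_gb, total_gb * reserve_frac)

# RTX 5070 Ti example above: 16 - max(1.5, 1.28) = 14.5 -> "~14 GB usable"
print(f"{effective_vram_gb(16.0):.1f} GB")  # 14.5 GB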

Coding agents — agentic tool-call burst

Workload-specific bottleneck. Where this kind of work actually breaks first, and what to budget for.

Bottleneck: KV cache

Coding agents emit 5-15 tool calls per task. Each call carries the full agent system prompt + context. KV-cache budget for that prompt × concurrent requests is the limit. The decode side is well-served by any modern card; the prefill side bottlenecks first.

Budget for
  • 32K context with KV-cache room to spare (~3-4 GB on a 4090 at AWQ-INT4; see the estimator sketch after this list)
  • Prefix cache: prefer SGLang for >5 tool calls per task
  • Decode latency: aim for >40 tok/s sustained
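
For intuition, the estimator referenced above: KV-cache memory grows linearly with context length and with concurrent requests. The layer/head numbers below are illustrative for a 32B-class GQA model, not measured values for any specific checkpoint:

def kv_cache_gb(ctx_len: int,
                concurrent: int = 1,
                n_layers: int = 64,     # assumption: 32B-class model
                n_kv_heads: int = 8,    # grouped-query attention
                head_dim: int = 128,
                bytes_per_elt: int = 2) -> float:  # FP16 KV; FP8 halves this
    # 2x for the K and V tensors, per layer, per KV head, per token.
    elts = 2 * n_layers * n_kv_heads * head_dim * ctx_len * concurrent
    return elts * bytes_per_elt / 1024**3

# One 32K-context request: ~8 GB at FP16 KV; an FP8 KV cache halves it.
print(f"{kv_cache_gb(32_768):.1f} GB")

Every concurrent tool call that re-sends the full agent prompt multiplies this figure, which is why prefix caching pays off past a handful of calls per task.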

Recommended runtime

Best engine for this topology + skill level + use case.

vLLM · primary · involved setup

AWQ-INT4 path fits 32B-class models on a 24 GB card with concurrent users. The production default for self-hosted coding agents and multi-user serving.
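
A minimal launch sketch via vLLM's offline Python API. The checkpoint name and memory settings are illustrative placeholders for the pattern, not a tested recipe for this 16 GB build:

from vllm import LLM, SamplingParams

# Illustrative AWQ-INT4 checkpoint; substitute the model you actually run.
llm = LLM(
    model="Qwen/Qwen2.5-Coder-32B-Instruct-AWQ",
    quantization="awq",
    max_model_len=32_768,          # the 32K agent context budgeted above
    gpu_memory_utilization=0.90,   # leave headroom for OS/driver overhead
)

params = SamplingParams(temperature=0.2, max_tokens=512)
out = llm.generate(["Write a Python function that parses RFC 3339 timestamps."], params)
print(out[0].outputs[0].text)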

ExLlamaV2 · alternative · involved setup

Single-stream throughput king on consumer NVIDIA. EXL2 4.65 bpw on a 4090 hits the highest tok/s in this class.

Models that fit your build

44 models considered (filtered for coding). Categorized by headroom at the recommended quant plus a sensible context for your use case.

Comfortable
0 models · ≥15% headroom

No model fits comfortably on this build.

Borderline
0 models · tight, may need quant downgrade

No borderline models — clean fit ladder.

Not practical
16 models · oversize for this build
Model                          Params  Quant   VRAM est.  Context  Overshoot
Qwen 2.5 Coder 1.5B            2B      Q4_K_M  38.5 GB    32,768   +157%
Qwen 2.5 Coder 3B              3B      Q4_K_M  42.5 GB    32,768   +183%
StarCoder 2 3B                 3B      Q4_K_M  23.1 GB    16,384   +54%
CodeGemma 7B                   7B      Q4_K_M  17.9 GB    8,192    +20%
Qwen 2.5 Coder 7B Instruct     7B      Q4_K_M  53.0 GB    32,768   +253%
Qwen 2.5 7B Instruct           7B      Q4_K_M  40.9 GB    32,768   +173%
Codestral Mamba 7B             7B      Q4_K_M  53.0 GB    32,768   +253%
StarCoder 2 7B                 7B      Q4_K_M  29.6 GB    16,384   +97%
CodeQwen 1.5 7B                7B      Q4_K_M  53.0 GB    32,768   +253%
DeepSeek R1 Distill Llama 8B   8B      Q4_K_M  55.6 GB    32,768   +271%
OpenCoder 8B                   8B      Q4_K_M  55.6 GB    32,768   +271%
Qwen 3 8B                      8B      Q4_K_M  55.6 GB    32,768   +271%
Llama 3.1 8B Instruct          8B      Q4_K_M  43.9 GB    32,768   +193%
Yi Coder 9B                    9B      Q4_K_M  58.3 GB    32,768   +288%
Qwen 2.5 14B Instruct          14B     Q4_K_M  71.4 GB    32,768   +376%
Qwen 2.5 Coder 14B Instruct    14B     Q4_K_M  71.4 GB    32,768   +376%

Overshoot is VRAM needed at the listed quant + context versus this build's effective VRAM. In every case: drop the quant or move to a larger build.
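
The ladder above reduces to a headroom rule. A sketch of the classification, using this page's labels and its ~15 GB effective-VRAM figure (the VRAM-needed input is the engine's own estimate):

def classify_fit(vram_needed_gb: float, effective_gb: float = 15.0) -> str:
    headroom = (effective_gb - vram_needed_gb) / effective_gb
    if headroom >= 0.15:
        return "comfortable"   # >=15% headroom, per the ladder above
    if headroom >= 0.0:
        return "borderline"    # fits, but a quant downgrade may help
    overshoot = vram_needed_gb / effective_gb - 1
    return f"not practical (overshoots by {overshoot:.0%})"

# StarCoder 2 3B row: 23.1 GB needed vs ~15 GB effective
print(classify_fit(23.1))  # not practical (overshoots by 54%)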

Related

Multi-GPU buying guide →

NVLink vs PCIe, tensor- vs pipeline-parallel, mixed-card honesty.

Hardware combinations →

Curated multi-GPU / cluster setups with effective-VRAM math.

Setup path-finder →

OS + runtime install commands for your stack.

Compatibility matrix →

Runtime × OS × hardware support truth table.

Shopping a full build instead of a single card?

If you're sizing a fresh AI build (not just a card to drop into an existing system), the build-budget walkthroughs cover the whole BOM honestly: AI PC build under $1,000 and AI PC build under $2,000 walk through the realistic 2026 budget tiers.

Vertical-fit shopping? AI PC for students covers the budget + portability tradeoffs; AI PC for developers covers the coding workflow specifics; AI PC for small business covers the document-RAG / always-on machine.

Form-factor first? See best laptop for local AI, best Mac for local AI, best mini PC for local AI, or best used GPU for local AI.

See something off? Submit a benchmark · Report outdated · Suggest a correction. We read every submission; editorial review takes 1-7 days.