RUNLOCALAI

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo

Custom build engine

Describe your build — any GPUs, CPU, RAM, OS, runtime, use case. We'll compute effective VRAM honestly, recommend a runtime, and tell you which models fit comfortably, which are borderline, and which aren't practical.

Total VRAM ≠ pooled VRAM. We never sum VRAM unless the silicon truly pools (Apple unified memory), and we always explain why effective VRAM comes in below the total.
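As a rough illustration of why the two numbers diverge, here is a minimal sketch of that kind of accounting in Python; the reserve and overhead figures are placeholder assumptions, not the engine's actual model.

```python
# Illustrative heuristic only; the reserve and overhead constants are assumptions.
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str
    vram_gb: float

def effective_vram_gb(gpus: list[Gpu], unified_memory: bool) -> float:
    """Estimate the VRAM one model can actually use across these devices."""
    total = sum(g.vram_gb for g in gpus)
    if unified_memory:
        # Apple unified memory genuinely pools, but the OS and other apps
        # keep a share; reserve a flat chunk (assumed 8 GB).
        return max(total - 8.0, 0.0)
    if len(gpus) == 1:
        return gpus[0].vram_gb
    # Discrete multi-GPU: even layer sharding is limited by the smallest card,
    # so count every card at the smallest card's size, minus ~10% overhead.
    smallest = min(g.vram_gb for g in gpus)
    return smallest * len(gpus) * 0.9

print(effective_vram_gb([Gpu("RTX 3090", 24), Gpu("RTX 3060", 12)], False))  # 21.6, not 36
```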

Describe your build

Add GPUs, set CPU/RAM/OS, and optionally pick a runtime and use case. The URL updates as you change fields, so you can share a build by copying it.
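A minimal sketch of how a build can round-trip through a query string like that; the field names and domain below are hypothetical, not the site's actual parameters.

```python
# Hypothetical field names and domain; the real query parameters may differ.
from urllib.parse import urlencode, parse_qs

build = {"gpu": ["rtx-3090", "rtx-3060"], "cpu": "ryzen-7-7800x3d",
         "ram_gb": 64, "os": "linux", "runtime": "llama.cpp", "use": "chat"}

query = urlencode(build, doseq=True)        # gpu=rtx-3090&gpu=rtx-3060&cpu=...
url = f"https://example.com/will-it-run?{query}"
restored = parse_qs(query)                  # decode it again on page load

print(url)
print(restored["gpu"])                      # ['rtx-3090', 'rtx-3060']
```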

Build summary

Total VRAM: 0 GB
Effective VRAM: ~0 GB (range 0-0 GB)
Topology: Apple cluster over Thunderbolt
Setup difficulty: advanced (speed penalty ~60%)

Recommended runtime

Best engine for this topology + skill level + use case.

Exo Labs · primary pick · expert difficulty

Designed for multi-Mac clustering: it shards model layers across Macs over Thunderbolt, and it is the only viable runtime today for spanning Apple Silicon machines.
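Exo serves a ChatGPT-compatible API once the cluster is up, so any standard OpenAI client can drive it. A minimal sketch; the port and model name are assumptions, check your exo install for the real values.

```python
# Assumes an exo cluster is already running on this machine; the port and
# model name below are assumptions; check your exo install for the real values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:52415/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="llama-3.2-1b",  # whichever model the cluster is serving
    messages=[{"role": "user", "content": "Summarize NVLink vs PCIe in one line."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```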

MLX-LM (single-node) · alternative · moderate difficulty

If your largest model fits on a single Mac, run it on one Mac. Cluster latency makes single-stream inference 3-5× slower, so only cluster when capacity demands it.
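For the single-node path, a minimal MLX-LM sketch; the model repo is only an example, pick one that actually fits your Mac's memory.

```python
# Requires `pip install mlx-lm` on Apple Silicon; the model repo is an example.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.2-1B-Instruct-4bit")
text = generate(model, tokenizer,
                prompt="Explain KV-cache memory growth in two sentences.",
                max_tokens=128)
print(text)
```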

Models that fit your build

183 models considered. Categorized by headroom at the recommended quant + a sensible context for your use case.
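For illustration, a minimal sketch of the categorization this implies, using the ≥15% headroom threshold quoted below; the zero-headroom borderline cutoff is an assumption.

```python
# The 15% "comfortable" threshold comes from this page; the 0% borderline
# cutoff is assumed for illustration.
def classify_fit(vram_needed_gb: float, effective_vram_gb: float) -> str:
    if effective_vram_gb <= 0:
        return "not practical"      # nothing fits on 0 GB effective VRAM
    headroom = (effective_vram_gb - vram_needed_gb) / effective_vram_gb
    if headroom >= 0.15:
        return "comfortable"
    if headroom >= 0.0:
        return "borderline"         # fits, but may need a quant downgrade
    return "not practical"

print(classify_fit(9.3, 24.0))   # comfortable
print(classify_fit(22.5, 24.0))  # borderline
print(classify_fit(9.3, 0.0))    # not practical (this build's 0 GB effective VRAM)
```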

Comfortable
0 models · ≥15% headroom

No model fits comfortably on this build.

Borderline
0 models · tight, may need quant downgrade

No borderline models — clean fit ladder.

Not practical
16 models · oversize for this build
Each needs more than this build's ~0 GB of effective VRAM at the recommended quant and context, so drop the quant or move to a larger build.

Model · Params · Quant · VRAM est. · Context
SmolLM 2 360M Instruct · 0B · Q4_K_M · 9.3 GB · 8,192
Qwen 2.5 0.5B Instruct · 1B · Q4_K_M · 9.5 GB · 8,192
BGE M3 · 1B · FP16 · 10.4 GB · 8,192
BGE Reranker v2 M3 · 1B · FP16 · 10.4 GB · 8,192
Whisper Large v3 Turbo · 1B · FP16 · 2.4 GB · 0
Llama 3.2 1B Instruct · 1B · Q4_K_M · 10 GB · 8,192
Gemma 3 1B · 1B · Q4_K_M · 10 GB · 8,192
DeepSeek R1 Distill Qwen 1.5B · 2B · Q4_K_M · 10.6 GB · 8,192
Qwen 2.5 Coder 1.5B · 2B · Q4_K_M · 10.6 GB · 8,192
RWKV 7 'Goose' 1.5B · 2B · Q5_K_M · 10.7 GB · 8,192
Qwen 2.5 1.5B Instruct · 2B · Q4_K_M · 10.6 GB · 8,192
Whisper Large v3 · 2B · FP16 · 4 GB · 0
SmolLM 2 1.7B Instruct · 2B · Q4_K_M · 10.8 GB · 8,192
Moondream 2 · 2B · Q4_K_M · 4.2 GB · 2,048
Gemma 4 E2B (Effective 2B) · 2B · Q4_K_M · 11.2 GB · 8,192
Granite 3.0 2B Instruct · 2B · Q4_K_M · 6.6 GB · 4,096
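The VRAM estimates above fold together quantized weights, KV cache for the listed context, and runtime overhead. A back-of-envelope sketch of that arithmetic; the bytes-per-weight and per-token KV figures are generic assumptions, not this site's exact model.

```python
# Back-of-envelope only; every constant here is an assumption, not the site's model.
def vram_estimate_gb(params_b: float, bytes_per_weight: float, ctx_tokens: int,
                     kv_mb_per_1k_tokens: float = 125.0,
                     overhead_gb: float = 1.0) -> float:
    weights_gb = params_b * bytes_per_weight                   # params given in billions
    kv_gb = ctx_tokens / 1000 * kv_mb_per_1k_tokens / 1024     # KV cache grows with context
    return weights_gb + kv_gb + overhead_gb

# An 8B model at ~Q4 (~0.6 bytes/weight) with an 8,192-token context:
print(round(vram_estimate_gb(8, 0.6, 8192), 1))  # ~6.8 GB
```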

Related

Multi-GPU buying guide →

NVLink vs PCIe, tensor- vs pipeline-parallel, mixed-card honesty.

Hardware combinations →

Curated multi-GPU / cluster setups with effective-VRAM math.

Setup path-finder →

OS + runtime install commands for your stack.

Compatibility matrix →

Runtime × OS × hardware support truth table.

Shopping a full build instead of a single card?

If you're sizing a fresh AI build (not just a card to drop into an existing system), the build-budget walkthroughs cover the whole BOM honestly: AI PC build under $1,000 and AI PC build under $2,000 walk through the realistic 2026 budget tiers.

Vertical-fit shopping? AI PC for students covers the budget + portability tradeoffs; AI PC for developers covers the coding workflow specifics; AI PC for small business covers the document-RAG / always-on machine.

Form-factor first? See best laptop for local AI, best Mac for local AI, best mini PC for local AI, or best used GPU for local AI.

See something off? Submit a benchmark · Report outdated · Suggest a correction. We read every submission; editorial review takes 1-7 days.