Describe your build — any GPUs, CPU, RAM, OS, runtime, use case. We'll compute effective VRAM honestly, recommend a runtime, and tell you which models fit comfortably, which are borderline, and which aren't practical.
Total VRAM ≠ pooled VRAM. We never sum VRAM unless the silicon truly pools (Apple unified memory). We always explain why effective is lower than total.
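For illustration, here is a minimal sketch of that rule. The function name and the n × smallest-card cap are assumptions for the sketch, not this site's actual formula:

```ts
// Illustrative only: one way to avoid naively summing VRAM across cards.
// The "n × smallest card" cap is an assumed heuristic, not this site's exact math.
function effectiveVramGb(cardsGb: number[], unifiedMemory = false): number {
  if (cardsGb.length === 0) return 0;
  if (unifiedMemory) {
    // Apple unified memory is one real pool, so summing is honest here.
    return cardsGb.reduce((a, b) => a + b, 0);
  }
  // Discrete multi-GPU: tensor/pipeline sharding tends to be limited by the
  // smallest card, and every card still needs KV-cache/activation headroom,
  // so effective capacity lands well below the raw total.
  return cardsGb.length * Math.min(...cardsGb);
}

// A 24 GB + 8 GB pair: ~16 GB effective under this heuristic, not 32 GB.
console.log(effectiveVramGb([24, 8])); // 16
```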
Add GPUs, set CPU/RAM/OS, optionally pick a runtime + use case. URL updates as you change fields — share a build by copying the URL.
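As a sketch of how a build can round-trip through the URL, assuming hypothetical field names like `gpus` and `ram` (the page's real query parameters may differ):

```ts
// Hypothetical field names; the real page may encode its state differently.
interface Build {
  gpus: string[];   // e.g. ["RTX 4090", "RTX 3060"]
  ramGb: number;
  os: string;
  runtime?: string;
  useCase?: string;
}

function buildToQuery(b: Build): string {
  const q = new URLSearchParams();
  q.set("gpus", b.gpus.join(","));
  q.set("ram", String(b.ramGb));
  q.set("os", b.os);
  if (b.runtime) q.set("runtime", b.runtime);
  if (b.useCase) q.set("use", b.useCase);
  return q.toString();
}

// "gpus=RTX+4090&ram=64&os=linux": paste the URL and the build comes with it.
console.log(buildToQuery({ gpus: ["RTX 4090"], ramGb: 64, os: "linux" }));
```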
Apple Silicon unified memory IS genuinely pooled: CPU and GPU share a single physical pool. macOS reserves ~8 GB for the OS and apps; whatever remains is what's actually available for inference. Unlike NVIDIA multi-GPU, you don't pay an interconnect penalty here. The trade is bandwidth: ~400-800 GB/s vs an RTX 4090's ~1 TB/s.
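A hedged sketch of that unified-memory math. The ~8 GB reserve is the figure quoted above; treat it and the function below as estimates, not guarantees:

```ts
// Usable pool on Apple Silicon: one shared pool minus what macOS keeps
// for the OS and other apps. The 8 GB default mirrors the estimate above.
function appleEffectiveVramGb(unifiedMemoryGb: number, osReserveGb = 8): number {
  return Math.max(unifiedMemoryGb - osReserveGb, 0);
}

console.log(appleEffectiveVramGb(32)); // ~24 GB left for weights + KV cache
console.log(appleEffectiveVramGb(8));  // ~0 GB: an 8 GB Mac has no real headroom
```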
Best engine for this topology + skill level + use case.
183 models considered. Categorized by headroom at the recommended quant + a sensible context for your use case.
No model fits comfortably on this build.
No borderline models — clean fit ladder.
| Model | Params | Quant | VRAM est. | Context | Note |
|---|---|---|---|---|---|
| SmolLM 2 360M Instruct | 0.4B | Q4_K_M | 9.1 GB | 8,192 | ~9.1 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Qwen 2.5 0.5B Instruct | 0.5B | Q4_K_M | 9.3 GB | 8,192 | ~9.3 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| BGE M3 | 0.6B | FP16 | 10.2 GB | 8,192 | ~10.2 GB needed at FP16 + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| BGE Reranker v2 M3 | 0.6B | FP16 | 10.2 GB | 8,192 | ~10.2 GB needed at FP16 + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Whisper Large v3 Turbo | 0.8B | FP16 | 2.2 GB | n/a | ~2.2 GB needed at FP16 (no text context); exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Llama 3.2 1B Instruct | 1B | Q4_K_M | 9.8 GB | 8,192 | ~9.8 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Gemma 3 1B | 1B | Q4_K_M | 9.8 GB | 8,192 | ~9.8 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| DeepSeek R1 Distill Qwen 1.5B | 1.5B | Q4_K_M | 10.4 GB | 8,192 | ~10.4 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Qwen 2.5 Coder 1.5B | 1.5B | Q4_K_M | 10.4 GB | 8,192 | ~10.4 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| RWKV 7 'Goose' 1.5B | 1.5B | Q5_K_M | 10.5 GB | 8,192 | ~10.5 GB needed at Q5_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Qwen 2.5 1.5B Instruct | 1.5B | Q4_K_M | 10.4 GB | 8,192 | ~10.4 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Whisper Large v3 | 1.6B | FP16 | 3.8 GB | n/a | ~3.8 GB needed at FP16 (no text context); exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| SmolLM 2 1.7B Instruct | 1.7B | Q4_K_M | 10.6 GB | 8,192 | ~10.6 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Moondream 2 | 1.9B | Q4_K_M | 4.0 GB | 2,048 | ~4.0 GB needed at Q4_K_M + 2,048 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Gemma 4 E2B (Effective 2B) | 2B | Q4_K_M | 11.0 GB | 8,192 | ~11.0 GB needed at Q4_K_M + 8,192 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
| Granite 3.0 2B Instruct | 2B | Q4_K_M | 6.4 GB | 4,096 | ~6.4 GB needed at Q4_K_M + 4,096 ctx; exceeds this build's effective VRAM. Drop quant or move to a larger build. |
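To make the table's columns concrete, here is a rough sketch of how a per-model estimate and the fit buckets can be derived: quantized weight size, plus KV cache for the chosen context, plus a fixed runtime overhead, compared against effective VRAM. The bits-per-weight values, the 1 GB overhead, and the 15% borderline margin are illustrative assumptions, not this site's exact formula.

```ts
// Approximate bits-per-weight for common quants (illustrative values).
const BITS_PER_WEIGHT: Record<string, number> = { Q4_K_M: 4.8, Q5_K_M: 5.7, FP16: 16 };

function vramEstimateGb(
  paramsB: number,      // parameter count in billions
  quant: string,        // one of the BITS_PER_WEIGHT keys
  ctx: number,          // context length in tokens
  nLayers: number,
  kvDim: number,        // per-token K (or V) width in elements
  overheadGb = 1,       // assumed runtime/activation overhead
): number {
  const weightsGb = (paramsB * BITS_PER_WEIGHT[quant]) / 8;  // B params × bytes/param
  const kvGb = (2 * nLayers * kvDim * ctx * 2) / 1024 ** 3;  // K+V, 2 bytes each (FP16 cache)
  return weightsGb + kvGb + overheadGb;
}

type Fit = "comfortable" | "borderline" | "not practical";

function classifyFit(neededGb: number, effectiveGb: number, margin = 0.15): Fit {
  if (neededGb <= effectiveGb * (1 - margin)) return "comfortable";
  if (neededGb <= effectiveGb) return "borderline";
  return "not practical";
}

// Hypothetical 1.5B model at Q4_K_M with an 8,192-token context, vs a 12 GB budget:
const need = vramEstimateGb(1.5, "Q4_K_M", 8192, 28, 1536);
console.log(need.toFixed(1), classifyFit(need, 12));
```

Real numbers depend heavily on the architecture's layer count, KV width, attention scheme, and the runtime's own overhead, which is why a proper calculator's estimates can differ a lot from this toy formula.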
NVLink vs PCIe, tensor- vs pipeline-parallel, mixed-card honesty.
Curated multi-GPU / cluster setups with effective-VRAM math.
OS + runtime install commands for your stack.
Runtime × OS × hardware support truth table.
If you're sizing a fresh AI build (not just a card to drop into an existing system), the build-budget walkthroughs cover the whole BOM honestly: AI PC build under $1,000 and AI PC build under $2,000 walk through the realistic 2026 budget tiers.
Vertical-fit shopping? AI PC for students covers the budget + portability tradeoffs; AI PC for developers covers the coding workflow specifics; AI PC for small business covers the document-RAG / always-on machine.
Form-factor first? See best laptop for local AI, best Mac for local AI, best mini PC for local AI, or best used GPU for local AI.