Hardware combinations for local AI
Dual GPUs, quad GPUs, mixed cards, Apple unified memory, Exo clusters, distributed serving. The honest answer to “what hardware combination should I build to run this model well?” — with effective-VRAM math, runtime compatibility, failure modes, and who should avoid each setup.
Combinations (2)
Each combo links to operator-grade detail with topology diagram, runtime compatibility matrix, failure modes, and recommended models.
Dual RTX 4090 (24 GB × 2)
Two consumer-flagship cards. PCIe 4.0 only — no NVLink on 4090. 48 GB total / ~45 GB effective with tensor parallelism. ~30% faster decode than dual 3090 at 2× the cost.
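The "48 GB total / ~45 GB effective" figure follows from subtracting a fixed per-GPU overhead before counting usable capacity. A minimal sketch, assuming ~1.5 GB per card for CUDA context, activation workspace, and communication buffers (an illustrative figure, not a measured one):

```python
# Hedged sketch of the effective-VRAM math above. The ~1.5 GB
# per-GPU overhead (CUDA context, activations, NCCL buffers) is
# an assumption for illustration, not a measured value.

def effective_vram_gb(per_gpu_gb: float, num_gpus: int,
                      overhead_gb: float = 1.5) -> float:
    """Usable weight + KV-cache capacity under tensor parallelism.

    Tensor parallelism shards weights evenly across GPUs, so each
    card loses roughly the same fixed overhead before storing any
    model bytes.
    """
    return num_gpus * (per_gpu_gb - overhead_gb)

print(effective_vram_gb(24, 2))  # -> 45.0
```

The overhead constant varies by runtime and context length, so treat the result as a planning estimate rather than a hard limit.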
RTX 4090 + RTX 3090 (asymmetric 24+24 GB)
Asymmetric multi-GPU: a 4090 paired with a 3090. PCIe 4.0 only — different SM counts, different memory bandwidth. Both cards carry 24 GB, but throughput is gated by the slower card under most split strategies.
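The slower-card bottleneck falls out of the arithmetic: an even tensor-parallel split gives each GPU half the weights per token, and the step finishes only when the slowest card does. A rough sketch using published memory-bandwidth specs (4090 ~1008 GB/s, 3090 ~936 GB/s); real decode throughput is lower than these paper numbers:

```python
# Hedged sketch of why the slower card gates decode speed under an
# even tensor-parallel split. Bandwidths are published specs, used
# here only to illustrate the max() behavior.

def split_decode_bandwidth(bw_gbps: list[float]) -> float:
    """Aggregate bandwidth usable with an even 1/N weight split.

    Each GPU reads 1/N of the weights per token; the step completes
    when the slowest GPU finishes, so faster cards idle.
    """
    n = len(bw_gbps)
    return n * min(bw_gbps)

even = split_decode_bandwidth([1008, 936])  # 4090 + 3090
ideal = 1008 + 936                          # if both ran flat out
print(even, ideal)  # -> 1872 1944: the 4090 waits on the 3090
```

Uneven splits (more layers or shards on the faster card) can claw some of this back, but most runtimes default to even sharding.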
Going deeper
- Running local AI on multiple GPUs in 2026 — the flagship buying / deployment guide.
- Distributed inference systems — architectural depth on tensor / pipeline / expert routing.
- Execution stacks — full deployment recipes that pair combos with runtimes and models.
- Hardware catalog — single-GPU baselines that the combos here build on.