Build: NVIDIA GeForce RTX 3090 Ti + 32 GB RAM (Windows)
Ranked by fit for the long-context use case plus predicted speed. Click a row for a VRAM breakdown.
ollama run phi3.5:3.8b
ollama run gemma4:e4b
ollama run qwen2.5:7b
ollama run qwen3:14b
ollama run qwen2.5:14b
ollama run llama3.1:8b
ollama run phi4:14b
Tight VRAM, partial CPU offload, or context-limited.
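For the context-limited models above, a common workaround is to cap the context window so the KV cache fits in VRAM. A minimal sketch using Ollama's Modelfile syntax (`num_ctx` is a real Ollama parameter; the base model and value chosen here are just examples, not a recommendation from this report):

```
# Example Modelfile: derive a reduced-context variant of a listed model.
FROM qwen2.5:7b
PARAMETER num_ctx 8192
```

Build and run the variant with `ollama create qwen2.5-8k -f Modelfile` followed by `ollama run qwen2.5-8k`; `ollama ps` will then show how much of the model is resident on the GPU versus the CPU.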
ollama run nemotron3:nano
ollama run mistral-nemo:12b
ollama run qwen3:8b
ollama run gemma3:27b
ollama run qwen3:30b
ollama run gemma4:31b
ollama run qwen3:32b
ollama run qwen2.5:32b
Hypothetical scenarios. We re-ran the compatibility engine for each.
~$80–150
Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.
Unlocks: 17 new comfortable, 61 new tradeoff
~$2499
32 GB of VRAM (vs your 24 GB) plus a large memory-bandwidth jump to ~1792 GB/s.
Unlocks: 45 new comfortable
~$1199
Tensor parallelism splits the model across both cards, effectively doubling VRAM. Bandwidth doesn't double — runs ~1.5× the single-card speed in practice.
Unlocks: 57 new comfortable
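The ~1.5× figure follows from decode being memory-bandwidth-bound: with tensor parallelism each card streams only half the weights per token, but inter-card synchronization adds a fixed cost per step. A toy latency model makes this concrete (the 1008 GB/s bandwidth, 20 GB model size, and 3 ms overhead are illustrative assumptions, not measurements from this tool):

```python
# Toy per-token decode-latency model for tensor parallelism across
# identical GPUs. Assumes decode speed is limited by how fast each
# card can stream its shard of the weights from VRAM.
def decode_time_ms(weights_gb, bandwidth_gbs, n_gpus=1, comm_overhead_ms=0.0):
    read_ms = (weights_gb / n_gpus) / bandwidth_gbs * 1000  # each GPU reads its shard
    return read_ms + comm_overhead_ms * (n_gpus - 1)        # plus sync cost

single = decode_time_ms(20, 1008)                              # one card
dual = decode_time_ms(20, 1008, n_gpus=2, comm_overhead_ms=3.0)  # two cards
speedup = single / dual  # roughly 1.5x, not 2x
```

The shard read halves, but the overhead term does not shrink, which is why doubling cards does not double tokens per second.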
Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
These models need more memory than you have. Shown for orientation only.
Even with CPU offload, needs more memory than your VRAM (24 GB) + 60% of system RAM (19 GB) combined.
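The offload ceiling quoted above (all of your VRAM plus 60% of system RAM) is easy to check for any model size. A minimal sketch of that rule as stated; the function names are mine, not the tool's:

```python
def memory_budget_gb(vram_gb, ram_gb, ram_fraction=0.6):
    """Offload ceiling: all of VRAM plus a fraction of system RAM."""
    return vram_gb + ram_fraction * ram_gb

def fits_with_offload(model_gb, vram_gb, ram_gb):
    """True if the model's memory footprint fits under the ceiling."""
    return model_gb <= memory_budget_gb(vram_gb, ram_gb)

# For this build: 24 GB VRAM + 60% of 32 GB RAM = 43.2 GB total budget.
budget = memory_budget_gb(24, 32)
```

For example, a quantization needing 40 GB would fit (with heavy CPU offload and a large speed penalty), while one needing 48 GB would not.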
Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.