Build: NVIDIA GeForce RTX 4080 Super + — + 32 GB RAM (Windows)
Ranked by fit for the vision use case and predicted speed. Click a row for a VRAM breakdown.
Tight VRAM, partial CPU offload, or context-limited.
ollama run gemma3n:e4b
ollama run gemma3:4b
ollama run gemma3n:e2b
ollama run llama3.2-vision:11b
Hypothetical upgrade scenarios. We re-ran the compatibility engine for each.
~$80–150
Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.
Unlocks: 44 new comfortable models, 82 new tradeoff models
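The arithmetic behind this scenario can be sketched as follows, assuming the budget rule quoted later in this report (VRAM plus 60% of system RAM); the function name and the 64 GB upgrade size are illustrative assumptions:

```python
# Sketch: how a RAM upgrade grows the CPU-offload budget.
# Assumed rule from this report: budget = VRAM + 60% of system RAM.
def offload_budget_gb(vram_gb: float, ram_gb: float, ram_fraction: float = 0.6) -> float:
    return vram_gb + ram_fraction * ram_gb

print(round(offload_budget_gb(16, 32), 1))  # current build: 35.2
print(round(offload_budget_gb(16, 64), 1))  # hypothetical 64 GB upgrade: 54.4
```

Any model whose working set lands between those two numbers moves from "doesn't fit" into the tradeoff tier after the upgrade.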
~$1199
24 GB VRAM (vs your 16 GB) plus a bandwidth jump from ~736 GB/s to ~? GB/s.
Unlocks: 75 new comfortable models
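Why bandwidth matters here: token generation on a GPU is typically memory-bandwidth-bound, so a rough upper bound on decode speed is bandwidth divided by the bytes read per token (roughly the model's weight size). A minimal sketch, where the 7 GB model size is an illustrative assumption:

```python
# Rough decode-speed ceiling, assuming generation is memory-bandwidth-bound:
# tokens/sec <= memory bandwidth / bytes read per token (~ model weight size).
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# e.g. a 7 GB quantized model on the RTX 4080 Super's ~736 GB/s:
print(round(est_tokens_per_sec(736, 7)))  # ~105 tokens/sec, upper bound
```

Real throughput lands below this ceiling, but the ratio between two cards' bandwidths is a fair predictor of their relative decode speed.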
~$1099
Tensor parallelism splits the model across both cards, effectively doubling VRAM. Bandwidth doesn't double — runs ~1.5× the single-card speed in practice.
Unlocks: 100 new comfortable models
Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
These models need more memory than you have. Shown for orientation.
Even with CPU offload, needs more memory than your VRAM (16 GB) + 60% of system RAM (19 GB) combined.
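The fit tiers used throughout this report reduce to one comparison; a minimal sketch, where the function and tier names are assumptions and the thresholds match the build above (16 GB VRAM, 32 GB RAM, 60% RAM usable for offload):

```python
# Sketch of the fit tiers this report uses (names assumed):
#   comfortable  - model fits entirely in VRAM
#   tradeoff     - fits only with partial CPU offload (VRAM + 60% of RAM)
#   doesn't fit  - exceeds even the offload budget
def fit_tier(model_gb: float, vram_gb: float = 16, ram_gb: float = 32) -> str:
    if model_gb <= vram_gb:
        return "comfortable"
    if model_gb <= vram_gb + 0.6 * ram_gb:
        return "tradeoff"
    return "doesn't fit"

print(fit_tier(10))   # comfortable
print(fit_tier(24))   # tradeoff
print(fit_tier(40))   # doesn't fit
```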
Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.