Build: NVIDIA RTX 2080 Ti 22GB (China-mod) + 32 GB RAM (Windows)
Full-VRAM resident, with room for context. No compromises.
ollama run gemma3:1b
ollama run llama3.2:1b
ollama run gemma4:e2b
ollama run llama3.2:3b
ollama run phi3.5:3.8b
ollama run gemma4:e4b
ollama run qwen3:4b
ollama run gemma3:4b
ollama run codegemma:7b
ollama run llama3.2-vision:11b
ollama run mistral-nemo:12b

Tight VRAM, partial CPU offload, or context-limited.
ollama run llama3.1:8b
ollama run qwen3:30b
ollama run qwen2.5-coder:32b
ollama run qwen3:32b
ollama run gemma4:31b
ollama run qwen3:8b
ollama run deepseek-r1:32b
ollama run gemma4:26b-moe

Hypothetical scenarios. We re-ran the compatibility engine for each.
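The tiers above follow from a simple sizing argument: quantized weights plus a fixed overhead for KV cache and buffers, compared against the 22 GB of VRAM. A minimal sketch, assuming a ~4.7 bits/param rule of thumb for Q4-class quantization and an illustrative 2.5 GB overhead (both assumptions, not the site's exact engine):

```python
def est_weights_gb(params_b: float, bits_per_param: float = 4.7) -> float:
    """Approximate quantized weight size in GB for a dense model
    (4.7 bits/param is a common Q4_K_M rule of thumb -- an assumption)."""
    return params_b * 1e9 * bits_per_param / 8 / 1e9

def tier(params_b: float, vram_gb: float = 22.0, overhead_gb: float = 2.5) -> str:
    """Classify a model against available VRAM (illustrative thresholds)."""
    need = est_weights_gb(params_b) + overhead_gb
    if need <= vram_gb * 0.85:      # comfortable: fully resident, room for context
        return "comfortable"
    if need <= vram_gb + 0.6 * 32:  # tradeoff: fits with CPU offload (60% of 32 GB RAM)
        return "tradeoff"
    return "too-big"

for p in (4, 12, 32, 70):
    print(f"{p}B -> ~{est_weights_gb(p):.1f} GB weights, tier: {tier(p)}")
```

Under these assumptions a 12B model stays comfortable, a 32B model lands in the offload tier, and 70B+ exceeds the combined budget, which matches the groupings above.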
~$80–150
Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.
Unlocks: 72 new tradeoff-tier models
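Using the same offload rule quoted below (VRAM plus 60% of system RAM), a quick sketch of what doubling RAM buys:

```python
def offload_budget_gb(vram_gb: float, ram_gb: float, ram_fraction: float = 0.6) -> float:
    """Total working set under the VRAM + 60%-of-system-RAM rule."""
    return vram_gb + ram_fraction * ram_gb

before = offload_budget_gb(22, 32)  # current build
after = offload_budget_gb(22, 64)   # after the RAM upgrade
print(f"{before:.1f} GB -> {after:.1f} GB")  # 41.2 GB -> 60.4 GB
```

The ceiling for CPU-offloaded models moves from ~41 GB to ~60 GB, which is why the cheap RAM upgrade unlocks so many tradeoff-tier models.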
see current pricing
24 GB VRAM (vs your 22 GB) plus a bandwidth jump from ~616 GB/s to ~896 GB/s.
Unlocks: 17 new comfortable-tier models
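Single-stream decode is largely memory-bandwidth-bound, so the bandwidth ratio is a rough predictor of the token-rate gain (a simplification that ignores compute limits and overlap):

```python
# Bandwidth figures from the comparison above.
old_bw, new_bw = 616, 896  # GB/s
speedup = new_bw / old_bw
print(f"~{speedup:.2f}x faster decode")  # ~1.45x
```

So beyond the extra 2 GB of VRAM, expect roughly a 45% bump in tokens/sec for models that were already fully resident.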
~$350
Tensor parallelism splits the model across both cards, effectively doubling VRAM. Bandwidth doesn't double — runs ~1.5× the single-card speed in practice.
Unlocks: 53 new comfortable-tier models
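The dual-card math can be sketched as: VRAM pools across cards under tensor parallelism, while per-token speed scales sub-linearly (the ~1.5× figure above; the exact factor varies by model and interconnect, and the 30 tok/s single-card rate here is purely hypothetical):

```python
def dual_card(vram_gb: float, tok_s: float, scaling: float = 1.5):
    """Effective VRAM and throughput for two identical cards under
    tensor parallelism (scaling factor is an assumption, not a benchmark)."""
    return 2 * vram_gb, tok_s * scaling

vram, speed = dual_card(22, 30)
print(f"{vram} GB pooled VRAM, ~{speed:.0f} tok/s")
```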
Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
These models need more memory than you have. Shown for orientation.
Even with CPU offload, needs more memory than your VRAM (22 GB) + 60% of system RAM (19 GB) combined.
Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.