Build: AMD Instinct MI300A (APU) + 32 GB RAM (Windows)
Ranked by fit for the agents use case + predicted speed. Click a row for a VRAM breakdown.
ollama run hermes3:8b
ollama run dolphin-mistral:24b
ollama run qwen2.5:7b
ollama run qwen2.5:14b
ollama run mistral:7b
ollama run qwen3:30b
ollama run qwen2.5-coder:32b
ollama run qwen3:32b
ollama run qwen2.5:32b
ollama run nemotron3:nano
ollama run gemma4:26b-moe

Tight VRAM, partial CPU offload, or context-limited.
ollama run command-r-plus:104b

Hypothetical scenarios. We re-ran the compatibility engine for each.
~$80–150
Doubles your CPU-offload working set. Helps when models don't quite fit in VRAM.
Unlocks: 36 new comfortable, 4 new tradeoff
see current pricing
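As a rough sketch of why the RAM upgrade helps (numbers assumed from this build: 128 GB VRAM, 32 GB RAM today, 64 GB after the upgrade, and the 60%-of-RAM offload rule this page uses):

```shell
# Offload budget: VRAM + 60% of system RAM (GB, integer floor).
# 128 GB VRAM is this build; 32 -> 64 GB RAM is the hypothetical upgrade.
vram=128
for ram in 32 64; do
  offload=$(( ram * 60 / 100 ))
  echo "${ram} GB RAM -> ${offload} GB offload, $(( vram + offload )) GB total budget"
done
```

Doubling RAM roughly doubles the offload portion (19 GB to 38 GB here), which is what pushes borderline models from "doesn't fit" into "fits with partial CPU offload".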
192 GB VRAM (vs your 128 GB) plus a bandwidth jump from ~? GB/s to ~5325 GB/s.
Unlocks: 44 new comfortable
see current pricing
Tensor parallelism splits the model across both cards, effectively doubling VRAM. Bandwidth doesn't double — runs ~1.5× the single-card speed in practice.
Unlocks: 47 new comfortable
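Ollama does not expose tensor parallelism directly; a serving engine such as vLLM does, via its `--tensor-parallel-size` flag. A minimal sketch of the dual-card setup described above (the model name is illustrative, not a recommendation):

```shell
# Shard one model's layers across 2 GPUs with tensor parallelism (vLLM).
# Each card holds half the weights, so effective VRAM roughly doubles.
vllm serve Qwen/Qwen2.5-32B-Instruct --tensor-parallel-size 2
```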
Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
These models need more memory than you have; shown for orientation.
Even with CPU offload, needs more memory than your VRAM (128 GB) + 60% of system RAM (19 GB) combined.
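The cutoff above can be sketched as a simple check, assuming the rule stated on this page (a model is runnable with offload only if its footprint fits in VRAM plus 60% of system RAM; the 180 GB footprint below is hypothetical):

```shell
# Fit rule assumed from the note above: footprint <= VRAM + 60% of system RAM.
vram=128
ram=32
budget=$(( vram + ram * 60 / 100 ))   # 128 + 19 = 147 GB
model_gb=180                          # hypothetical footprint of an oversized model
if [ "$model_gb" -le "$budget" ]; then
  echo "runnable with CPU offload"
else
  echo "needs more memory than available"
fi
```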
Want a specific benchmark we don't have? Email support@runlocalai.co and we'll prioritize it.