AMD Radeon 780M (Phoenix iGPU)
AMD's 780M iGPU (Ryzen 7040/8040 series Phoenix). Shares system RAM via unified memory architecture; 32 GB DDR5 system gives effective 16-20 GB usable for inference. Bandwidth (89 GB/s) is the bottleneck — ~6-12 tok/s on 7B Q4. The 'I have a thin laptop' audience can run AI but slowly.
Extrapolated from the 89 GB/s memory bandwidth: ~8.9 tok/s estimated on 7B Q4. No measured benchmarks yet.
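The extrapolation above is simple arithmetic: token generation is memory-bound, so throughput scales with bandwidth over model size. A minimal sketch, assuming a 7B Q4 model occupies roughly 4 GB of weights and that an iGPU sustains about 40% of peak bandwidth (both figures are assumptions, not measurements):

```python
# Hypothetical back-of-envelope estimate: each generated token reads
# roughly the full model from memory, so tok/s ~ bandwidth / model size.
BANDWIDTH_GBS = 89.0   # peak DDR5 bandwidth from the spec sheet
MODEL_SIZE_GB = 4.0    # assumed footprint of a 7B model at Q4 quantization
EFFICIENCY = 0.40      # assumed fraction of peak bandwidth actually sustained

tokens_per_second = BANDWIDTH_GBS * EFFICIENCY / MODEL_SIZE_GB
print(f"~{tokens_per_second:.1f} tok/s")  # → ~8.9 tok/s
```

Swapping in an 8 GB 13B Q4 model halves the estimate, which matches the ~3-5 tok/s range quoted elsewhere on this card.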
Plain-English: Runs small chat models (7B Q4), but slowly; anything above 13B won't fit usefully, and vision models won't fit.
Verdicts are extrapolated from catalog VRAM, bandwidth, and ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
This card is for the operator who already owns a Ryzen 7040/8040 laptop and wants to experiment with local AI without buying a discrete GPU. It runs 7B Q4 models at ~6-12 tok/s, usable for chat but too slow for real-time interaction. 13B models dip to ~3-5 tok/s, and anything larger is impractical. The shared memory architecture means system RAM (ideally 32 GB) is split between the GPU and the OS, leaving ~16-20 GB for models. What breaks: any model above 13B, any workload requiring sustained throughput, and anything that depends on mature ROCm support, which is still rough on iGPUs; expect manual setup and limited compatibility. Pass if you need interactive speeds or plan to run models larger than 7B; a used RTX 3060 12GB costs ~$200 and delivers 5-10x the performance. Price/value note: the 780M is free with the CPU, so its value is purely incremental. If you already have the laptop, it's a bonus, not a reason to buy.
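The memory split described above can be checked with a quick budget calculation. A minimal sketch, assuming the OS and background apps reserve roughly 12 GB of a 32 GB system and using approximate Q4 model sizes (all figures are assumptions, not measurements):

```python
# Hypothetical memory-budget check for a unified-memory iGPU.
TOTAL_RAM_GB = 32.0
OS_RESERVE_GB = 12.0                       # assumed OS + apps footprint
KV_HEADROOM_GB = 2.0                       # assumed headroom for KV cache
usable_gb = TOTAL_RAM_GB - OS_RESERVE_GB   # ~20 GB left for inference

# Approximate footprints of common Q4-quantized models (assumed values).
model_sizes_gb = {"7B Q4": 4.0, "13B Q4": 8.0, "34B Q4": 20.0, "70B Q4": 40.0}

for name, size in model_sizes_gb.items():
    fits = size + KV_HEADROOM_GB <= usable_gb
    print(f"{name}: {'fits' if fits else 'does not fit'} in ~{usable_gb:.0f} GB")
```

Under these assumptions 7B and 13B fit with room to spare while 34B and up do not, which is consistent with the "anything larger is impractical" verdict (and even models that fit may be bandwidth-limited to unusable speeds).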
Why this rating
The 780M is a capable iGPU for casual local AI experimentation, but its shared memory and low bandwidth severely limit model size and speed. It earns a 3.0 for being a free add-on that can run small models, but it's not a primary inference card.
Specs
| VRAM | 0 GB dedicated (shared system RAM) |
| Power draw | 28 W |
| Released | 2023 |
| Backends | ROCm, Vulkan |
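Of the two backends, Vulkan is usually the lower-friction path on this iGPU, since official ROCm support for consumer iGPUs is limited. A command sketch, assuming a llama.cpp checkout built with its Vulkan backend and a locally downloaded GGUF file (the model path is a placeholder):

```shell
# Build llama.cpp with the Vulkan backend enabled.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run a 7B Q4 chat model; -ngl 99 offloads all layers to the GPU.
# The model filename below is a placeholder, not a specific recommendation.
./build/bin/llama-cli -m ./models/llama-2-7b.Q4_K_M.gguf -ngl 99 -p "Hello"
```

If the Vulkan device is not picked up, check that the Mesa RADV driver (Linux) or the AMD Adrenalin driver (Windows) is installed and current.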
Frequently asked
Does AMD Radeon 780M (Phoenix iGPU) support CUDA?
No. CUDA is NVIDIA-only. On the 780M, local inference runs through Vulkan or ROCm instead (for example, llama.cpp's Vulkan backend).
Where next?
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.