AMD Radeon RX 6800 XT
RDNA 2 enthusiast. 16 GB VRAM, 512 GB/s bandwidth, more compute units than the base 6800. ROCm officially supported. ~85-110 tok/s on 7B Q4, 35-50 tok/s on 13B Q4. The peak of RDNA 2 consumer for AI.
Affiliate disclosure: as an Amazon Associate and partner of other retailers, we earn from qualifying purchases. The verdict on this page is our editorial opinion; affiliate links never influence what we recommend.
Extrapolated from the card's 512 GB/s memory bandwidth: an estimated 51.2 tok/s. No measured benchmarks yet.
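Bandwidth-based estimates like this come from the fact that decode is memory-bound: every generated token streams the full set of model weights through the memory bus once, so bandwidth divided by weight size gives a throughput ceiling. A minimal sketch of that heuristic (function name and the 60% efficiency factor are our assumptions, not a published formula):

```python
# Rough decode-speed ceiling from memory bandwidth alone.
# Decode is memory-bound: each generated token reads all model
# weights once, so tok/s <= bandwidth / weight bytes.
def est_decode_tok_s(bandwidth_gb_s: float, model_size_gb: float,
                     efficiency: float = 0.6) -> float:
    """Ceiling tokens/s scaled by an assumed real-world
    efficiency factor (0.5-0.7 is a common rule of thumb)."""
    return bandwidth_gb_s / model_size_gb * efficiency

# RX 6800 XT: 512 GB/s; a 7B model at Q4 is roughly 4 GB of weights.
print(round(est_decode_tok_s(512, 4.0), 1))  # ~77 tok/s at 60% efficiency
```

Different efficiency assumptions explain the spread of quoted numbers: the same 512 GB/s supports anywhere from ~50 to ~100 tok/s on 7B Q4 depending on runtime overhead.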
Plain-English: comfortable at 14B and below, and snappy enough for a coding agent.
Verdicts are extrapolated from catalog VRAM, bandwidth, and ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
This card is for the operator who wants a 16 GB VRAM workhorse for local LLMs without paying Nvidia's premium. The RX 6800 XT runs 7B Q4 models at 55-75 tok/s and 13B Q4 at ~30-40 tok/s. 30B-class models are a tighter fit: Q4 weights for a 30B model slightly exceed 16 GB, so expect a lower quant, reduced context, or partial CPU offload, landing in the ~15-20 tok/s range. Even 70B models can run at Q2 or Q3 quantizations with offload, though slowly (5-8 tok/s).
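The fit claims above reduce to simple arithmetic: weight bytes are roughly parameter count times bits-per-weight divided by eight, plus headroom for KV cache and runtime overhead. A back-of-envelope check (the 20% overhead figure and ~4.5 effective bits for Q4 are our assumptions; real figures vary with context length and quant scheme):

```python
# Does a quantized model fit in this card's 16 GB of VRAM?
# Weight GB ~= params(B) * bits / 8; add ~20% for KV cache,
# activations, and runtime overhead (assumed, varies by context).
def fits_in_vram(params_b: float, quant_bits: float,
                 vram_gb: float = 16.0, overhead: float = 1.2) -> bool:
    weights_gb = params_b * quant_bits / 8
    return weights_gb * overhead <= vram_gb

print(fits_in_vram(13, 4.5))  # 13B Q4 (~7.3 GB of weights): True
print(fits_in_vram(33, 4.5))  # 33B Q4 (~18.6 GB): False without offload
print(fits_in_vram(70, 2.6))  # 70B ~Q2 (~22.8 GB): False without offload
```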
What breaks: ROCm is officially supported but still has rough edges compared to CUDA. Some inference frameworks (e.g., llama.cpp's ROCm backend) may need specific versions or patches, and flash attention and similar optimizations are less mature on AMD. RDNA 2 also lacks dedicated matrix ("tensor") cores, so compute-heavy operations such as training, prompt processing, and large-batch inference are slower than on Nvidia equivalents.
When to pass: if the workload requires Nvidia-only tooling (e.g., TensorRT-LLM), depends on frameworks whose ROCm support targets data-center GPUs rather than consumer RDNA cards (e.g., vLLM), or if the operator wants plug-and-play compatibility with the widest range of tools. Also pass if training or fine-tuning is a primary use case: Nvidia's ecosystem is significantly more mature there.
Price/value note: at ~$450 used, this card offers standout VRAM-per-dollar for local inference, with more VRAM than a similarly priced RTX 3080 10 GB and far more bandwidth than an RTX 4060 Ti 16 GB (512 GB/s vs 288 GB/s).
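To make the VRAM-per-dollar comparison concrete, here is the arithmetic spelled out. The ~$450 used price for the 6800 XT is from this review; the Nvidia street prices are illustrative assumptions, not quotes:

```python
# VRAM-per-dollar at illustrative street prices.
# Only the RX 6800 XT price comes from the review; the
# Nvidia prices are rough assumptions for the arithmetic.
cards = {
    "RX 6800 XT":        {"vram_gb": 16, "price_usd": 450},
    "RTX 3080 10GB":     {"vram_gb": 10, "price_usd": 450},  # assumed
    "RTX 4060 Ti 16GB":  {"vram_gb": 16, "price_usd": 480},  # assumed
}
for name, c in cards.items():
    ratio = c["vram_gb"] / c["price_usd"] * 100
    print(f"{name}: {ratio:.1f} GB per $100")
```

At these prices the 6800 XT leads on GB per dollar, and its bandwidth advantage over the 4060 Ti 16 GB holds regardless of price.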
Why this rating
The RX 6800 XT delivers strong inference performance for its price, with 16 GB VRAM enabling larger models than similarly priced Nvidia cards. The rating is slightly reduced due to ROCm's ecosystem immaturity and the lack of dedicated matrix cores, which limit its appeal for training or cutting-edge inference features.
Specs
| Spec | Value |
| --- | --- |
| VRAM | 16 GB |
| Power draw | 300 W |
| Released | 2020 |
| MSRP | $649 |
| Backends | ROCm, Vulkan |
Models that fit
Open-weight models small enough to run on AMD Radeon RX 6800 XT with usable context.
Frequently asked
What models can AMD Radeon RX 6800 XT run?
7B and 13B models at Q4 run comfortably in its 16 GB of VRAM. 30B-class models need tighter quantization, reduced context, or partial CPU offload, and 70B is practical only at very low quants with offload.
Does AMD Radeon RX 6800 XT support CUDA?
No. CUDA is Nvidia-only; this card runs inference through the ROCm or Vulkan backends instead.
How much does AMD Radeon RX 6800 XT cost?
It launched at a $649 MSRP in 2020 and typically sells for around $450 on the used market.
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.