NVIDIA GeForce RTX 2080 Super
Turing 'almost-flagship'. 8 GB VRAM is the ceiling — same as base 2080 — but more bandwidth (496 GB/s) and Tensor compute. Runs 7B Q4 at ~80-105 tok/s with ExLlamaV2. The 8 GB ceiling matters: 13B fits with offload only. Used $280-360 makes it competitive with the 3060 Ti on raw inference.
Estimated from the 496 GB/s memory bandwidth: ~59.5 tok/s. No measured benchmarks yet.
Plain-English: Comfortable for 7B chat.
Verdicts are extrapolated from catalog VRAM, bandwidth, and ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
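For the curious, here is roughly how a bandwidth-based estimate like that can be derived. A minimal sketch, assuming decode is memory-bound (every generated token reads all model weights once) and a ~50% bandwidth-efficiency factor; both assumptions are ours, not RunLocalAI's published methodology:

```python
def estimate_decode_tps(bandwidth_gbs: float,
                        params_b: float,
                        bits_per_weight: float,
                        efficiency: float = 0.5) -> float:
    """Tokens/s ~= effective bandwidth / bytes of weights read per token."""
    weight_gb = params_b * bits_per_weight / 8  # model footprint in GB
    return bandwidth_gbs * efficiency / weight_gb

# RTX 2080 Super: 496 GB/s, 7B model at ~4.5 bits/weight (Q4_K_M-ish)
print(estimate_decode_tps(496, 7, 4.5))  # ~63 tok/s, near the 59.5 estimate above
```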
This card is for the operator who needs fast 7B inference on a budget and already has a CUDA pipeline. The 2080 Super delivers ~80-105 tok/s on 7B Q4 with ExLlamaV2, making it one of the fastest sub-$350 options for single-model chat or code completion at that size. The 496 GB/s bandwidth is the draw—it punches above its VRAM class for throughput.
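For reference, a minimal ExLlamaV2 sketch of that 7B Q4 workload, adapted from the exllamav2 repo's dynamic-generator example; the model path and context length are placeholders, and the class names assume a recent exllamav2 release:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/models/mistral-7b-exl2-4.0bpw"  # placeholder: any ~4 bpw 7B EXL2 quant
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)

# Lazy cache + autosplit load keeps weights and KV cache inside the 8 GB budget.
cache = ExLlamaV2Cache(model, max_seq_len=4096, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Explain KV caching in one paragraph.",
                         max_new_tokens=200))
```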
What breaks: 8 GB VRAM is the hard ceiling. 13B models require offloading layers to system RAM, which tanks speed to ~10-20 tok/s depending on CPU and PCIe bandwidth. 30B+ models are out of reach without aggressive quantization plus heavy offload, making them impractical. FlashAttention-2 requires Ampere or newer, and FP8 requires Ada or Hopper, so Turing gets neither.
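If 13B-with-offload is still worth trying, here is a sketch using llama-cpp-python's partial GPU offload; the model path is a placeholder and the layer count is a starting guess to tune against out-of-memory errors:

```python
from llama_cpp import Llama

# A 13B Q4_K_M GGUF is roughly 8 GB of weights, so only part of the model
# fits on an 8 GB card alongside the KV cache; the rest runs on the CPU.
llm = Llama(
    model_path="models/llama-2-13b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=28,  # offload as many layers as fit; lower this on OOM
    n_ctx=2048,
)

out = llm("Q: Why is partial offload slow?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```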
Pass on this card if the workload regularly exceeds 8 GB—e.g., running 13B Q4 with a large context window, or any 30B model. A used RTX 3060 12 GB offers more VRAM for similar money, though at lower bandwidth (~70 tok/s on 7B). Also skip if power efficiency matters: 250 W TDP is high for the performance tier.
At $280-360 used, the 2080 Super is a strong pick for dedicated 7B inference servers where VRAM isn't the bottleneck. It competes directly with the RTX 3060 Ti on speed but loses on memory capacity.
Why this rating
The 2080 Super earns a 7.5 for its excellent 7B inference speed at a low used price, but loses points for the 8 GB VRAM ceiling that limits model size and future-proofing. It's a specialist card for fast small-model workloads, not a generalist local AI GPU.
The retailer links below are search fallbacks; editorial hasn't yet curated URLs for this card. Approximate street price: $320.
Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.
Specs
| Spec | Value |
| --- | --- |
| VRAM | 8 GB |
| Power draw | 250 W |
| Released | 2019 |
| MSRP | $699 |
| Backends | CUDA, Vulkan |
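To confirm the CUDA backend actually sees the card on a given box, a quick check (PyTorch is our choice here for illustration; any CUDA-aware runtime works):

```python
import torch

# Reports the device name, compute capability, and usable VRAM.
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    vram_gib = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"{name}: sm_{major}{minor}, {vram_gib:.1f} GiB")  # Turing reports sm_75
else:
    print("No CUDA device visible; consider the Vulkan backend instead.")
```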
Models that fit
Open-weight models small enough to run on NVIDIA GeForce RTX 2080 Super with usable context.
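Whether a model "fits" mostly comes down to quantized weights plus KV cache versus the 8 GB budget. A back-of-envelope sketch, assuming an fp16 multi-head-attention KV cache and ~1 GB of fixed runtime overhead (both simplifications):

```python
def fits_in_vram(params_b: float, bits_per_weight: float,
                 ctx: int, n_layers: int, hidden: int,
                 vram_gb: float = 8.0, overhead_gb: float = 1.0) -> bool:
    """Quantized weights + fp16 KV cache + overhead vs. the VRAM budget."""
    weights_gb = params_b * bits_per_weight / 8
    kv_gb = 2 * 2 * ctx * n_layers * hidden / 1024**3  # K and V, 2 bytes each
    return weights_gb + kv_gb + overhead_gb <= vram_gb

# 7B (32 layers, 4096 hidden) at ~Q4 with 4k context: fits.
print(fits_in_vram(7, 4.5, 4096, 32, 4096))   # True
# 13B (40 layers, 5120 hidden) at ~Q4 with 4k context: needs offload.
print(fits_in_vram(13, 4.5, 4096, 40, 5120))  # False
```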
Frequently asked
What models can NVIDIA GeForce RTX 2080 Super run?
7B models at Q4 run comfortably (~80-105 tok/s with ExLlamaV2, extrapolated). 13B fits only with partial offload to system RAM, and 30B+ is impractical.
Does NVIDIA GeForce RTX 2080 Super support CUDA?
Yes. It is a Turing (compute capability 7.5) card with CUDA support, plus Vulkan as an alternative backend.
How much does NVIDIA GeForce RTX 2080 Super cost?
MSRP was $699 at its 2019 launch; used cards currently run roughly $280-360.
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.