UNIT · NVIDIA · GPU
8 GB VRAM · high tier · Reviewed May 2026

NVIDIA GeForce RTX 2080 Super

Turing 'almost-flagship'. 8 GB VRAM is the ceiling — same as base 2080 — but more bandwidth (496 GB/s) and Tensor compute. Runs 7B Q4 at ~80-105 tok/s with ExLlamaV2. The 8 GB ceiling matters: 13B fits with offload only. Used $280-360 makes it competitive with the 3060 Ti on raw inference.

Released 2019 · ~$320 street · 496 GB/s memory bandwidth
RUNLOCALAI SCORE
See full leaderboard →
330 / 1000
CC-tier
Estimated
Throughput
173 / 500
VRAM-fit
80 / 200
Ecosystem
200 / 200
Efficiency
19 / 100

Extrapolated from 496 GB/s bandwidth — 59.5 tok/s estimated. No measured benchmarks yet.
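The bandwidth extrapolation above can be sketched as a back-of-envelope calculation: at decode time a dense model streams roughly its whole quantized weight set from VRAM for every token, so throughput is bounded by bandwidth divided by model size. The constants below (bytes per Q4 weight, the 7B parameter count) are illustrative assumptions, not the site's exact scoring formula.

```python
# Back-of-envelope decode throughput: memory-bandwidth-bound ceiling.
# Assumption: each generated token reads the full quantized weight set
# once; real engines add KV-cache reads, activations, and overhead.

def bandwidth_bound_toks(bandwidth_gbs: float, params_b: float,
                         bytes_per_param: float = 0.5625) -> float:
    """Upper-bound tokens/s for a dense model at a given quantization.

    bytes_per_param = 0.5625 approximates a ~4.5 bit/weight Q4 format.
    """
    model_gb = params_b * bytes_per_param  # weight bytes in GB
    return bandwidth_gbs / model_gb

# RTX 2080 Super: 496 GB/s bandwidth, 7B model in Q4
print(f"theoretical ceiling: {bandwidth_bound_toks(496, 7):.0f} tok/s")
```

Under these assumptions the ceiling comes out around 126 tok/s, so the quoted 80-105 tok/s with ExLlamaV2 corresponds to roughly 65-85% bandwidth efficiency, which is plausible for bandwidth-bound decode.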

WORKLOAD FIT
Try other hardware →

Plain-English: Comfortable for 7B chat.

7B chat · ✓ Comfortable
14B chat · ✗ Doesn't fit
32B chat · ✗ Doesn't fit
70B chat · ✗ Doesn't fit
Coding agent · ✗ Doesn't fit
Vision (≤8B VLM) · ~ Tight
Long context (32K) · ✗ Doesn't fit
✓ Comfortable — fits with headroom
~ Tight — works, no slack
△ Marginal — needs aggressive quant
✗ Doesn't fit usefully

Verdicts extrapolated from catalog VRAM, bandwidth, and ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.

BLK · VERDICT

Our verdict

OP · Fredoline Eruo | VERIFIED MAY 10, 2026
5.1/10

This card is for the operator who needs fast 7B inference on a budget and already has a CUDA pipeline. The 2080 Super delivers ~80-105 tok/s on 7B Q4 with ExLlamaV2, making it one of the fastest sub-$350 options for single-model chat or code completion at that size. The 496 GB/s bandwidth is the draw—it punches above its VRAM class for throughput.

What breaks: 8 GB VRAM is the hard ceiling. 13B models require offloading layers to system RAM, which tanks speed to ~10-20 tok/s depending on CPU/PCIe. 30B+ models are out of reach without aggressive quantization and heavy offload, making them impractical. No Flash Attention or FP8 support—those are Ampere+ features.
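The offload penalty described above can be illustrated with a rough two-tier bandwidth model: layers resident in VRAM stream at GPU bandwidth, while spilled layers move over the much slower host-RAM/PCIe path, and the slow tier dominates per-token time. All constants below (a 13B Q4 model at ~7.3 GB of weights, ~6.5 GB usable VRAM after KV cache, an ~8 GB/s host path) are illustrative assumptions, not measurements.

```python
# Rough model of partial-offload decode speed: every token touches every
# layer, so per-token time is the sum of the two transfer times:
#   t = vram_resident_bytes / gpu_bandwidth + spilled_bytes / host_bandwidth

def offload_toks(model_gb: float, vram_budget_gb: float,
                 gpu_bw: float = 496.0, host_bw: float = 8.0) -> float:
    """Estimated tok/s when part of the model spills to system RAM."""
    gpu_gb = min(model_gb, vram_budget_gb)   # layers kept in VRAM
    host_gb = model_gb - gpu_gb              # layers read over PCIe/RAM
    seconds_per_token = gpu_gb / gpu_bw + host_gb / host_bw
    return 1.0 / seconds_per_token

print(f"13B Q4, 0.8 GB spilled: {offload_toks(7.3, 6.5):.1f} tok/s")
print(f"13B Q4, fully resident: {offload_toks(7.3, 7.3):.1f} tok/s")
```

Even spilling under a gigabyte collapses throughput to single digits in this sketch, which is why the review treats 13B as offload-only and impractical for interactive use.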

Pass on this card if the workload regularly exceeds 8 GB—e.g., running 13B Q4 with a large context window, or any 30B model. A used RTX 3060 12 GB offers more VRAM for similar money, though at lower bandwidth (~70 tok/s on 7B). Also skip if power efficiency matters: 250 W TDP is high for the performance tier.

At $280-360 used, the 2080 Super is a strong pick for dedicated 7B inference servers where VRAM isn't the bottleneck. It competes directly with the RTX 3060 Ti on speed but loses on memory capacity.

› Why this rating

The 2080 Super earns a 5.1 for its excellent 7B inference speed at a low used price, but loses points for the 8 GB VRAM ceiling that limits model size and future-proofing. It's a specialist card for fast small-model workloads, not a generalist local-AI GPU.

BLK · OVERVIEW

Overview


Retailers we'd check: Amazon

Search-fallback links. Editorial hasn't yet curated retailer URLs for this card. Approx. $320.

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

BLK · SPECS

Specs

VRAM: 8 GB
Power draw: 250 W
Released: 2019
MSRP: $699
Backends
CUDA
Vulkan

Models that fit

Open-weight models small enough to run on NVIDIA GeForce RTX 2080 Super with usable context.

Llama 3.2 3B Instruct
3B · llama
Gemma 4 E4B (Effective 4B)
4B · gemma
Qwen 3 4B
4B · qwen
Phi-3.5 Mini Instruct
3.8B · phi
Llama 3.2 1B Instruct
1B · llama
Gemma 3 4B
4B · gemma
Gemma 4 E2B (Effective 2B)
2B · gemma
Phi-3.5 Vision
4.2B · phi

Frequently asked

What models can NVIDIA GeForce RTX 2080 Super run?

With 8 GB of VRAM, the NVIDIA GeForce RTX 2080 Super runs 7B models comfortably at Q4 quantization. See the model list above for tested combinations.
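A rough fit check behind that claim can be written out directly: quantized weights plus KV cache plus runtime overhead must stay under the card's VRAM. The constants here (Q4 at ~0.5625 bytes/param, ~1 GB KV cache, ~0.7 GB overhead) are ballpark assumptions for a modest context window, not measured figures.

```python
# Rough "will it fit" check: quantized weights + KV cache + overhead
# versus the card's VRAM. All constants are ballpark assumptions.

def fits_in_vram(params_b: float, vram_gb: float = 8.0,
                 bytes_per_param: float = 0.5625,
                 kv_cache_gb: float = 1.0,
                 overhead_gb: float = 0.7):
    """Return (estimated GB needed, whether it fits in vram_gb)."""
    need = params_b * bytes_per_param + kv_cache_gb + overhead_gb
    return need, need <= vram_gb

for size in (7, 13):
    need, ok = fits_in_vram(size)
    print(f"{size}B Q4: needs ~{need:.1f} GB -> "
          f"{'fits' if ok else 'does not fit'}")
```

Under these assumptions a 7B Q4 model needs roughly 5.6 GB and fits with headroom, while a 13B Q4 model needs about 9 GB and spills, matching the workload-fit verdicts above.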

Does NVIDIA GeForce RTX 2080 Super support CUDA?

Yes — NVIDIA GeForce RTX 2080 Super is an NVIDIA card with full CUDA support, the most mature local-AI backend. llama.cpp, Ollama, vLLM, and ExLlamaV2 all run natively.

How much does NVIDIA GeForce RTX 2080 Super cost?

Current street price for NVIDIA GeForce RTX 2080 Super is around $320 (MSRP $699). Prices vary by region and supply.

Where next?

Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.

Compare alternatives

Hardware worth comparing

Same VRAM tier and the one step above and below — so you can frame the buying decision against real options.

Same VRAM tier
Cards in the same memory band
  • AMD Radeon RX 5700 XT
    amd · 8 GB VRAM
    3.5/10
  • AMD Radeon RX 6800
    amd · 16 GB VRAM
    7.3/10
  • AMD Radeon RX 6750 XT
    amd · 12 GB VRAM
    7.1/10
  • AMD Radeon RX 6700 XT
    amd · 12 GB VRAM
    6.8/10
  • NVIDIA GeForce RTX 2070 Super
    nvidia · 8 GB VRAM
    4.8/10
  • Intel Arc B580
    intel · 12 GB VRAM
    6.3/10
Step up
More VRAM — bigger models, more context
  • AMD Radeon RX 6800
    amd · 16 GB VRAM
    7.3/10
  • NVIDIA GeForce GTX 1080 Ti
    nvidia · 11 GB VRAM
    6.6/10
  • Intel Arc B580
    intel · 12 GB VRAM
    6.3/10
Step down
Less VRAM — cheaper, more constrained
  • NVIDIA GeForce RTX 2060 Super
    nvidia · 8 GB VRAM
    4.8/10
  • AMD Radeon RX 6600 XT
    amd · 8 GB VRAM
    4.8/10
  • Intel Arc B580
    intel · 12 GB VRAM
    6.3/10