UNIT · NVIDIA · GPU
12 GB VRAM · Enthusiast · Reviewed May 2026

NVIDIA GeForce RTX 3080 Ti

Ampere flagship-minus-one. 12 GB GDDR6X at 912 GB/s — closer to the 3090 in raw bandwidth than to the 3080. Fits 13B Q4 with full context, 32B Q4 with offload. ~155-205 tok/s on 7B Q4. Used $440-520 in 2026; the 'budget 3090' for operators willing to give up 12 GB of VRAM headroom for a meaningful price cut.

Released 2021·~$480 street·912 GB/s memory bandwidth


Affiliate disclosure: as an Amazon Associate and partner of other retailers, we earn from qualifying purchases. The verdict on this page is our editorial opinion; affiliate links never influence what we recommend.

RUNLOCALAI SCORE
See full leaderboard →
456 / 1000
CC-tier
Estimated
Throughput
317 / 500
VRAM-fit
110 / 200
Ecosystem
200 / 200
Efficiency
25 / 100

Extrapolated from 912 GB/s bandwidth — 109.4 tok/s estimated. No measured benchmarks yet.

Plain-English: Comfortable at 14B and below — snappy enough for a coding agent; vision models supported.

7B chat
Comfortable
14B chat
Comfortable
32B chat
Doesn't fit
70B chat
Doesn't fit
Coding agent
Comfortable
Vision (≤8B VLM)
Comfortable
Long context (32K)
~Tight
Comfortable — fits with headroom
~Tight — works, no slack
Marginal — needs aggressive quant
Doesn't fit usefully

Verdicts extrapolated from catalog VRAM + bandwidth + ecosystem flags. Hover any chip for the rationale. Want measured numbers? Submit your own run with runlocalai-bench --submit.
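The "extrapolated from bandwidth" estimate above follows a standard rule of thumb: single-stream decode is memory-bound, so tokens/s scales with how fast the card can re-read the weights each token. A minimal sketch, with an assumed real-world bandwidth efficiency (the 0.6 factor and 4.1 GB model size are illustrative, not this site's exact methodology):

```python
# Rough decode-throughput estimate for a bandwidth-bound GPU.
# Assumptions (illustrative): every generated token re-reads the full
# weight set, and real cards sustain roughly 50-70% of peak bandwidth.

def est_tok_per_s(bandwidth_gb_s: float, model_gb: float,
                  efficiency: float = 0.6) -> float:
    """Tokens/s ~= effective bandwidth / bytes read per token."""
    return bandwidth_gb_s * efficiency / model_gb

# RTX 3080 Ti: 912 GB/s peak; a 7B Q4 GGUF is ~4.1 GB in VRAM.
print(round(est_tok_per_s(912, 4.1), 1))  # → 133.5
```

Varying the efficiency factor between 0.5 and 0.7 spans roughly 110-155 tok/s, which is why bandwidth-only estimates land below the measured 155-205 tok/s range quoted for well-tuned backends.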

BLK · VERDICT

Our verdict

OP · Fredoline Eruo | VERIFIED MAY 10, 2026
7.3/10

WHO THIS CARD IS FOR
The operator who wants 3090-like inference speed on 7B-13B models but can work within a 12 GB VRAM budget and prefers to spend under $500. This is the card for running 13B Q4 with full context or 30B Q4 with partial offload, not for 70B or large multimodal models.

WHAT IT RUNS WELL
7B Q4 models run at ~155-205 tok/s, making this one of the fastest cards for small-to-medium local inference. 13B Q4 fits entirely in VRAM at ~80-110 tok/s. 30B Q4 (with ~20 GB weights) requires offloading to system RAM, but the high bandwidth keeps the CPU-GPU transfer bottleneck tolerable for interactive use.
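Partial offload means keeping as many transformer layers on the GPU as VRAM allows and running the rest from system RAM. A hedged back-of-envelope for the 30B Q4 case (the 60-layer count and 1.5 GB reserve are assumptions for illustration, not measurements from this review):

```python
# Sketch: how many layers of a ~20 GB Q4 model fit in 12 GB of VRAM.
# Assumptions (illustrative): weights split evenly across 60 layers,
# ~1.5 GB reserved for KV cache, compute buffers, and the desktop.

def gpu_layers(vram_gb: float, weights_gb: float, n_layers: int,
               reserve_gb: float = 1.5) -> int:
    per_layer = weights_gb / n_layers        # GB per layer, even split
    return int((vram_gb - reserve_gb) / per_layer)

print(gpu_layers(12, 20, 60))  # → 31
```

A number like this is what you would pass to llama.cpp's `--n-gpu-layers` flag; the remaining layers run from system RAM, which is where the CPU-GPU transfer cost mentioned above comes in.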

WHAT BREAKS
12 GB VRAM is the hard ceiling. 70B Q4 is out of reach without aggressive quantization and heavy offload. 32B Q4 with full context may spill over, causing performance drops. The card runs hot and draws 350W under load; operators need adequate cooling and a 750W+ PSU.
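The context spillover is driven by the KV cache, which grows linearly with context length. A sketch with an assumed model shape (the layer count, GQA head count, and head dimension below are hypothetical, picked to resemble a 32B-class model; none come from this page):

```python
# KV-cache size: K and V tensors per layer, scaled by context length.
# Assumed shape: 64 layers, 8 KV heads (GQA), head_dim 128, fp16 cache.

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                ctx: int, bytes_per_elem: int = 2) -> float:
    # 2 tensors (K and V) per layer
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem / 1e9

print(round(kv_cache_gb(64, 8, 128, 32768), 1))  # → 8.6
```

Roughly 8-9 GB of cache on top of ~18-20 GB of Q4 weights is why a 32B model at full 32K context cannot stay resident on a 12 GB card.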

WHEN TO PASS
If the workload regularly exceeds 12 GB (e.g., 30B+ models with long context, or running multiple models simultaneously), step up to a used 3090 or 4090. For pure 7B inference at lower cost, a used 3080 10 GB or 3070 may suffice.

PRICE / VALUE NOTE
At ~$480 used, this is the best price-to-bandwidth ratio for 7B-13B inference, effectively a 'budget 3090' for operators who can live with 12 GB.

Why this rating

The RTX 3080 Ti delivers exceptional inference speed for its price, with bandwidth rivaling the 3090. The 12 GB VRAM is the main limiter, but for the target workload of 7B-13B models, it's a top value. Loses points for high power draw and lack of future-proofing for larger models.

BLK · OVERVIEW

Overview


Retailers we'd check: Amazon

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

BLK · SPECS

Specs

VRAM: 12 GB
Power draw: 350 W
Released: 2021
MSRP: $1199
Backends
CUDA
Vulkan

Models that fit

Open-weight models small enough to run on NVIDIA GeForce RTX 3080 Ti with usable context.

Compare alternatives

Hardware worth comparing

Same VRAM tier and the one step above and below — so you can frame the buying decision against real options.

Frequently asked

What models can NVIDIA GeForce RTX 3080 Ti run?

With 12 GB of VRAM, the NVIDIA GeForce RTX 3080 Ti runs models up to 14B at 4-bit quantization, or 7B at higher-precision quantizations such as 8-bit. See the model list below for tested combinations.
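The "up to 14B in 4-bit" claim follows from a simple weights-plus-overhead check. A minimal sketch, assuming ~4.5 effective bits per weight for Q4-class quants and ~20% overhead for KV cache and CUDA buffers (both assumed figures, not from this page):

```python
# Back-of-envelope VRAM fit check for a quantized model.
# Assumptions (illustrative): Q4 ~= 4.5 bits/weight effective,
# plus ~20% overhead for KV cache, activations, and buffers.

def fits(params_b: float, bits: float, vram_gb: float = 12,
         overhead: float = 1.2) -> bool:
    weights_gb = params_b * bits / 8         # GB for weights alone
    return weights_gb * overhead <= vram_gb

print(fits(14, 4.5))  # → True   (14B Q4: ~7.9 GB weights)
print(fits(14, 8.5))  # → False  (14B at 8-bit spills)
print(fits(7, 8.5))   # → True   (7B at 8-bit is fine)
```

The same function with `vram_gb=24` shows why the step up to a used 3090 unlocks the 30B-class models this card has to offload.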

Does NVIDIA GeForce RTX 3080 Ti support CUDA?

Yes — NVIDIA GeForce RTX 3080 Ti is an NVIDIA card with full CUDA support, the most mature local-AI backend. llama.cpp, Ollama, vLLM, and ExLlamaV2 all run natively.

How much does NVIDIA GeForce RTX 3080 Ti cost?

Current street price for NVIDIA GeForce RTX 3080 Ti is around $480 (MSRP $1199). Prices vary by region and supply.

Where next?

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.