RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP · Fredoline Eruo

UNIT · NVIDIA · GPU
8 GB VRAM · mid-tier · Reviewed May 2026

NVIDIA GeForce GTX 1080

Pascal flagship for two years. 8 GB GDDR5X at 320 GB/s — better bandwidth than the 1070. Runs 7B Q4 at ~30-45 tok/s; 13B Q4 with offload but slow. The pre-Turing flagship that still has a meaningful used-market presence.

Released 2016 · ~$180 street · 320 GB/s memory bandwidth
RUNLOCALAI SCORE · 286 / 1000 · D-tier · Estimated
See full leaderboard →
  • Throughput: 111 / 500
  • VRAM-fit: 80 / 200
  • Ecosystem: 200 / 200
  • Efficiency: 17 / 100

Extrapolated from 320 GB/s bandwidth — 38.4 tok/s estimated. No measured benchmarks yet.
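The bandwidth extrapolation can be sketched in a few lines. Decode on a GPU is memory-bound: each generated token streams roughly the whole quantized model through VRAM once, so bandwidth divided by model size gives a throughput ceiling. The 0.5 efficiency factor and 4.4 GB model size below are our illustrative assumptions, not the exact constants behind the 38.4 tok/s figure:

```python
def estimate_tok_s(bandwidth_gb_s: float, model_gb: float,
                   efficiency: float = 0.5) -> float:
    """Rough decode-throughput ceiling for a memory-bound LLM.

    Each generated token streams roughly the whole quantized model
    through VRAM once, so bandwidth / model size bounds tokens/sec;
    the efficiency factor (an assumption here) covers real-world losses.
    """
    return bandwidth_gb_s / model_gb * efficiency

# GTX 1080: 320 GB/s bandwidth; a 7B Q4_K_M GGUF is roughly 4.4 GB.
print(round(estimate_tok_s(320, 4.4), 1))  # → 36.4
```

That lands inside the card's quoted ~30-45 tok/s range, which is why bandwidth alone is a usable first-order predictor.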

WORKLOAD FIT
Try other hardware →

Plain-English: Comfortable for 7B chat.

  • 7B chat: ✓ Comfortable
  • 14B chat: ✗ Doesn't fit
  • 32B chat: ✗ Doesn't fit
  • 70B chat: ✗ Doesn't fit
  • Coding agent: ✗ Doesn't fit
  • Vision (≤8B VLM): ~ Tight
  • Long context (32K): ✗ Doesn't fit

Legend:
  ✓ Comfortable: fits with headroom
  ~ Tight: works, no slack
  △ Marginal: needs aggressive quant
  ✗ Doesn't fit usefully

Verdicts are extrapolated from catalog VRAM, bandwidth, and ecosystem flags. Want measured numbers? Submit your own run with runlocalai-bench --submit.
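The fit verdicts come down to simple VRAM arithmetic: quantized weights plus KV cache plus runtime overhead must fit in 8 GB. A minimal sketch, where the 1.0 GB KV-cache and 0.6 GB overhead allowances are our assumed round numbers, not the site's exact model:

```python
def fits(params_b: float, quant_bits: float, vram_gb: float = 8.0,
         kv_cache_gb: float = 1.0, overhead_gb: float = 0.6) -> bool:
    """Crude VRAM-fit check: quantized weights + KV cache + overhead.

    params_b is parameter count in billions; at N-bit quantization the
    weights take roughly params_b * N / 8 GB.
    """
    weights_gb = params_b * quant_bits / 8
    return weights_gb + kv_cache_gb + overhead_gb <= vram_gb

print(fits(7, 4))   # 3.5 + 1.0 + 0.6 = 5.1 GB on an 8 GB card → True
print(fits(14, 4))  # 7.0 + 1.0 + 0.6 = 8.6 GB → False
```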

BLK · VERDICT

Our verdict

OP · Fredoline Eruo | Verified May 10, 2026
4.6/10

This card is for the operator building a budget local AI rig who needs CUDA and can work within 8 GB VRAM. It's not for running large models or production workloads.

On 7B Q4 models, the GTX 1080 delivers ~30-45 tok/s, which is usable for chat and code completion. 13B Q4 models require offloading to system RAM, dropping to ~5-10 tok/s — barely interactive. The 320 GB/s bandwidth keeps small models snappy.
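The 13B offload scenario corresponds to llama.cpp's `-ngl` (GPU layers) setting: put as many layers as fit in VRAM on the card and leave the rest in system RAM. A rough sketch of that split, where the 7.9 GB model size, 40-layer count, and 1.5 GB reserve are our assumptions:

```python
def layers_on_gpu(vram_gb: float, model_gb: float, n_layers: int,
                  reserve_gb: float = 1.5) -> int:
    """Estimate how many transformer layers fit in VRAM (llama.cpp -ngl).

    Assumes layers are roughly equal in size; the reserve covers the KV
    cache, CUDA context, and scratch buffers. All figures are rough.
    """
    per_layer_gb = model_gb / n_layers
    return max(0, min(n_layers, int((vram_gb - reserve_gb) / per_layer_gb)))

# A 13B Q4_K_M GGUF is roughly 7.9 GB across ~40 layers:
print(layers_on_gpu(8.0, 7.9, 40))  # → 32
```

With roughly a fifth of the layers running from system RAM, throughput collapses toward the ~5-10 tok/s the verdict describes; in practice you tune `-ngl` upward until VRAM is nearly full.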

8 GB VRAM is the hard ceiling. Anything above 7B Q4 or 13B Q3_K_M will not fit. No support for FP8 or Tensor Cores, so inference relies on CUDA cores. Flash Attention and other optimizations may be limited.

Pass on this card if you need to run 13B models entirely on GPU, or if you plan to use models with context windows beyond 8K tokens. Also skip if you want to experiment with the latest quantization formats that require Turing or newer.
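The 8K-context caveat is KV-cache arithmetic: the cache grows linearly with context length, and older 7B-class models without grouped-query attention fill VRAM quickly. A sketch under assumed Llama-2-7B-style dimensions (32 layers, 32 KV heads, head dim 128, FP16 cache):

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 ctx_len: int, bytes_per_elem: int = 2) -> float:
    """FP16 KV-cache size: K and V, per layer, per KV head, per position."""
    return (2 * n_layers * n_kv_heads * head_dim
            * ctx_len * bytes_per_elem) / 1024**3

# Assumed Llama-2-7B-style dims: 32 layers, 32 KV heads, head dim 128.
print(kv_cache_gib(32, 32, 128, 8192))  # → 4.0
```

At 8K context the FP16 cache alone is ~4 GiB, half the card's VRAM before any weights load; newer GQA models shrink this considerably, but the 8 GB ceiling still bites.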

At ~$180 used, this is a cheap entry point for local AI. It's a stopgap, not a long-term solution.

› Why this rating

The GTX 1080 offers decent performance for small models at a low used price, but its 8 GB VRAM and lack of modern features limit its usefulness for larger or more demanding workloads. It's a passable starter card, not a workhorse.

BLK · OVERVIEW

Overview

Pascal flagship for two years. 8 GB GDDR5X at 320 GB/s — better bandwidth than the 1070. Runs 7B Q4 at ~30-45 tok/s; 13B Q4 with offload but slow. The pre-Turing flagship that still has a meaningful used-market presence.

Retailers we'd check: Amazon

Search-fallback links. Editorial hasn't yet curated retailer URLs for this card. Approx. $180.

Some links above are affiliate links. We may earn a commission at no extra cost to you. How we make money.

BLK · SPECS

Specs

  • VRAM: 8 GB
  • Power draw: 180 W
  • Released: 2016
  • MSRP: $599
  • Backends: CUDA, Vulkan

Models that fit

Open-weight models small enough to run on NVIDIA GeForce GTX 1080 with usable context.

  • Llama 3.2 3B Instruct · 3B · llama
  • Gemma 4 E4B (Effective 4B) · 4B · gemma
  • Qwen 3 4B · 4B · qwen
  • Phi-3.5 Mini Instruct · 3.8B · phi
  • Llama 3.2 1B Instruct · 1B · llama
  • Gemma 3 4B · 4B · gemma
  • Gemma 4 E2B (Effective 2B) · 2B · gemma
  • Phi-3.5 Vision · 4.2B · phi
Compare alternatives

Hardware worth comparing

Same VRAM tier and the one step above and below — so you can frame the buying decision against real options.

Same VRAM tier
Cards in the same memory band
  • AMD Radeon RX 6600 XT · amd · 8 GB VRAM · 4.8/10
  • AMD Radeon RX 6600 · amd · 8 GB VRAM · 4.8/10
  • AMD Radeon RX 6650 XT · amd · 8 GB VRAM · 5.1/10
  • AMD Radeon RX 5600 XT · amd · 6 GB VRAM · 1.7/10
  • NVIDIA GeForce GTX 1070 Ti · nvidia · 8 GB VRAM · 5.1/10
  • Intel Arc B570 · intel · 10 GB VRAM · 5.8/10
Step up
More VRAM: bigger models, more context
  • AMD Radeon RX 5700 XT · amd · 8 GB VRAM · 3.5/10
  • NVIDIA GeForce RTX 2070 · nvidia · 8 GB VRAM · 5.1/10
  • Intel Arc B570 · intel · 10 GB VRAM · 5.8/10
Step down
Less VRAM: cheaper, more constrained
  • AMD Radeon RX 5600 XT · amd · 6 GB VRAM · 1.7/10
  • AMD Radeon RX 580 8GB · amd · 8 GB VRAM · 3.8/10
  • NVIDIA GeForce RTX 2060 · nvidia · 6 GB VRAM · 2.8/10

Frequently asked

What models can NVIDIA GeForce GTX 1080 run?

With 8 GB VRAM, the NVIDIA GeForce GTX 1080 runs 7B models comfortably in Q4 quantization. See the model list above for tested combinations.

Does NVIDIA GeForce GTX 1080 support CUDA?

Yes — NVIDIA GeForce GTX 1080 is an NVIDIA card with full CUDA support, the most mature local-AI backend. llama.cpp, Ollama, vLLM, and ExLlamaV2 all run natively.

How much does NVIDIA GeForce GTX 1080 cost?

Current street price for NVIDIA GeForce GTX 1080 is around $180 (MSRP $599). Prices vary by region and supply.

Where next?

Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
Troubleshooting
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify hardware specifications.