Every hardware unit in the catalog, ranked by a composite score (0–1000) built from measured tok/s, VRAM fit, ecosystem support, and perf-per-watt. 2 of 128 ranks are anchored to a measured benchmark; the rest are honestly flagged as extrapolated or estimated.
Methodology: /methodology · Run your own: `curl -fsSL runlocalai.co/bench.mjs -o bench.mjs && node bench.mjs`
12 units shown · sorted by score
| # | Hardware | Vendor | Class | VRAM | Tier | Score | Data |
|---|---|---|---|---|---|---|---|
| 1 | NVIDIA GeForce RTX 3080 Ti | nvidia | enthusiast | 12 GB | C | 456 | Estimated |
| 2 | NVIDIA GeForce RTX 4080 Super | nvidia | high | 16 GB | C | 433 | Estimated |
| 3 | NVIDIA GeForce RTX 4060 Ti 16GB | nvidia | mid | 16 GB | C | 371 | Community (1 bench) |
| 4 | NVIDIA GeForce RTX 2080 Ti | nvidia | enthusiast | 11 GB | C | 363 | Estimated |
| 5 | NVIDIA GeForce RTX 3070 Ti | nvidia | high | 8 GB | C | 358 | Estimated |
| 6 | NVIDIA GeForce RTX 2080 Super | nvidia | high | 8 GB | C | 330 | Estimated |
| 7 | NVIDIA GeForce GTX 1080 Ti | nvidia | high | 11 GB | C | 327 | Estimated |
| 8 | NVIDIA GeForce RTX 2060 Super | nvidia | mid | 8 GB | C | 323 | Estimated |
| 9 | NVIDIA GeForce RTX 2070 | nvidia | high | 8 GB | C | 323 | Estimated |
| 10 | NVIDIA GeForce RTX 3060 Ti | nvidia | high | 8 GB | C | 321 | Estimated |
| 11 | NVIDIA GeForce RTX 2070 Super | nvidia | high | 8 GB | C | 319 | Estimated |
| 12 | NVIDIA GeForce RTX 3060 12GB | nvidia | mid | 12 GB | C | 319 | Estimated |
- Measured tok/s: steady-state tok/s on a representative 7B/8B Q4 model. Measured from real benchmark rows, or extrapolated from VRAM bandwidth × runtime-stack efficiency.
- VRAM fit: how comfortably the rig holds 7B / 32B / 70B class models. Apple unified memory counts; NPU/SoC system RAM counts.
- Ecosystem support: CUDA / MLX / ROCm / Vulkan reach, i.e. the real-world friction the operator hits when installing tools.
- Perf-per-watt: tok/s per watt. Mobile / NPU class hardware scores well; dense desktop GPUs trade efficiency for absolute throughput.
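The bandwidth extrapolation above can be sketched in a few lines: at Q4 quantization a 7B-class model's weights occupy roughly 4 GB, and each generated token streams the full weight set through memory once, so throughput is roughly bandwidth × stack efficiency ÷ weight size. The efficiency constant and sample bandwidth below are illustrative assumptions, not the site's actual coefficients.

```javascript
// Rough tok/s estimate from memory bandwidth alone (illustrative sketch).
function estimateToksPerSec(memBandwidthGBs, stackEfficiency = 0.5) {
  // Assumption: a 7B-class Q4 model weighs ~4 GB, and decoding one token
  // reads every weight once, so bandwidth is the binding constraint.
  const weightGB = 4.0;
  return (memBandwidthGBs * stackEfficiency) / weightGB;
}

// Hypothetical mid-range card with ~360 GB/s of VRAM bandwidth:
console.log(estimateToksPerSec(360).toFixed(1) + " tok/s"); // ~45 tok/s
```

Real runtimes rarely sustain more than half of peak bandwidth during decode, which is why the efficiency factor matters as much as the raw spec sheet.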
A confidence multiplier (1.0 measured · 0.85 extrapolated · 0.7 estimated) discounts the headline score so we don't pretend to know more than we do. Scores are recomputed on every page load against the latest catalog and benchmark data. Submit your own run with `runlocalai-bench --submit --hardware your-rig` to firm up the numbers.
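The confidence discount described above can be sketched as a weighted blend scaled to 0–1000, then multiplied by the data-quality factor. The component weights and the sample inputs are assumptions for illustration; only the three confidence multipliers come from the page itself.

```javascript
// Confidence multipliers as stated above; weights below are assumed.
const CONFIDENCE = { measured: 1.0, extrapolated: 0.85, estimated: 0.7 };

function compositeScore({ toks, vramFit, ecosystem, perfPerWatt, data }) {
  // Each component is pre-normalized to 0..1; assumed weights sum to 1.
  const raw = 0.4 * toks + 0.25 * vramFit + 0.2 * ecosystem + 0.15 * perfPerWatt;
  // Discount the headline by data quality, then scale to 0..1000.
  return Math.round(raw * CONFIDENCE[data] * 1000);
}

// Hypothetical mid-range card whose numbers are only estimated:
const score = compositeScore({
  toks: 0.45, vramFit: 0.5, ecosystem: 0.9, perfPerWatt: 0.4,
  data: "estimated",
});
console.log(score);
```

The same inputs tagged `measured` instead of `estimated` would score noticeably higher, which is the incentive behind submitting real benchmark runs.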