RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

OP·Fredoline Eruo
DISCLOSURE

Some links on this site are affiliate links (Amazon Associates and other first-class retailers). When you buy through them, we earn a small commission at no extra cost to you. Affiliate links do not influence our verdicts — there are cards we rate highly that we don't have affiliate relationships with, and cards that sell well that we refuse to recommend.

SYS · ONLINE · UPTIME 100% · 2026 · operator-owned
BLK · LEADERBOARD

> RunLocalAI Score

Every hardware unit in the catalog is ranked by a composite score (0–1000) built from measured tok/s, VRAM fit, ecosystem support, and perf-per-watt. 2 of 128 ranks are anchored to a measured benchmark; the rest are honestly flagged as extrapolated or estimated.

Methodology: /methodology · Run your own: curl -fsSL runlocalai.co/bench.mjs -o bench.mjs && node bench.mjs

Vendor: All · AMD · Apple · Google · Intel · NVIDIA · Qualcomm
VRAM: All · ≤8 GB · 9–16 GB · 17–24 GB · 25–48 GB · 49–96 GB · 97 GB+
Tier: All · S · A · B · C · D
Data: All · Measured · Measured-near · Community · Extrapolated · Estimated

1 unit shown · sorted by score

#   Hardware                  Tier   Score   Throughput   VRAM-fit   Ecosystem   Efficiency   Data
1   NVIDIA GeForce RTX 3090   B      664     305          170        200         24           Measured-near
    nvidia · enthusiast · 24GB · 1 bench

HOW THE SCORE IS DERIVED
Throughput · 0–500

Steady-state tok/s on a representative 7B/8B Q4 model. Measured from real benchmark rows, or extrapolated from VRAM bandwidth × runtime-stack efficiency.
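The bandwidth-based extrapolation can be sketched in a few lines. This is an illustrative model only: each decode step streams roughly the entire quantized weight file, so tok/s is upper-bounded by memory bandwidth divided by model size, discounted by a runtime-stack efficiency factor. The function name and the 0.6 efficiency value below are hypothetical, not published constants.

```python
def extrapolated_tok_s(bandwidth_gb_s: float, model_size_gb: float,
                       stack_efficiency: float) -> float:
    """Upper-bound decode throughput: one full weight pass per token,
    scaled by an assumed runtime-stack efficiency (0..1)."""
    return bandwidth_gb_s / model_size_gb * stack_efficiency

# e.g. ~936 GB/s GDDR6X, ~4 GB of 7B Q4 weights, 0.6 assumed efficiency:
extrapolated_tok_s(936, 4.0, 0.6)  # ≈ 140 tok/s
```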

VRAM-fit · 0–200

How comfortably the rig holds 7B / 32B / 70B class models. Apple unified memory counts; NPU/SoC system RAM counts.
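A minimal sketch of the kind of fit check this component implies, assuming Q4 quantization at roughly 0.55 GB per billion parameters plus a small KV-cache/overhead margin. Both constants and the function name are assumptions for illustration, not the site's actual formula.

```python
def fits(vram_gb: float, params_b: float,
         overhead_gb: float = 1.5, gb_per_b: float = 0.55) -> bool:
    """True if a Q4 model of `params_b` billion parameters plausibly
    fits in `vram_gb` of VRAM (or unified/system memory)."""
    return vram_gb >= params_b * gb_per_b + overhead_gb

fits(24, 7)   # True  — a 7B Q4 model fits comfortably in 24 GB
fits(24, 70)  # False — 70B Q4 needs roughly 40 GB
```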

Ecosystem · 0–200

CUDA / MLX / ROCm / Vulkan reach. Real-world friction the operator hits when installing tools.

Efficiency · 0–100

Tok/s per watt. Mobile / NPU class scores well; dense desktop GPUs trade efficiency for absolute throughput.

A confidence multiplier (1.0 measured · 0.85 extrapolated · 0.7 estimated) discounts the headline so we don't pretend to know more than we do. Score is recomputed on every page load against the latest catalog + benchmark data — submit your own run with runlocalai-bench --submit --hardware your-rig to firm up the numbers.
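Putting the four components and the confidence multiplier together, the scoring could be sketched as below. This assumes the components simply sum (to at most 1000) before the discount is applied; the site does not publish the exact formula, and the Measured-near multiplier is not listed above, so only the three documented tiers appear here.

```python
# Documented confidence multipliers (Measured-near is omitted: not published).
CONFIDENCE = {"measured": 1.0, "extrapolated": 0.85, "estimated": 0.7}

def composite_score(throughput: float, vram_fit: float, ecosystem: float,
                    efficiency: float, data_quality: str) -> int:
    """Sum the capped components (max 500+200+200+100 = 1000),
    then discount by the confidence multiplier."""
    raw = (min(throughput, 500) + min(vram_fit, 200)
           + min(ecosystem, 200) + min(efficiency, 100))
    return round(raw * CONFIDENCE[data_quality])

# e.g. the RTX 3090 row's component values, if they were fully measured:
composite_score(305, 170, 200, 24, "measured")  # raw sum 699, no discount
```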