RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

Text & Reasoning · Open-weight · Gemma Terms of Use (broadly commercial)

Gemma

by Google DeepMind

Google's open-weight model family derived from Gemini research. Gemma 2 and Gemma 3 cover sub-30B chat; CodeGemma adds code-specialized variants. Tight integration with Vertex AI, Google Cloud, and Android Studio.

Best entry point for local use

Start with Gemma 3 12B at Q4_K_M via Ollama; it fits on a single RTX 3060 12GB at Q4 (~7 GB VRAM). The 12B delivers MMLU ~82% and punches above its weight class on instruction following (IFEval ~78%). Google's distillation from Gemini training data gives Gemma 3 context-handling quality that smaller models rarely achieve: usable 32K context without perplexity collapse. For minimum VRAM (<8 GB), use Gemma 3 4B Q4_K_M (~3 GB), which runs on any laptop with an integrated GPU at 15+ tok/s via llama.cpp. Skip Gemma 2 27B for local deployment: its 256K-vocab tokenizer wastes ~25% more tokens on English than the Llama tokenizer, inflating effective context cost. Skip Gemma 1 entirely; Gemma 3 12B matches or exceeds even Gemma 2 27B on benchmarks at half the VRAM, so earlier generations have no remaining niche. Commands for both picks are sketched just below.
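
A minimal sketch of both picks, assuming Ollama is installed and that the gemma3:12b and gemma3:4b library tags still resolve to Q4_K_M-class quants (verify with ollama show):

    # Recommended pick: Gemma 3 12B, ~7 GB VRAM at 4-bit (fits an RTX 3060 12GB)
    ollama pull gemma3:12b
    ollama run gemma3:12b "Explain KV-cache memory growth in one paragraph."

    # Minimum-VRAM pick (<8 GB): Gemma 3 4B, ~3 GB
    ollama pull gemma3:4b

    # Confirm which quantization the tag actually resolved to
    ollama show gemma3:12b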

Deployment guidance

  • Single-user local: Ollama + gemma3:12b Q4_K_M on an RTX 3060 12GB, or an Apple M3 via MLX-LM. Gemma's GeGLU activation and 256K vocab require a GGUF built with a recent llama.cpp (b3400+) for correct RoPE theta.
  • Multi-user serving: vLLM 0.6.1+ with AWQ 4-bit on an L4 24 GB; Gemma's dense architecture parallelizes efficiently. A serving sketch appears below.
  • Mobile/edge: MediaPipe LLM Inference on Tensor G4 via Google AI Edge; Gemma 3 4B runs entirely on-device at ~12 tok/s with 4-bit WebGPU acceleration.
  • Maximum NVIDIA throughput: TensorRT-LLM FP8 on an L40S.

Note: Gemma's bespoke license prohibits using the models to generate training data for competing models; review the terms before production deployment. See the GPU buyer guide.
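
A serving sketch for the multi-user path, under stated assumptions: vLLM 0.6.1+ is installed, the AWQ checkpoint name is a placeholder (substitute any 4-bit AWQ export of Gemma 3 12B instruct), and the flags are standard vLLM options:

    # Placeholder model id; swap in a real AWQ repo before running
    pip install "vllm>=0.6.1"
    vllm serve your-org/gemma-3-12b-it-awq \
      --quantization awq \
      --max-model-len 32768 \
      --gpu-memory-utilization 0.90

    # Smoke-test the OpenAI-compatible endpoint
    curl http://localhost:8000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model":"your-org/gemma-3-12b-it-awq","messages":[{"role":"user","content":"ping"}]}'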

Featured models

Models in this family with our verdicts

  • CodeGemma 7B
  • Gemma 4 31B Dense

Recommended runtimes

  • llama.cpp
  • Ollama
  • vLLM

Related families

  • Phi
  • Llama

Related — keep moving

Compare hardware
  • RTX 3090 vs RTX 4090 →
  • RTX 4090 vs RTX 5090 →
Buyer guides
  • Best GPU for local AI →
  • Best laptop for local AI →
  • Best Mac for local AI →
  • Best used GPU for local AI →
  • Will it run on my hardware? →
When it doesn't work
  • CUDA out of memory →
  • Ollama running slowly →
  • ROCm not detected →
  • Model keeps crashing →
Runtimes that fit
  • llama.cpp →
  • Ollama →
  • vLLM →
Alternatives
  • Phi
  • Llama
Before you buy

Verify Gemma runs on your specific hardware before committing money; a quick VRAM sanity check follows the links below.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →
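
A back-of-envelope VRAM check, assuming the usual 4-bit rule of thumb (weights ≈ 0.5 bytes per parameter, plus 1-2 GB for KV cache and runtime overhead); treat it as a sanity check, not a guarantee:

    # What the card reports right now
    nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv

    # Rule of thumb for Gemma 3 12B at Q4: 12 * 0.5 GB weights + ~1.5 GB overhead
    echo "12 * 0.5 + 1.5" | bc   # prints 7.5, consistent with the ~7 GB verdict above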