Q4_K_M Quantization
Q4_K_M is the most-downloaded GGUF quantization on Hugging Face and the default tradeoff for local inference. It mixes 6-bit precision (Q6_K) on the most error-sensitive tensors (in llama.cpp, half of the attention value and FFN down-projection tensors) with 4-bit (Q4_K) everywhere else; conversion can optionally be guided by an importance matrix (imatrix) computed from calibration data.
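The blended bit rate follows directly from the on-disk block layouts. Here is a back-of-the-envelope sketch in Python using llama.cpp's super-block sizes; the 16% Q6_K share is an illustrative assumption, since the exact split varies by architecture:

```python
# Bits-per-weight of the two k-quant formats Q4_K_M mixes, derived from
# their on-disk layouts in llama.cpp (256 weights per super-block).

Q4_K_BYTES = 2 + 2 + 12 + 128   # fp16 scale + fp16 min + 6-bit sub-scales/mins + 4-bit quants
Q6_K_BYTES = 128 + 64 + 16 + 2  # 4-bit lows + 2-bit highs + int8 sub-scales + fp16 scale
WEIGHTS_PER_BLOCK = 256

q4k_bpw = Q4_K_BYTES * 8 / WEIGHTS_PER_BLOCK   # 4.5 bits/weight
q6k_bpw = Q6_K_BYTES * 8 / WEIGHTS_PER_BLOCK   # 6.5625 bits/weight

# Illustrative assumption: ~16% of weights land in Q6_K tensors;
# the real share depends on the model's tensor shapes.
q6k_share = 0.16
avg_bpw = q6k_share * q6k_bpw + (1 - q6k_share) * q4k_bpw
print(f"Q4_K: {q4k_bpw} bpw, Q6_K: {q6k_bpw} bpw, blended: {avg_bpw:.2f} bpw")
```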
Per-parameter cost averages ~4.83 bits, not 4: naive 4-bit sizing under-predicts file size by roughly 20%. A 7B model is ~4.2 GB, a 13B is ~7.9 GB, a 70B is ~42 GB. The perplexity gap vs FP16 is typically under 0.1 points on wikitext-style measurements: invisible in chat, slightly visible on coding/math benchmarks.
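To see where those file sizes come from, here is a small sizing helper; it ignores GGUF metadata and the handful of tensors kept at higher precision, so real files run slightly larger or smaller:

```python
def q4_k_m_size_gb(n_params: float, bpw: float = 4.83) -> float:
    """Estimate Q4_K_M GGUF file size in GB from parameter count."""
    return n_params * bpw / 8 / 1e9

for n in (7e9, 13e9, 70e9):
    naive = n * 4 / 8 / 1e9   # assumes exactly 4 bits per weight
    actual = q4_k_m_size_gb(n)
    print(f"{n/1e9:.0f}B: ~{actual:.1f} GB "
          f"(naive 4-bit: {naive:.1f} GB, {(actual/naive - 1)*100:.0f}% too small)")
```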
Use Q4_K_M as the default. Step up to Q5_K_M only with VRAM headroom; step down to Q3_K_M only when desperate.
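For context, loading a Q4_K_M file for local inference takes a few lines with llama-cpp-python; the model filename below is a placeholder, and n_gpu_layers=-1 assumes you have the VRAM to offload every layer:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
)
out = llm("Q: What does Q4_K_M mean? A:", max_tokens=64)
print(out["choices"][0]["text"])
```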