Llama 4 70B
Overview
A dense 70B-parameter model in the Llama 4 family. Drop-in successor to Llama 3.3 70B: same hardware envelope, better scores on reasoning benchmarks.
How to run it
Llama 4 70B is Meta's 70B dense model in the Llama 4 family.

Run it at Q4_K_M via Ollama (ollama pull llama4:70b) or llama.cpp with -ngl 999 -fa -c 8192. The Q4_K_M file is ~40 GB on disk. Minimum VRAM is 48 GB: an RTX A6000 (48GB) runs Q4_K_M at 4K context. An RTX 4090 (24GB) needs Q3_K_M with KV offload, or dual cards for Q4_K_M. Recommended for serving: A100 80GB with AWQ-INT4. Throughput: ~15-25 tok/s on the A6000 at Q4_K_M; ~30-45 tok/s on the A100.

Llama 4 is newer than Llama 3 and may include architectural changes (RoPE scaling, GQA variants), so verify that llama.cpp or vLLM supports Llama 4 specifically before downloading.

Meta's 70B tier is the workstation sweet spot: strong quality with consumer-GPU accessibility. Llama 4 70B competes with Qwen 3 72B and Mistral Large 2 in the ~70B dense category. Context is likely 128K+ (Meta hasn't released final specs; verify on the model card), but the practical limit at Q4 on 48 GB is 4-8K. Use it for general chat, reasoning, coding, and agent workflows; the 70B is Meta's workhorse tier.
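A minimal launch sketch under the assumptions above; the Ollama tag matches the one quoted in this guide, and the GGUF path is a placeholder for wherever your quantized file lives:

```bash
# Option A: Ollama (the Q4 default tag pulls ~40 GB)
ollama pull llama4:70b
ollama run llama4:70b

# Option B: llama.cpp with the flags from this guide
# -ngl 999 offloads all layers to GPU, -fa enables flash attention, -c 8192 sets context
./llama-cli -m ./llama4-70b-Q4_K_M.gguf -ngl 999 -fa -c 8192 -p "Hello"
```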
Hardware guidance
Minimum: RTX 3090 24GB at Q3_K_M (4K context). Recommended: RTX A6000 48GB at Q4_K_M (8K). Optimal: A100 80GB at AWQ-INT4.

VRAM math: 70B dense at Q4 ≈ 40 GB of weights; KV cache at 8K adds ~10 GB, for ~50 GB total. That makes the A6000 48GB borderline — trim to 4K.

- RTX 4090 24GB: Q3 ≈ 30 GB, so partial CPU offload is unavoidable.
- RTX 5090 32GB: Q4 requires KV offload.
- Dual RTX 4090 (48 GB total): Q4 at 8K.
- Mac Studio M4 Max 64GB+: Q4 at 5-10 tok/s.
- Cloud: A100 80GB at $5-10/hr; AWQ-INT4 enables 32K context.

Llama 4 may introduce architectural changes (RoPE, GQA dimensions) that affect quant quality — benchmark Q4 vs Q8 before committing.
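The same back-of-envelope math as a script. The 4.5 bits/weight figure (Q4_K_M including quantization overhead) and the ~10 GB KV estimate are assumptions derived from the numbers above:

```bash
# Rough VRAM estimate for a dense model: weights + KV cache
PARAMS_B=70   # parameters in billions
BPW=4.5       # effective bits/weight for Q4_K_M incl. overhead (assumption)
KV_GB=10      # KV cache at 8K context, per the figures above

WEIGHTS_GB=$(echo "$PARAMS_B * $BPW / 8" | bc)           # ≈ 39 GB (~40 GB per the guide)
TOTAL_GB=$(echo "$PARAMS_B * $BPW / 8 + $KV_GB" | bc)    # ≈ 49 GB, over budget on a 48 GB card
echo "weights ≈ ${WEIGHTS_GB} GB, total ≈ ${TOTAL_GB} GB"
```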
What breaks first
1. Llama 4 architecture support. Llama 4 may introduce changes that require updated GGUF conversion scripts and inference kernels. Verify your llama.cpp/vLLM version supports Llama 4 before downloading 40 GB files (see the check below).
2. License changes. Meta's Llama 4 license may differ from Llama 3's. Check the updated license for commercial-use restrictions — Llama licenses have changed between versions.
3. Chat template evolution. Llama 4's chat template may differ from Llama 3's. Using the old template produces garbled output. Verify the template on the Hugging Face repo.
4. Benchmark regression on specific tasks. New model versions sometimes regress on specific benchmarks while improving overall. Test your specific use case — Llama 4 70B may be better or worse than Llama 3.3 70B on your exact task.
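Before committing to a 40 GB download, a hedged check that your llama.cpp build knows the architecture. The grep string and file path reflect the current repo layout and are assumptions; check the release notes for the exact architecture name:

```bash
# Update llama.cpp and search for Llama 4 support before downloading weights
cd llama.cpp && git pull
git log --oneline -i --grep="llama 4" | head   # support usually lands as a named commit
grep -in "llama4" src/llama-arch.cpp | head    # arch registry (path may move between versions)
# An unsupported arch typically fails fast at load with an "unknown model
# architecture" style error, so test with the smallest quant first.
```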
Runtime recommendation
For local runs, use Ollama or llama.cpp at Q4_K_M; for serving, use vLLM with AWQ-INT4 on an A100 80GB. Whichever runtime you pick, confirm it has explicit Llama 4 support before downloading weights.
Common beginner mistakes
- Mistake: Assuming llama.cpp supports Llama 4 out of the box. Fix: Llama 4 may introduce architectural changes; always verify your llama.cpp build supports this specific model version. Check the release notes.
- Mistake: Comparing Llama 4 70B to Llama 4 405B directly. Fix: 70B vs 405B is a 5.8× parameter difference. The 70B competes in the workstation tier; the 405B competes in the frontier tier. Different use cases, different hardware.
- Mistake: Assuming Llama 4's license matches Llama 3's. Fix: Read Meta's updated license before commercial deployment. MAU clauses, usage restrictions, and attribution requirements may have changed.
- Mistake: Running ollama pull llama4:70b without checking disk space. Fix: Q4 is ~40 GB and Q8 is ~80 GB. Verify with df -h before pulling (see the sketch below).
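A pre-pull disk check, assuming the default Ollama model directory (~/.ollama on Linux/macOS, relocatable via the OLLAMA_MODELS environment variable):

```bash
# Q4 needs ~40 GB free, Q8 ~80 GB; check the filesystem holding the model dir
df -h "${OLLAMA_MODELS:-$HOME/.ollama}"
ollama pull llama4:70b
```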
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Drop-in upgrade from Llama 3.3 70B
- Better reasoning quality
Weaknesses
- Tight fit at 24GB VRAM: requires CPU offload, which roughly halves throughput (~50% offload tax)
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q3_K_M | ~30 GB | 24 GB (partial offload) |
| Q4_K_M | ~40 GB | 48 GB |
| Q8 | ~80 GB | 80+ GB |
| AWQ-INT4 | 40.0 GB | 48 GB |
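For the A100 serving path, a sketch of a vLLM launch at the 32K context the AWQ build enables. The AWQ repo name below is hypothetical; substitute whichever AWQ-INT4 build actually exists on Hugging Face:

```bash
# Serve AWQ-INT4 on an A100 80GB; the repo id below is a placeholder
vllm serve meta-llama/Llama-4-70B-AWQ \
  --quantization awq \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.90
```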
Get the model
HuggingFace
Original weights
Source repository — original weights only; you quantize them yourself.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Llama 4 70B.
Frequently asked
What's the minimum VRAM to run Llama 4 70B?
48 GB (e.g., RTX A6000) for Q4_K_M at 4K context. A 24GB card like the RTX 3090 can run Q3_K_M with partial CPU offload, at roughly half speed.
Can I use Llama 4 70B commercially?
Check Meta's Llama 4 license first. Llama licenses have changed between versions and may carry MAU thresholds, usage restrictions, and attribution requirements.
What's the context length of Llama 4 70B?
Likely 128K+, but Meta hasn't released final specs — verify on the model card. The practical limit at Q4 on a 48 GB card is 4-8K.
Source: huggingface.co/meta-llama/Llama-4-70B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Related — keep moving
Verify Llama 4 70B runs on your specific hardware before committing money.