Hermes 3 Llama 3.1 8B
NousResearch's Hermes fine-tune of Llama 3.1 8B. Stronger system-prompt adherence, JSON output, role-play, and agent steering than the base Llama.
Hermes 3 is the uncensored / less-aligned alternative on the Llama 3.1 8B base. Right pick for security research, red-team work, technical writing on dual-use topics, or any case where the base Llama's refusal layer gets in the way of legitimate work.
Strengths
- Refusals dramatically reduced vs base Llama 3.1 8B without losing instruction quality.
- Same VRAM, same Llama license: a drop-in replacement.
- Tool-use compatibility preserved (see the sketch after this list).
Weaknesses
- Niche use case: most users don't need this; default to Llama 3.1 8B.
- Slightly weaker on creative writing than base Llama (alignment training adds polish).
- Reduced refusals can be too eager; it produces content that requires judgment to use.
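To illustrate the tool-use point above, here is a minimal sketch of a tool call against a local Ollama server, assuming the default endpoint at http://localhost:11434 and the hermes3:8b tag; the get_current_weather function and its schema are hypothetical examples, not part of the model or of Ollama.

```python
# Minimal tool-calling sketch against a local Ollama server (assumed default port).
# The get_current_weather tool and its schema are hypothetical examples.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Return current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hermes3:8b",
        "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
        "tools": tools,
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["message"]

# If the model decides the tool is relevant, the reply carries structured
# tool_calls; otherwise it answers in plain text content.
print(message.get("tool_calls") or message["content"])
```

If the model opts to call the tool, your agent loop executes it and feeds the result back as a tool message; otherwise you just get a normal answer.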
- Q4_K_M (4.9 GB): 90–110 tok/s decode
- Q5_K_M (5.6 GB): 80–95 tok/s decode
- Q8_0 (8.5 GB): 65–80 tok/s decode
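To sanity-check these figures on your own hardware, Ollama's non-streaming /api/generate response includes eval_count (tokens generated) and eval_duration (in nanoseconds); a rough sketch, assuming a local server on the default port and the hermes3:8b tag:

```python
# Rough decode-throughput check against a local Ollama server (assumed defaults).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hermes3:8b",
        "prompt": "Write a 300-word summary of how transformers work.",
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
data = resp.json()

# eval_count = tokens generated, eval_duration = generation time in nanoseconds.
tok_per_s = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"decode: {tok_per_s:.1f} tok/s")
```

Numbers vary with quantization, context length, and backend, so expect a spread rather than a single figure.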
Yes, for security/research work where base Llama's refusals are blocking legitimate tasks. No, for general chat — the base Llama 3.1 8B is the right default.
How it compares
- vs Llama 3.1 8B (base) → Hermes 3 is base Llama minus the alignment layer. Pick base for general use, Hermes for technical/research work.
- vs Hermes 3 Llama 3.1 70B → the 70B is meaningfully smarter at a much higher VRAM cost.
- vs Dolphin 3.0 Mistral 24B → similar philosophy, different base model. Dolphin is bigger and built on an Apache-licensed base.
ollama pull hermes3:8b-llama3.1-q4_K_M
ollama run hermes3:8b-llama3.1-q4_K_M
Benchmark settings: Q4_K_M GGUF, 8192 ctx, llama.cpp/CUDA, RTX 4090
Why this rating
7.7/10 — the right pick when Llama 3.1 8B's alignment refusals get in the way. NousResearch's Hermes 3 strips the over-cautious layer while keeping instruction-following intact. Loses points only on niche use case.
Overview
NousResearch's Hermes fine-tune of Llama 3.1 8B. Stronger system-prompt adherence, JSON output, role-play, and agent steering than the base Llama.
Strengths
- Excellent system-prompt obedience
- JSON / structured output (see the sketch after this list)
- Agent-friendly
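A minimal sketch of the system-prompt and JSON strengths together, assuming a local Ollama server on the default port and the hermes3:8b tag; the sentiment/summary schema is a made-up example, not anything the model requires:

```python
# Sketch: system-prompt steering plus JSON-constrained output via a local Ollama server.
import json
import requests

# Hypothetical extraction schema used only for illustration.
system = (
    "You are a strict data extractor. Reply with JSON only, "
    'using exactly the keys "sentiment" and "summary".'
)

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hermes3:8b",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": "Review: the battery died after two days."},
        ],
        "format": "json",  # asks Ollama to constrain decoding to valid JSON
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(json.loads(resp.json()["message"]["content"]))
```

The format flag keeps the output parseable; the system prompt does the steering, which is where Hermes 3's adherence shows up.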
Weaknesses
- Inherits the Llama 3.1 Community License (not a permissive Apache/MIT license)
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 4.9 GB | 6 GB |
| Q8_0 | 8.5 GB | 10 GB |
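The VRAM column is roughly file size plus KV cache plus runtime overhead. A back-of-the-envelope sketch, assuming Llama 3.1 8B's architecture (32 layers, 8 KV heads, head dim 128), an fp16 KV cache, and an arbitrary 0.5 GB overhead figure; treat it as an estimate, not an allocator model:

```python
# Rough VRAM estimate: weights (file size) + fp16 KV cache + assumed fixed overhead.
N_LAYERS, N_KV_HEADS, HEAD_DIM = 32, 8, 128  # Llama 3.1 8B architecture
BYTES_FP16 = 2

def kv_cache_gb(ctx_len: int) -> float:
    # K and V per layer, per token: n_kv_heads * head_dim values each.
    per_token_bytes = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_FP16
    return per_token_bytes * ctx_len / 1024**3

def est_vram_gb(file_gb: float, ctx_len: int, overhead_gb: float = 0.5) -> float:
    return file_gb + kv_cache_gb(ctx_len) + overhead_gb

for quant, size_gb in [("Q4_K_M", 4.9), ("Q8_0", 8.5)]:
    print(f"{quant}: ~{est_vram_gb(size_gb, 8192):.1f} GB at 8192 ctx")
```

The KV cache grows linearly with context, so the same file needs noticeably more headroom if you push toward Llama 3.1's full 128K window.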
Get the model
Ollama
One-line install
ollama run hermes3:8b
Read our Ollama review →
HuggingFace
Original weights
Source repository with the full-precision weights; you will need to quantize them yourself (e.g. to GGUF) for local use.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Hermes 3 Llama 3.1 8B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Hermes 3 Llama 3.1 8B?
About 6 GB for the Q4_K_M quantization; Q8_0 needs roughly 10 GB.
Can I use Hermes 3 Llama 3.1 8B commercially?
Yes, subject to the Llama 3.1 Community License it inherits from the base model.
What's the context length of Hermes 3 Llama 3.1 8B?
128K tokens, inherited from the Llama 3.1 base.
How do I install Hermes 3 Llama 3.1 8B with Ollama?
ollama run hermes3:8b pulls the default quantization and starts an interactive session.
Source: huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.