Nemotron 3 Nano (30B-A3B)
Overview
Nemotron 3 Nano is NVIDIA's hybrid Mamba-2 + Transformer mixture-of-experts (MoE) model for on-device agents, with 30B total parameters and 3B active per token. It offers a 1M-token context window, reasoning ON/OFF modes, and 4× faster inference than the previous Nemotron Nano.
Strengths
- 1M-token context window (see the sketch after this list for actually enabling long contexts)
- Reasoning toggle (ON/OFF modes)
- Hybrid Mamba-2 + Transformer MoE architecture
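Note that most runners do not use the full 1M-token window by default; Ollama, for example, starts with a much smaller context. A minimal sketch of raising it, assuming the `nemotron3:nano` tag from the install section below and an Ollama version that supports these settings; how far you can push the window depends on VRAM, since the cache grows with context:

```bash
# Option 1: set a larger default context length for the Ollama server.
OLLAMA_CONTEXT_LENGTH=131072 ollama serve

# Option 2: per session, inside an `ollama run nemotron3:nano` REPL:
#   /set parameter num_ctx 131072
```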
Weaknesses
- Newer architecture — runner support varies
Quantization variants
Each quantization trades some model quality for a smaller file and a lower VRAM requirement. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 18.0 GB | 22 GB |
| Q8_0 | 32.0 GB | 36 GB |
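To pick between the two, check how much VRAM your GPU actually has and compare it against the table. A minimal sketch, assuming an NVIDIA card with drivers installed:

```bash
# Print the GPU name and total VRAM, then compare against the table:
# Q4_K_M needs ~22 GB, Q8_0 needs ~36 GB.
nvidia-smi --query-gpu=name,memory.total --format=csv
```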
Get the model
Ollama
One-line install
ollama run nemotron3:nano
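Because the model ships with reasoning ON/OFF modes, you can toggle thinking at run time. A sketch assuming a recent Ollama build with thinking support and the `nemotron3:nano` tag above:

```bash
# Disable reasoning for faster, terser replies; drop the flag to leave it on.
ollama run nemotron3:nano --think=false "Summarize this log file in one line."

# Inside an interactive session, flip the mode with:
#   /set think      enable reasoning
#   /set nothink    disable reasoning
```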
HuggingFace
Original weights
Source repository with the original weights; you convert and quantize them yourself for local runners.
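A minimal sketch of that workflow using llama.cpp's tooling and the repo id from the source link below. This assumes your llama.cpp build already supports this hybrid Mamba architecture (per the weakness noted above, verify first); the file names are placeholders:

```bash
# Fetch the original weights from Hugging Face.
huggingface-cli download nvidia/Nemotron-3-Nano --local-dir nemotron-3-nano

# Convert to GGUF, then quantize to Q4_K_M (the starting point suggested above).
# convert_hf_to_gguf.py and llama-quantize both ship with llama.cpp.
python convert_hf_to_gguf.py nemotron-3-nano --outfile nemotron-3-nano-f16.gguf
llama-quantize nemotron-3-nano-f16.gguf nemotron-3-nano-Q4_K_M.gguf Q4_K_M
```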
Hardware that runs this
Cards with enough VRAM for at least one quantization of Nemotron 3 Nano (30B-A3B).
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Nemotron 3 Nano (30B-A3B)?
About 22 GB, enough for the Q4_K_M quantization; Q8_0 needs roughly 36 GB (see the table above).
Can I use Nemotron 3 Nano (30B-A3B) commercially?
Licensing is set by NVIDIA, not by the runner you use; check the license on the model's Hugging Face page (linked below) before commercial deployment.
What's the context length of Nemotron 3 Nano (30B-A3B)?
1M tokens, per the overview above. Note that most runners default to a much smaller window.
How do I install Nemotron 3 Nano (30B-A3B) with Ollama?
Run `ollama run nemotron3:nano`; Ollama downloads the model on first use.
Source: huggingface.co/nvidia/Nemotron-3-Nano
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.