1B parameters · Commercial OK
Gemma 3 1B
Smallest text-only Gemma 3 for phones and IoT.
License: Gemma Terms of Use · Released Mar 12, 2025 · Context: 32,768 tokens
Overview
The smallest text-only member of the Gemma 3 family, sized for on-device use on phones and IoT hardware.
Strengths
- Runs on phone-class hardware
- Fast text-only inference
Weaknesses
- No vision support (text-only)
- Limited reasoning ability
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 0.7 GB | 2 GB |
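As a rough sketch of how the VRAM column relates to file size: total memory is approximately the quantized weights plus KV cache and runtime overhead. The 1 GB overhead figure below is an illustrative assumption, not a measured value.

```python
def fits_in_vram(file_size_gb: float, vram_gb: float, overhead_gb: float = 1.0) -> bool:
    """Rough check: quantized weights plus an assumed ~1 GB of
    KV-cache/runtime overhead must fit in available VRAM."""
    return file_size_gb + overhead_gb <= vram_gb

# Q4_K_M weights are 0.7 GB (from the table above).
print(fits_in_vram(0.7, 2.0))  # 2 GB card → True
print(fits_in_vram(0.7, 1.0))  # 1 GB card → False
```

Higher-quality quantizations (larger files) push the total up accordingly, which is why the table lists a VRAM requirement per variant.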
Get the model
Ollama
One-line install
ollama run gemma3:1b

HuggingFace
Original weights
huggingface.co/google/gemma-3-1b-it
Source repository: you'll need to quantize the weights yourself.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Gemma 3 1B.
Compare alternatives
Models worth comparing
Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.
Same tier
Models in the same parameter band as this one
Step up
More capable — bigger memory footprint
Step down
Smaller — faster, runs on weaker hardware
No models with a verdict in the next tier down yet.
Frequently asked
What's the minimum VRAM to run Gemma 3 1B?
2GB of VRAM is enough to run Gemma 3 1B at the Q4_K_M quantization (file size 0.7 GB). Higher-quality quantizations need more.
Can I use Gemma 3 1B commercially?
Yes — Gemma 3 1B ships under the Gemma Terms of Use, which permits commercial use. Always read the license text before deployment.
What's the context length of Gemma 3 1B?
Gemma 3 1B supports a context window of 32,768 tokens (32K).
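To make the 32,768-token window concrete, here is a small budgeting sketch: the space left for a prompt is the fixed window minus whatever you reserve for the model's reply. The 1,024-token reserve is an arbitrary example value.

```python
CONTEXT_WINDOW = 32_768  # Gemma 3 1B context length in tokens

def max_prompt_tokens(reserved_for_output: int) -> int:
    """Tokens available for the prompt after reserving room for the reply."""
    return CONTEXT_WINDOW - reserved_for_output

print(max_prompt_tokens(1_024))  # reserve 1K tokens for output → 31744
```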
How do I install Gemma 3 1B with Ollama?
Run `ollama pull gemma3:1b` to download, then `ollama run gemma3:1b` to start a chat session. The default quantization is Q4_K_M.
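Beyond the CLI, Ollama also serves a local HTTP API (by default at `http://localhost:11434/api/generate`) that takes a JSON body with `model` and `prompt` fields. A minimal sketch, assuming an Ollama server is already running with `gemma3:1b` pulled; the `build_request` and `generate` helpers are illustrative, not part of any library.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_request(prompt: str, model: str = "gemma3:1b") -> dict:
    """Illustrative helper: JSON body for one non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    # Requires a local Ollama server (`ollama serve` or the desktop app).
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(build_request("Why is the sky blue?"))
```

With `stream` set to `False` the server returns the whole completion in one JSON object; omit it (or set it to `True`) to receive incremental chunks instead.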
Source: huggingface.co/google/gemma-3-1b-it
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.