Qwen 3 Coder 32B
Coding-specialized fine-tune of Qwen 3 32B. Curated coding corpus; outperforms Qwen 2.5 Coder 32B on SWE-Bench by ~6 points. Apache 2.0.
Family & lineage
How this model relates to others in its lineage: family members share architecture and training-data roots, while parent/child edges record direct distillation or fine-tune relationships.
Strengths
- Among the strongest open coding models in the 32B class as of late 2025
- Optional reasoning (thinking) toggle for complex debugging
- Apache 2.0
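The reasoning toggle means completions may include a chain-of-thought block before the final answer. A minimal post-processing sketch, assuming the model wraps its reasoning in `<think>…</think>` tags as other Qwen 3 releases do (an assumption, not stated on this page):

```python
import re

# Assumption: with reasoning enabled, the model emits its chain-of-thought
# inside <think>...</think> before the final answer, as other Qwen 3 builds do.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(text: str) -> str:
    """Return only the final answer, with any reasoning blocks removed."""
    return THINK_RE.sub("", text).strip()

raw = ("<think>The bug is an off-by-one in the loop bound.</think>"
       "Use range(n) instead of range(n + 1).")
print(strip_reasoning(raw))  # → Use range(n) instead of range(n + 1).
```

Keeping the reasoning block around (rather than discarding it) can still be useful for debugging why the model chose a fix.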
Weaknesses
- AWQ-INT4 fits 24 GB cards with little headroom, and reasoning-mode outputs grow the KV cache further
Quantization variants
Each quantization trades model quality for file size and VRAM. For GGUF builds, Q4_K_M is the most popular starting point; the only variant listed for this model is AWQ-INT4.
| Quantization | File size | VRAM required |
|---|---|---|
| AWQ-INT4 | 19.0 GB | 22 GB |
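The table's figures can be sanity-checked with back-of-the-envelope arithmetic. In the sketch below, the effective bits-per-weight (~4.75, covering quantization scales and zero-points) and the ~3 GB runtime overhead are assumptions chosen to match the listed numbers, not published specs:

```python
def quant_file_gb(params_billion: float, effective_bpw: float) -> float:
    """Approximate on-disk size of a quantized model in GB."""
    return params_billion * effective_bpw / 8

def fits(file_gb: float, overhead_gb: float, vram_gb: float) -> bool:
    """Whether weights plus runtime overhead (KV cache, activations) fit in VRAM."""
    return file_gb + overhead_gb <= vram_gb

# ~4.75 effective bits/weight assumed (includes quantization metadata)
size = quant_file_gb(32, 4.75)              # 19.0 GB, matching the table
print(round(size, 1), fits(size, 3.0, 24))  # 19.0 True — tight on a 24 GB card
```

Long contexts inflate the KV-cache term, which is why the 22 GB requirement leaves little margin on 24 GB hardware.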
Get the model
HuggingFace
Original weights
Source repository; no pre-quantized build is provided, so you quantize directly from these weights.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Qwen 3 Coder 32B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Qwen 3 Coder 32B?
Per the table above, the AWQ-INT4 build needs about 22 GB of VRAM, so a 24 GB card is the practical minimum.
Can I use Qwen 3 Coder 32B commercially?
Yes. The model is released under the Apache 2.0 license, which permits commercial use.
What's the context length of Qwen 3 Coder 32B?
Source: huggingface.co/Qwen/Qwen3-Coder-32B
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.