OpenCoder 8B
Fully-open coding model — training data + recipes published. Apache 2.0 with verifiable open-data lineage. The right pick for academic / reproducibility-sensitive work.
Overview
OpenCoder 8B is a fully open coding model: not just the weights but the training data and training recipes are published, under an Apache 2.0 license with a verifiable open-data lineage. That makes it the right pick for academic work and any project where reproducibility matters.
Strengths
- Apache 2.0
- Fully-open training data + recipes
- Reproducibility-friendly
Weaknesses
- Trails Qwen Coder on raw HumanEval
Quantization variants
Each quantization level trades some model quality for a smaller file size and a lower VRAM requirement. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 4.7 GB | 6 GB |
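The VRAM figure above is roughly the quantized file size plus overhead for the KV cache and runtime buffers. A minimal rule-of-thumb estimator, where the 1.3 GB overhead is an illustrative assumption (not a measured value) chosen to match the table:

```python
def estimate_vram_gb(file_size_gb: float, overhead_gb: float = 1.3) -> float:
    """Rough VRAM estimate for running a quantized model.

    The quantized weights are loaded whole, plus an assumed fixed
    overhead for the KV cache and runtime buffers. The 1.3 GB default
    is an illustrative assumption, not a measured value.
    """
    return round(file_size_gb + overhead_gb, 1)

# Q4_K_M from the table above: 4.7 GB file
print(estimate_vram_gb(4.7))  # → 6.0
```

Real usage varies with context length and runtime, so treat this as a floor, not a guarantee.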
Get the model
HuggingFace
Original weights
Source repository with full-precision weights only — no prebuilt quantizations, so you must quantize them yourself.
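Since the repository ships only full-precision weights, producing a GGUF quantization yourself is a short pipeline with llama.cpp. A sketch, assuming `huggingface-cli` is installed and a llama.cpp checkout has been built locally (file names here are illustrative, and step 1 downloads several gigabytes):

```shell
# 1) Download the original weights from HuggingFace (several GB)
huggingface-cli download infly/OpenCoder-8B-Instruct --local-dir OpenCoder-8B-Instruct

# 2) Convert the safetensors checkpoint to a full-precision GGUF
#    (convert_hf_to_gguf.py ships in the llama.cpp repository)
python convert_hf_to_gguf.py OpenCoder-8B-Instruct --outfile opencoder-8b-f16.gguf

# 3) Quantize to Q4_K_M (~4.7 GB per the table above)
./llama-quantize opencoder-8b-f16.gguf opencoder-8b-q4_k_m.gguf Q4_K_M
```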
Hardware that runs this
Cards with enough VRAM for at least one quantization of OpenCoder 8B.
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run OpenCoder 8B?
Roughly 6 GB, using the Q4_K_M quantization (a 4.7 GB file).
Can I use OpenCoder 8B commercially?
Yes — it is released under the Apache 2.0 license, which permits commercial use.
What's the context length of OpenCoder 8B?
Source: huggingface.co/infly/OpenCoder-8B-Instruct
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.