# InternLM 3 8B
## Overview
InternLM 3 is Shanghai AI Lab's open-research model line; this 8B instruct release is particularly strong on Chinese-language tasks.
## Strengths
- Strong performance on Chinese-language tasks
- Active research lineage at Shanghai AI Lab
## Weaknesses
- Commercial use is restricted under the model's license
## Quantization variants
Each quantization level trades model quality for a smaller file and lower VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 4.7 GB | 6 GB |
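As a back-of-the-envelope check, GGUF file size scales roughly as parameter count times bits per weight. This is a sketch: the ~4.7 effective bits per weight is an approximation inferred from the table above, not an official figure.

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters x bits per weight, in bytes, then GB."""
    return n_params * bits_per_weight / 8 / 1e9

# 8B parameters at ~4.7 effective bits/weight (approximate rate for Q4_K_M)
size = gguf_size_gb(8e9, 4.7)
print(f"{size:.1f} GB")  # close to the 4.7 GB listed in the table
```

The same formula lets you estimate other quantization levels (e.g., ~8 bits/weight for Q8_0 would roughly double the file size).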
## Get the model

- **HuggingFace** — original weights. The source repository hosts full-precision weights only, so you must quantize them yourself (e.g., with llama.cpp's `convert_hf_to_gguf.py` and `llama-quantize` tools) to produce a GGUF build such as Q4_K_M.
## Hardware that runs this
Look for cards with enough VRAM for at least one quantization of InternLM 3 8B; per the table above, 6 GB covers the Q4_K_M build.
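To sanity-check a specific card, compare its VRAM against the requirement from the table plus some headroom for the KV cache and runtime overhead. A minimal sketch; the 6 GB figure comes from the table above, while the 0.5 GB headroom default is an assumption, not a measured value:

```python
def fits(vram_gb: float, quant_vram_gb: float, headroom_gb: float = 0.5) -> bool:
    """True if the card can hold the quantized weights plus runtime headroom."""
    return vram_gb >= quant_vram_gb + headroom_gb

print(fits(8.0, 6.0))  # 8 GB card vs. Q4_K_M's 6 GB requirement
print(fits(6.0, 6.0))  # exactly 6 GB leaves no headroom
```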
## Frequently asked

**What's the minimum VRAM to run InternLM 3 8B?**
Roughly 6 GB, enough for the Q4_K_M quantization listed above.

**Can I use InternLM 3 8B commercially?**
Commercial use is restricted; check the current license terms on the model's HuggingFace page before deploying.

**What's the context length of InternLM 3 8B?**
The supported context window is listed on the model card at the source repository linked below.
Source: huggingface.co/internlm/internlm3-8b-instruct
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
## Related
Before committing money, verify that InternLM 3 8B actually runs on your specific hardware.