Mistral Large 2 (123B)
Overview
Mistral's flagship dense model: 123 billion parameters with a 128K-token context window. The weights are openly downloadable, but the license is research and non-commercial only; commercial deployment requires a separate license from Mistral.
Strengths
- Top-tier dense quality
- 128K context
- Strong multilingual performance
Weaknesses
- Non-commercial (research-only) license
- Workstation-class hardware required: roughly 88 GB of VRAM even at Q4_K_M
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 73.0 GB | 88 GB |
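As a rough sanity check on the table, the sketch below estimates the file size from parameter count and average bits per weight, then adds headroom for KV cache and runtime buffers. The ~4.85 bits/weight figure for Q4_K_M and the 20% serving overhead are illustrative assumptions, not measured values.

```python
# Back-of-envelope sizing for a 123B dense model at Q4_K_M.
# Assumptions (not measured): Q4_K_M averages ~4.85 bits/weight,
# and serving needs ~20% extra VRAM for KV cache, activations, and buffers.
params = 123e9
bits_per_weight = 4.85

file_size_gb = params * bits_per_weight / 8 / 1e9   # weights on disk
vram_gb = file_size_gb * 1.2                         # crude serving headroom

print(f"estimated file size: {file_size_gb:.0f} GB")  # ~75 GB (table lists 73.0 GB)
print(f"estimated VRAM:      {vram_gb:.0f} GB")       # ~89 GB (table lists 88 GB)
```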
Get the model
Ollama
One-line install
ollama run mistral-large:123b
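Once the pull finishes, Ollama also exposes the model through its local HTTP API. A minimal sketch, assuming a default Ollama install listening on localhost:11434 and the mistral-large:123b tag shown above:

```python
import json
import urllib.request

# One non-streaming completion request against a local Ollama server.
payload = {
    "model": "mistral-large:123b",
    "prompt": "Summarize the Mistral Research License in one sentence.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```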
HuggingFace
Original weights
Source repository with the original (unquantized) safetensors weights; to run locally you will need to quantize them yourself, for example to GGUF.
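If you want the original weights, the sketch below downloads the source repository with huggingface_hub. The target directory name is hypothetical, the repo is license-gated so you may need to log in and accept the terms first, and the GGUF conversion and quantization steps (typically done with llama.cpp's tooling) are not shown.

```python
from huggingface_hub import snapshot_download

# Fetch the original weights from the source repository listed below.
# Note: the repo is gated, so `huggingface-cli login` and accepting the
# license may be required; bf16 safetensors for 123B parameters are on
# the order of 250 GB of disk.
local_dir = snapshot_download(
    repo_id="mistralai/Mistral-Large-Instruct-2407",
    local_dir="Mistral-Large-Instruct-2407",   # hypothetical target directory
)
print(f"weights downloaded to: {local_dir}")

# From here, converting to GGUF and quantizing (e.g. to Q4_K_M) is usually
# done with llama.cpp's convert/quantize scripts.
```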
Hardware that runs this
Cards with enough VRAM for at least one quantization of Mistral Large 2 (123B).
Models worth comparing
Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run Mistral Large 2 (123B)?
About 88 GB, which is what the Q4_K_M quantization (73.0 GB file) needs once loaded.
Can I use Mistral Large 2 (123B) commercially?
No. The weights are released for research and non-commercial use only; commercial use requires a separate license from Mistral.
What's the context length of Mistral Large 2 (123B)?
128K tokens.
How do I install Mistral Large 2 (123B) with Ollama?
Run `ollama run mistral-large:123b`; Ollama pulls the quantized weights and opens an interactive session.
Source: huggingface.co/mistralai/Mistral-Large-Instruct-2407
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.