GLM-5 Pro
Overview
Zhipu's GLM-5 flagship: a mixture-of-experts (MoE) model with 144B total parameters and 16B active per token. Strong on Chinese-language tasks and competitive on English at the workstation-cluster tier.
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tuning relationships.
Strengths
- Strong CJK (Chinese, Japanese, Korean) performance
- MoE efficiency: only 16B of 144B parameters are active per token (see the sketch below)
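The efficiency claim comes from mixture-of-experts routing: each token activates only a few experts out of the full set, so most parameters sit idle on any given forward pass. Below is a minimal top-k gating sketch in Python; the expert counts and sizes are illustrative assumptions, not GLM-5 Pro's actual router configuration.

```python
import numpy as np

# Hypothetical top-k MoE router sketch. All sizes are illustrative
# assumptions, not GLM-5 Pro's actual configuration.
NUM_EXPERTS = 64   # experts in the layer
TOP_K = 8          # experts activated per token
HIDDEN = 4096      # token hidden size

rng = np.random.default_rng(0)
token = rng.standard_normal(HIDDEN)
gate_w = rng.standard_normal((NUM_EXPERTS, HIDDEN))

# The router scores every expert but keeps only the top-k, so only a
# fraction of the layer's parameters run for this token.
scores = gate_w @ token
top = np.argsort(scores)[-TOP_K:]
mix = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over the k winners

print(f"active experts: {sorted(top.tolist())}")
print(f"mixture weights sum to {mix.sum():.2f}")
print(f"active fraction: {TOP_K / NUM_EXPERTS:.1%}")  # 12.5% here; ~11% for 16B/144B
```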
Weaknesses
- Restricted commercial license
- Multi-GPU only: the sole listed quantization needs 96 GB of VRAM
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point for GGUF builds, though only an AWQ-INT4 variant is currently listed for this model.
| Quantization | File size | VRAM required |
|---|---|---|
| AWQ-INT4 | 82.0 GB | 96 GB |
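A useful rule of thumb for reading this table: weight memory is roughly parameters × bits-per-weight / 8 bytes, and serving needs headroom beyond that for the KV cache and activations. A back-of-the-envelope sketch in Python; the 144B figure comes from this page, while the 4.5 effective bits (INT4 weights plus quantization scales) is an assumption:

```python
def est_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weight-only memory estimate in GB (ignores KV cache and activations)."""
    return params_billion * bits_per_weight / 8  # 1e9 params * bits / 8 bytes = GB

# 144B total parameters (from this page) at an assumed ~4.5 effective
# bits (INT4 weights plus scales/zero-points) lands near the 82 GB file
# size above; the 96 GB VRAM requirement adds headroom for the KV cache
# and activations.
print(f"{est_weight_gb(144, 4.5):.1f} GB")  # -> 81.0 GB
```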
Get the model
HuggingFace
Original weights
Source repository; you quantize the weights yourself.
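If you plan to quantize locally, the first step is pulling the repository. A minimal sketch using huggingface_hub's snapshot_download; the repo id is inferred from the source link at the bottom of this page, and the quantization step itself (AutoAWQ, llama.cpp, etc.) is left out:

```python
from huggingface_hub import snapshot_download

# Pull the original weights; the repo id is taken from this page's
# source link. Quantize afterwards with your tool of choice.
local_dir = snapshot_download(repo_id="THUDM/GLM-5-Pro")
print(f"weights downloaded to {local_dir}")
```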
Hardware that runs this
GPUs, or multi-GPU configurations, with enough combined VRAM for at least one quantization of GLM-5 Pro.
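A quick way to sanity-check a rig against the table above is to sum per-card VRAM and compare it to the quantization's requirement, leaving some margin for sharding overhead. A small sketch; the 10% margin is an assumption, not a measured figure:

```python
def fits(gpu_vram_gb: list[float], required_gb: float, margin: float = 0.10) -> bool:
    """True if combined VRAM covers the requirement plus a sharding margin.

    The 10% margin is an assumed buffer for tensor-parallel overhead.
    """
    return sum(gpu_vram_gb) >= required_gb * (1 + margin)

# Against the 96 GB AWQ-INT4 requirement listed above:
print(fits([24, 24, 24, 24], 96))  # False: 96 GB total leaves no headroom
print(fits([48, 48, 24], 96))      # True: 120 GB total
```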
Models worth comparing
Same parameter band, plus what sits one tier above and below, so you can decide what actually fits your hardware.
Frequently asked
What's the minimum VRAM to run GLM-5 Pro?
96 GB, for the only listed quantization (AWQ-INT4), typically spread across multiple GPUs.
Can I use GLM-5 Pro commercially?
The license restricts commercial use; review the terms in the source repository before deploying.
What's the context length of GLM-5 Pro?
Source: huggingface.co/THUDM/GLM-5-Pro
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.