200B parameters · Commercial OK

GLM-5

Zhipu's GLM-5 currently leads the Open LLM Leaderboard 2026, with strong reasoning and bilingual English/Chinese capability.

License: GLM License · Released Feb 5, 2026 · Context: 200,000 tokens


Strengths

  • Top of leaderboards
  • Bilingual EN/ZH
  • Reasoning-tuned

Weaknesses

  • Less Western ecosystem support

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization | File size | VRAM required
Q4_K_M       | 120.0 GB  | 140 GB
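As a rough sanity check on the table, a quantized model file is approximately parameter count times average bits per weight. The sketch below assumes Q4_K_M averages about 4.8 bits per weight (an assumption; the exact figure varies with the mix of tensor types) together with the 200B parameter count stated above.

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate quantized file size in GB: parameters x average bits per weight.

    params_billions is in units of 1e9 parameters; since 1 GB here is 1e9 bytes,
    the two factors of 1e9 cancel and only a divide-by-8 (bits -> bytes) remains.
    """
    return params_billions * bits_per_weight / 8

# Assumed ~4.8 bits/weight for Q4_K_M; 200B parameters from the page header.
print(quantized_size_gb(200, 4.8))  # -> 120.0, matching the table
```

The VRAM figure is higher than the file size because the runtime also needs room for activations and the KV cache on top of the weights.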

Get the model

HuggingFace

Original weights

huggingface.co/THUDM/GLM-5

Source repository hosting the original weights only; no pre-quantized files are provided, so you must quantize them yourself.

Hardware that runs this

Cards with enough VRAM for at least one quantization of GLM-5.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Step up
More capable, but with a bigger memory footprint.
No reviewed models in the next tier up yet.

Frequently asked

What's the minimum VRAM to run GLM-5?

140 GB of VRAM is enough to run GLM-5 at the Q4_K_M quantization (file size 120.0 GB). Higher-quality quantizations need more.

Can I use GLM-5 commercially?

Yes — GLM-5 ships under the GLM License, which permits commercial use. Always read the license text before deployment.

What's the context length of GLM-5?

GLM-5 supports a context window of 200,000 tokens.
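Long contexts cost memory beyond the weights themselves: the KV cache grows linearly with sequence length. The sketch below shows the shape of that calculation using hypothetical architecture numbers; the layer count, KV-head count, and head dimension are placeholders, not GLM-5's published configuration.

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                seq_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache size in GB: 2 (K and V) x layers x kv_heads x head_dim
    x sequence length x bytes per element (2 for fp16)."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

# Placeholder dimensions, NOT GLM-5's actual config, at the full 200K context:
print(kv_cache_gb(layers=80, kv_heads=8, head_dim=128, seq_len=200_000))
```

Whatever the real dimensions are, this memory comes on top of the quantized weights, which is why filling a 200,000-token window needs substantially more VRAM than the minimum quoted above.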

Source: huggingface.co/THUDM/GLM-5

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.