EXAONE 3.5 32B
LG AI Research's flagship Korean-ecosystem model. Best-in-class Korean, competitive English. License blocks commercial use without LG agreement.
Overview
EXAONE 3.5 32B is LG AI Research's flagship Korean-ecosystem model: a 32B dense transformer optimized for Korean + English bilingual performance, with competitive general reasoning and a 32K context window. It runs comfortably on a single 24 GB consumer GPU at Q4_K_M. The license blocks commercial use without an agreement with LG.
How to run it
EXAONE 3.5 32B is LG AI Research's 32B dense model. EXAONE is LG's model family, optimized for Korean + English bilingual performance with competitive general reasoning.

Run it at Q4_K_M via Ollama (ollama pull exaone:32b) or llama.cpp with -ngl 999 -fa -c 8192. The Q4_K_M file is ~18 GB on disk. Minimum VRAM is 12-16 GB: a Q3_K_M quant fits a 12 GB card with KV offload, and an RTX 4080 (16GB) runs Q4_K_M with KV offload. Recommended: RTX 4090 24GB at Q4_K_M, which handles 16K context comfortably at ~35-55 tok/s. The EXAONE architecture is compatible with standard inference stacks (llama.cpp support verified).

EXAONE 3.5 is LG's most recent release: strong Korean performance, competitive English. It has less benchmark coverage than same-tier Qwen/Mistral models, but quality is solid. Use it for Korean language tasks, bilingual (KO+EN) applications, general reasoning, and coding. It is not as strong on non-Korean/English languages or niche domains. Context: 32K advertised; 16-32K is practical at Q4 on 24 GB. Note the license: EXAONE ships under LG's own model license, and commercial use requires an agreement with LG (see "What breaks first").
Hardware guidance
Minimum: RTX 3060 12GB at Q3_K_M with KV offload. Recommended: RTX 4090 24GB at Q4_K_M (16K context). Optimal: RTX 5090 32GB at Q4_K_M (32K+ context).
VRAM math: 32B dense at Q4_K_M ≈ 18 GB of weights, plus ~8 GB of KV cache at 16K context, for a total of ~26 GB.
- RTX 4090 / RTX 3090 24GB: Q4_K_M plus 8-12K context fits on-GPU; at 16K, offload the KV cache.
- RTX 4080 16GB: Q4_K_M plus ~2K context on-GPU.
- MacBook Pro M4 Pro 24GB+: Q4_K_M at 10-20 tok/s.
- Cloud: A10 24GB at Q4_K_M; AWQ-INT4 drops the weights to ~16 GB.
At 32B, EXAONE is hardware-efficient: one of the best Korean-capable models at consumer-GPU size.
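The VRAM math above is easy to redo for your own card and context length. A minimal sketch; the 4.5 bits/weight figure for Q4_K_M and the FP16 KV-cache formula are generic approximations, not EXAONE-specific numbers:

```python
def weights_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GB (decimal)."""
    return n_params * bits_per_weight / 8 / 1e9

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """FP16 K+V cache for a dense transformer: two tensors per layer."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem / 1e9

# Q4_K_M averages roughly 4.5 bits/weight, so a 32B model lands
# near the ~18 GB quoted above.
print(f"{weights_gb(32e9, 4.5):.1f} GB")  # → 18.0 GB
```

Add the two numbers plus a GB or two of runtime overhead; if the total exceeds your card's VRAM, shrink the context or offload the KV cache, exactly as the per-card notes above suggest.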
What breaks first
1. Korean-centric tokenizer. EXAONE's tokenizer is optimized for Korean + English. Other languages (Japanese, Chinese, European) produce higher token counts, reducing effective context. Test your language's token efficiency.
2. English quality vs Korean. English performance is competitive but below Korean. For English-only tasks, same-tier models like Qwen 3 32B may outperform.
3. LG licensing. EXAONE ships under LG's own model license, which blocks commercial use without an agreement, and LG may change terms between versions. Verify the license on the specific 3.5 release before any commercial deployment.
4. Ecosystem maturity. EXAONE has less community quant coverage than Qwen/Mistral, so pre-converted GGUFs may be harder to find. Check bartowski/TheBloke for availability.
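The token-efficiency point in item 1 is cheap to measure: compare characters per token across languages using the model's tokenizer. A sketch; the token counts below are placeholders, and in practice n_tokens would come from something like len(tokenizer(text).input_ids) with the tokenizer loaded from the EXAONE repo:

```python
def chars_per_token(text: str, n_tokens: int) -> float:
    """Higher is better: more source characters packed into each token.
    A lower ratio means the same 32K window holds less of that language."""
    return len(text) / n_tokens

# Placeholder counts -- substitute measured values from the real tokenizer.
korean = chars_per_token("오늘 서울 날씨가 참 좋네요.", n_tokens=8)
japanese = chars_per_token("今日は東京の天気がいいですね。", n_tokens=14)
```

If your target language comes out well below the Korean/English ratio, budget context accordingly or pick a model whose tokenizer covers it better.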
Runtime recommendation
llama.cpp (EXAONE support verified) or Ollama. Start with Q4_K_M, full GPU offload (-ngl 999), and flash attention (-fa); raise context only as VRAM allows.
Common beginner mistakes
- Mistake: Assuming EXAONE matches Qwen/Mistral on English-only benchmarks. Fix: EXAONE is Korean-optimized; English quality is good but not best-in-class. Benchmark your English task against same-tier models before committing.
- Mistake: Using EXAONE for languages other than Korean or English. Fix: The tokenizer and training distribution are KO+EN heavy, so other languages underperform. Test your specific language.
- Mistake: Expecting broad GGUF availability. Fix: EXAONE has less community coverage; you may need to convert from the Hugging Face weights yourself. Check bartowski's repo first.
- Mistake: Using the Llama chat template. Fix: EXAONE uses LG's own chat template; verify it in the Hugging Face tokenizer_config.json. The Llama template produces garbled Korean and awkward English.
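The chat-template mistake in the last item is worth spelling out. Below is a sketch of a formatter using the [|system|] / [|user|] / [|assistant|] / [|endofturn|] markers commonly shown for EXAONE; treat the marker names as an assumption and confirm them against the repo's tokenizer_config.json before use:

```python
def format_exaone(messages: list[dict]) -> str:
    """Render a chat as an EXAONE-style prompt (marker names assumed; verify
    against the model's tokenizer_config.json)."""
    parts = []
    for m in messages:
        if m["role"] == "system":
            parts.append(f"[|system|]{m['content']}[|endofturn|]")
        elif m["role"] == "user":
            parts.append(f"[|user|]{m['content']}")
        elif m["role"] == "assistant":
            parts.append(f"[|assistant|]{m['content']}[|endofturn|]")
    parts.append("[|assistant|]")  # leave the assistant turn open for generation
    return "\n".join(parts)

prompt = format_exaone([
    {"role": "system", "content": "You are a helpful bilingual assistant."},
    {"role": "user", "content": "서울 날씨 알려줘."},
])
```

In practice, prefer tokenizer.apply_chat_template(messages, add_generation_prompt=True), which reads the authoritative template straight from tokenizer_config.json rather than hard-coding markers.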
Family & lineage
How this model relates to others in its lineage. Family members share architecture and training-data roots; parent / children edges record direct distillation or fine-tune relationships.
Strengths
- Best open Korean-language model as of May 2026
- Competitive English alongside class-leading Korean
Weaknesses
- License blocks unrestricted commercial use
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | ~18 GB | 24 GB |
| AWQ-INT4 | 19.0 GB | 22 GB |
Get the model
HuggingFace
Original weights
Source repository with the original weights; you will need to quantize them yourself (e.g. convert to GGUF, then quantize).
Hardware that runs this
Cards with enough VRAM for at least one quantization of EXAONE 3.5 32B.
Frequently asked
What's the minimum VRAM to run EXAONE 3.5 32B?
About 12 GB: a Q3_K_M quant on an RTX 3060 12GB with KV offload. A 16 GB card runs Q4_K_M with KV offload; 24 GB is the comfortable recommendation.
Can I use EXAONE 3.5 32B commercially?
Not without an agreement with LG. The model ships under LG's own license, which blocks unrestricted commercial use; verify the terms on the specific release before deploying.
What's the context length of EXAONE 3.5 32B?
32K advertised. At Q4 on a 24 GB GPU, 16-32K is practical.
Source: huggingface.co/LGAI-EXAONE/EXAONE-3.5-32B-Instruct
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Related — keep moving
Verify EXAONE 3.5 32B runs on your specific hardware before committing money.