qwen · 72B parameters · Commercial OK · Multimodal · Reviewed May 2026

Qwen 2.5-VL 72B

Qwen 2.5 vision-language flagship at 72B. Strong on document understanding + multi-image queries. Apache 2.0.

License: Apache 2.0 · Released Mar 10, 2025 · Context: 32,768 tokens

How to run it

Qwen 2.5-VL 72B is Alibaba's 72B vision-language model — a 72B dense text backbone plus the Qwen-VL vision encoder. Run it at Q4_K_M via llama.cpp's multimodal support (the text GGUF plus its mmproj vision projector file), or via Ollama if a VL tag is available. Q4_K_M file size: ~41 GB (text) + ~4-6 GB (vision).

Minimum VRAM is 48 GB — an RTX A6000 running Q3_K_M with vision, or text-only Q4_K_M. Recommended: an A100 80GB at AWQ-INT4 for full vision serving. Throughput: ~10-18 tok/s on the A6000 at Q4_K_M text-only; vision encoding adds ~2-4 s per image. Qwen-VL's vision encoder is well optimized, with lower VRAM overhead than InternVL's InternViT. Strengths: OCR, document understanding, and visual reasoning.

128K context is advertised (32,768 native), but the practical window for vision at Q4 on 80 GB is 4-8K. Qwen 2.5-VL shares its text backbone with Qwen 2.5 72B, so ecosystem support is broad. For production, use vLLM's multimodal pipeline with tensor parallelism if needed; a minimal sketch follows.
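
A minimal vLLM sketch, assuming a recent vLLM with Qwen 2.5-VL multimodal support. The AWQ repo id, the image path, and the two-GPU split are illustrative assumptions; verify the repo exists and adjust the parallelism to your hardware.

```python
# Minimal vLLM sketch for serving Qwen 2.5-VL with one image in the prompt.
# Assumes a recent vLLM with Qwen2.5-VL support; the AWQ repo id below is an
# assumption -- check Hugging Face before relying on it.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(
    model="Qwen/Qwen2.5-VL-72B-Instruct-AWQ",  # assumed repo id; verify on HF
    quantization="awq",
    tensor_parallel_size=2,            # split across two GPUs if one is too small
    max_model_len=8192,                # vision tokens count against this window
    limit_mm_per_prompt={"image": 2},  # cap images per request to bound KV growth
)

image = Image.open("invoice.png")  # placeholder path

# Qwen-VL expects its own vision placeholder tokens in the prompt; in real use,
# render this via the model's chat template instead of hand-writing it.
prompt = (
    "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
    "Summarize this document.<|im_end|>\n<|im_start|>assistant\n"
)

out = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=256, temperature=0.2),
)
print(out[0].outputs[0].text)
```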

Hardware guidance

Minimum: RTX A6000 48GB at Q3_K_M + vision (tight). Recommended: A100 80GB at AWQ-INT4.

VRAM math: 72B text at Q4_K_M ≈ 41 GB; Qwen-VL encoder 3-5 GB; KV cache at 8K ≈ 10 GB; total with vision ≈ 54-56 GB.

  • A6000 48GB: must drop to Q3_K_M (31 GB) + vision at 4K context.
  • A100 80GB: comfortable for Q4 + vision + 8K.
  • Dual RTX 4090 (2 × 24 GB = 48 GB): Q4_K_M text-only, or Q3_K_M + vision.
  • Mac Studio M4 Ultra 128GB: Q4_K_M + vision, 3-6 tok/s.
  • Cloud: A100 at $5-10/hr.

Qwen-VL's encoder is smaller than InternViT, so vision costs less VRAM here. AWQ-INT4 drops the text weights to ~36 GB, enabling 16K+ context on 80 GB.
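
A back-of-envelope check that turns the estimates above into a fits/doesn't-fit verdict. The constants are the planning numbers from this page, not measured values.

```python
# Rough VRAM budget for Qwen 2.5-VL 72B, using this page's estimates
# (planning numbers, not measurements).

WEIGHTS_GB = {"Q3_K_M": 31.0, "Q4_K_M": 41.0, "AWQ-INT4": 36.0}
VISION_ENCODER_GB = 4.0        # ~3-5 GB for the Qwen-VL encoder
KV_GB_PER_K_TOKENS = 10.0 / 8  # ~10 GB at 8K context => ~1.25 GB per 1K tokens

def vram_needed(quant: str, context_k: float, with_vision: bool = True) -> float:
    """Sum weights + KV cache (+ vision encoder) for a given setup, in GB."""
    total = WEIGHTS_GB[quant] + context_k * KV_GB_PER_K_TOKENS
    if with_vision:
        total += VISION_ENCODER_GB
    return total

for card, vram in [("RTX A6000 48GB", 48), ("A100 80GB", 80)]:
    for quant in ("Q3_K_M", "Q4_K_M", "AWQ-INT4"):
        need = vram_needed(quant, context_k=8)
        verdict = "fits" if need <= vram else "does NOT fit"
        print(f"{card}: {quant} + vision + 8K ≈ {need:.0f} GB -> {verdict}")
```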

What breaks first

  1. Vision tag availability in Ollama. Qwen 2.5-VL may not have an official Ollama tag. Community tags may exist but aren't verified. Test with raw llama.cpp if Ollama fails.
  2. Image preprocessing mismatch. Qwen-VL expects specific image preprocessing (resolution, normalization). Feeding raw images without preprocessing degrades vision quality. Use the model's image processor from HF transformers (see the sketch after this list).
  3. KV cache with vision. Vision tokens are prepended to the text prompt — each image adds 256-1024 tokens to context. Multiple images inflate context and KV cache proportionally. Budget for image tokens.
  4. Qwen 2.5 vs Qwen 3 VL. Qwen 2.5-VL and Qwen 3 VL use different vision encoders. Don't mix model files or chat templates between versions.
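
To address points 2 and 3, a hedged sketch using Qwen's own processor from HF transformers. The min/max pixel bounds mirror Qwen's published examples, and the file path is a placeholder.

```python
# Hedged sketch: preprocess an image with Qwen's own processor and count the
# tokens it will occupy. Assumes a transformers version with Qwen2.5-VL support.
from transformers import AutoProcessor
from PIL import Image

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-72B-Instruct",
    min_pixels=256 * 28 * 28,   # resolution floor, in 28 px patch units
    max_pixels=1280 * 28 * 28,  # resolution ceiling -- caps vision tokens per image
)

image = Image.open("scan.png")  # placeholder path
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is the invoice total?"},
]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(text=[text], images=[image], return_tensors="pt")
# Vision placeholders are expanded into real tokens here; everything in
# input_ids counts against the context window, so budget accordingly.
print("total prompt tokens:", inputs["input_ids"].shape[-1])
```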

Runtime recommendation

  • llama.cpp with its multimodal (mmproj) support for local vision; a minimal client call against a running llama-server is sketched below.
  • vLLM for production multimodal serving; Qwen-VL is supported in vLLM's multimodal pipeline.
  • Ollama for quick-start if a VL tag is available.
  • MLX (mlx-vlm) on Apple Silicon if Qwen-VL support has landed there.
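
If llama-server is running with the Qwen-VL GGUF plus its mmproj file, a client call over the OpenAI-compatible endpoint might look like this. The port is llama-server's default; whether your build accepts image_url content parts depends on its multimodal support, so verify on your version.

```python
# Hedged sketch: query a local llama-server over its OpenAI-compatible API
# with a base64-encoded image. Assumes the server was launched with the
# Qwen-VL GGUF and its mmproj file; the image path is a placeholder.
import base64
import json
import urllib.request

with open("page.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
            {"type": "text", "text": "Transcribe the table in this image."},
        ],
    }],
    "max_tokens": 512,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # llama-server default port
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```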

Common beginner mistakes

  • Mistake: using a standard text-only GGUF and expecting vision to work. Fix: vision requires a multimodal GGUF with the Qwen-VL encoder (mmproj) included. Download from bartowski or convert from the Hugging Face weights.
  • Mistake: ignoring image token count in the context budget. Fix: each image in Qwen-VL consumes 256-1024 tokens; subtract image tokens from your available context window.
  • Mistake: using the Llama 3.2 Vision chat template for Qwen-VL. Fix: different architectures, different templates. Use Qwen's chat template from tokenizer_config.json (see the sketch after this list).
  • Mistake: sending large images without preprocessing. Fix: Qwen-VL expects images within a specific resolution range. Use Qwen's image processor or resize manually before inference.
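
A minimal sketch of the template fix: render prompts through Qwen's own chat template (shipped in tokenizer_config.json) instead of hand-writing tags or borrowing Llama's. The messages are placeholders.

```python
# Hedged sketch: let the tokenizer render Qwen's chat template rather than
# hand-writing prompt tags. The resulting <|im_start|>/<|im_end|> format is
# Qwen's ChatML-style layout, not Llama's header-tag format.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct")

messages = [
    {"role": "system", "content": "You are a careful document analyst."},
    {"role": "user", "content": "List the line items in the attached invoice."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```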

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.

Family siblings (qwen-vl):
  • Qwen 2.5-VL 3B · 3B · Edge
  • Qwen 2.5-VL 7B · 7B · Consumer
  • Qwen 2-VL 7B · 7B · Consumer
  • Qwen 2.5-VL 72B · 72B · you are here

Distilled / fine-tuned from this:
  • Qwen 2.5-VL 7B · 7B · Consumer

Strengths

  • Frontier-tier multimodal
  • Apache 2.0
  • Strong document Q&A

Weaknesses

  • 48GB+ VRAM tier

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization   File size   VRAM required
AWQ-INT4       42.0 GB     48 GB

Get the model

HuggingFace

Original weights

huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct

Source repository — no pre-quantized files here; quantize directly from these weights.

Hardware that runs this

Cards with enough VRAM for at least one quantization of Qwen 2.5-VL 72B.

  • NVIDIA GB200 NVL72 · 13,824 GB
  • AMD Instinct MI355X · 288 GB
  • AMD Instinct MI325X · 256 GB
  • AMD Instinct MI300X · 192 GB
  • NVIDIA B200 · 192 GB
  • NVIDIA H100 NVL · 188 GB
  • NVIDIA H200 · 141 GB
  • AMD Instinct MI250X · 128 GB

Frequently asked

What's the minimum VRAM to run Qwen 2.5-VL 72B?

48GB of VRAM is enough to run Qwen 2.5-VL 72B at the AWQ-INT4 quantization (file size 42.0 GB). Higher-quality quantizations need more.

Can I use Qwen 2.5-VL 72B commercially?

Yes — Qwen 2.5-VL 72B ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.

What's the context length of Qwen 2.5-VL 72B?

Qwen 2.5-VL 72B supports a native context window of 32,768 tokens (32K).

Does Qwen 2.5-VL 72B support images?

Yes — Qwen 2.5-VL 72B is multimodal and accepts text + vision inputs. Vision support requires a runner that handles its image-conditioning architecture.

Source: huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Same tier
Models in the same parameter band as this one:
  • Llama 3.3 70B Instruct · llama · 70B · 9.1/10
  • DeepSeek R1 Distill Llama 70B · deepseek · 70B · 9.0/10
  • Qwen 2.5 72B Instruct · qwen · 72B · 9.0/10
  • Llama 3.1 70B Instruct · llama · 70B · 8.0/10

Step up
More capable — bigger memory footprint:
  • DeepSeek V4 Pro (1.6T MoE) · deepseek · 1600B · unrated
  • Qwen 3.5 235B-A17B (MoE) · qwen · 397B · unrated

Step down
Smaller — faster, runs on weaker hardware:
  • Qwen 3 30B-A3B · qwen · 30B · unrated
  • Gemma 4 31B Dense · gemma · 31B · unrated