qwen · 30B parameters · Commercial OK

Qwen 3 30B-A3B


License: Apache 2.0 · Released Apr 29, 2025 · Context: 131,072 tokens

Overview

Mid-tier MoE of the Qwen 3 lineup. It has 30B total parameters but only about 3B active per token, which gives it 70B-class quality at roughly 7B-class inference speed, and its Q4 weights fit on a single 24GB card. The sweet spot of the Qwen 3 family for prosumer hardware.

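A rough back-of-the-envelope sketch of why the MoE split matters: per-token decode compute scales with the *active* parameters, while weight memory scales with the *total* parameters. The constants below (2 FLOPs per parameter per token, ~0.6 bytes per weight at Q4) are common rules of thumb, not measured figures.

```python
# Back-of-the-envelope: why 30B-total / 3B-active is fast but not small.
# Rules of thumb (assumptions, not benchmarks):
#   - decode compute ~ 2 FLOPs per *active* parameter per token
#   - Q4_K_M weights ~ 0.6 bytes per *total* parameter

def decode_gflops_per_token(active_params: float) -> float:
    """Approximate forward-pass compute per generated token."""
    return 2 * active_params / 1e9

def q4_weight_gb(total_params: float) -> float:
    """Approximate size of the quantized weights on disk / in VRAM."""
    return total_params * 0.6 / 1e9

moe_total, moe_active = 30e9, 3e9   # Qwen 3 30B-A3B
dense_7b = 7e9                      # a dense 7B model, for comparison

print(f"30B-A3B compute/token : ~{decode_gflops_per_token(moe_active):.0f} GFLOPs")
print(f"dense 7B compute/token: ~{decode_gflops_per_token(dense_7b):.0f} GFLOPs")
print(f"30B-A3B Q4 weights    : ~{q4_weight_gb(moe_total):.0f} GB")
```
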
Strengths

  • ~3B active parameters per token, so inference is fast for the quality tier
  • Apache 2.0 license (commercial use allowed)
  • Thinking mode for step-by-step reasoning, toggleable per request (sketched below)

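The thinking-mode toggle can be illustrated through the chat template. A minimal sketch, assuming the `transformers` library and the `enable_thinking` flag exposed by the Qwen 3 chat template; only the tokenizer is loaded, so it runs without the full weights:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B")
messages = [{"role": "user", "content": "What is 17 * 23?"}]

# enable_thinking=True leaves room for a <think>...</think> reasoning block;
# False closes the block immediately so the model answers directly.
for thinking in (True, False):
    prompt = tok.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=thinking,
    )
    print(f"--- enable_thinking={thinking} ---\n{prompt}\n")
```
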
Weaknesses

  • All 30B of weights must be resident in memory: still ~18 GB at Q4
  • MoE routing can make quality uneven across task types

Quantization variants

Each quantization trades some model quality for a smaller file size and VRAM footprint. Q4_K_M is the most popular starting point.

| Quantization | File size | VRAM required |
|--------------|-----------|---------------|
| Q4_K_M       | 18.0 GB   | 22 GB         |
| Q8_0         | 32.0 GB   | 36 GB         |
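
The VRAM column is roughly the quantized file size plus headroom for the KV cache and runtime buffers. A hedged estimator of that headroom; the per-token KV-cache cost and overhead below are stated assumptions, not figures pulled from the model config:

```python
def estimate_vram_gb(
    weight_file_gb: float,
    context_tokens: int = 8192,
    kv_bytes_per_token: float = 96 * 1024,  # assumption: ~96 KB/token; depends on
                                            # layer count, KV heads and cache dtype
    runtime_overhead_gb: float = 1.5,       # assumption: CUDA context, activations
) -> float:
    """Rough VRAM requirement: weights + KV cache + fixed runtime overhead."""
    kv_cache_gb = context_tokens * kv_bytes_per_token / 1024**3
    return weight_file_gb + kv_cache_gb + runtime_overhead_gb

for quant, size_gb in [("Q4_K_M", 18.0), ("Q8_0", 32.0)]:
    print(f"{quant}: ~{estimate_vram_gb(size_gb):.1f} GB VRAM at 8K context")
```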

Get the model

Ollama

One-line install

`ollama run qwen3:30b`

Read our Ollama review →
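
Once the model is pulled, Ollama also exposes a local HTTP API (default port 11434). A minimal sketch using Python's `requests`; the prompt and options are illustrative:

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen3:30b",
        "messages": [
            {"role": "user", "content": "Explain what an MoE model is in two sentences."}
        ],
        "stream": False,               # return one JSON object instead of a stream
        "options": {"num_ctx": 8192},  # context window for this request
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```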

HuggingFace

Original weights

huggingface.co/Qwen/Qwen3-30B-A3B

Source repository with the original full-precision weights; you'll need to quantize them yourself (e.g. to GGUF) for local inference.
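
A minimal sketch of grabbing the original weights with `huggingface_hub`; converting and quantizing them to GGUF afterward is typically done with llama.cpp's conversion scripts (not shown here):

```python
from huggingface_hub import snapshot_download

# Downloads the full-precision safetensors shards (roughly 60 GB) into the local HF cache.
local_dir = snapshot_download(
    repo_id="Qwen/Qwen3-30B-A3B",
    allow_patterns=["*.safetensors", "*.json", "tokenizer*"],  # skip optional extras
)
print("Model files downloaded to:", local_dir)
```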

Hardware that runs this

Cards with enough VRAM for at least one quantization of Qwen 3 30B-A3B.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run Qwen 3 30B-A3B?

22GB of VRAM is enough to run Qwen 3 30B-A3B at the Q4_K_M quantization (file size 18.0 GB). Higher-quality quantizations need more.

Can I use Qwen 3 30B-A3B commercially?

Yes — Qwen 3 30B-A3B ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.

What's the context length of Qwen 3 30B-A3B?

Qwen 3 30B-A3B supports a context window of 131,072 tokens (128K).

How do I install Qwen 3 30B-A3B with Ollama?

Run `ollama pull qwen3:30b` to download, then `ollama run qwen3:30b` to start a chat session. The default quantization is Q4_K_M.

Source: huggingface.co/Qwen/Qwen3-30B-A3B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.