qwen · 8B parameters · Commercial OK

Qwen 3 8B

Qwen 3 at the 8B scale. Direct head-to-head against Llama 3.1 8B on most benchmarks; usually wins on coding and structured output.

License: Apache 2.0 · Released Apr 29, 2025 · Context: 131,072 tokens
Our verdict
By Fredoline Eruo · Last verified May 6, 2026
8.5/10
Positioning

Qwen 3 8B introduces a hybrid "thinking" / "non-thinking" toggle into the 7B class. In non-thinking mode it sits on a tier with Qwen 2.5 7B; in thinking mode it produces visible chain-of-thought and lifts hard-task performance closer to 14B-class models, at the cost of latency.
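
The toggle itself is a plain-text soft switch: appending /think or /no_think to the end of a user message flips the mode for that turn, through any interface that passes your text to the model unmodified. A minimal sketch against a local Ollama server (assuming the default port, 11434):

curl -s http://localhost:11434/api/chat -d '{
  "model": "qwen3:8b",
  "messages": [
    {"role": "user", "content": "Is 3847 prime? /think"}
  ],
  "stream": false
}'
# Swap /think for /no_think to suppress the visible chain-of-thought.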

Strengths
  • Hybrid reasoning toggle: /think and /no_think per turn let you pay for reasoning only when needed.
  • Improved tool use over Qwen 2.5 — the function-call format is more standardized.
  • Strong multilingual carryover from the 2.5 generation.
Limitations
  • Thinking-mode output is verbose — tokens per answer roughly double, so replies take about twice as long even though decode speed barely drops.
  • Prompt-injection vectors specific to the /think toggle haven't been fully audited.
  • Ecosystem maturity trails Llama 3.1 8B: fewer community fine-tunes and integrations.
Real-world performance on RTX 4090
  • Q4_K_M (5.0 GB): 95–115 tok/s decode (non-thinking); 90–110 tok/s thinking but 2× output
  • Q5_K_M (5.9 GB): 85–100 tok/s
  • Q8_0 (8.4 GB): 65–82 tok/s
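
These figures come from llama.cpp directly (see the settings line below). For a quick sanity check of decode speed on your own card through Ollama's stack, the run command's --verbose flag prints token rates after each reply:

ollama run qwen3:8b "Summarize Hamlet in three sentences. /no_think" --verbose
# The "eval rate" line in the printed stats is decode tok/s.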
Should you run this locally?

Yes, for users who want the best 8B-class capability and are willing to use thinking mode selectively for hard prompts. No, for users who don't need reasoning — Qwen 2.5 7B is simpler and runs at similar speed.

How it compares
  • vs Qwen 2.5 7B → Qwen 3 8B with thinking mode wins on reasoning; without thinking, near-equal. Pick Qwen 3 if reasoning matters.
  • vs Llama 3.1 8B → Qwen 3 8B wins on raw capability; Llama wins on instruction polish + ecosystem maturity.
  • vs QwQ 32B → QwQ is the dedicated reasoning specialist at 32B; Qwen 3 8B's thinking mode is a poor man's QwQ at lighter VRAM.
  • vs Phi-4 14B → Phi-4 has cleaner reasoning at higher VRAM; Qwen 3 8B fits in less memory.
Run this yourself
ollama pull qwen3:8b
ollama run qwen3:8b
# Toggle reasoning per turn:
#   /think    — enable chain-of-thought
#   /no_think — disable
Settings: Q4_K_M GGUF, 8192 ctx, llama.cpp/CUDA, RTX 4090
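
Ollama's default context window is smaller than the 8192 tokens used above; to match it, raise num_ctx inside the interactive session (you can also pass it per-request in the API's options field):

ollama run qwen3:8b
# Inside the session:
/set parameter num_ctx 8192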
Why this rating

8.5/10 — Qwen 3's hybrid reasoning mode in an 8B body. Strong as a 7B-class chat model, with a "thinking" mode that pushes it materially beyond Qwen 2.5 7B on reasoning tasks. Loses points only on ecosystem maturity vs Llama 3.1 8B.

Strengths

  • Best 8B coder
  • Apache 2.0
  • Thinking mode

Weaknesses

  • More verbose with thinking enabled

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          4.8 GB       6 GB
Q8_0            8.2 GB       10 GB
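
To pull a specific quantization rather than Ollama's default, the library publishes per-quant tags. The tag strings below follow Ollama's usual naming convention but are unverified for this model; check the qwen3 tags page on ollama.com if a pull fails:

ollama pull qwen3:8b-q4_K_M   # assumed tag: 4-bit, ~6 GB VRAM
ollama pull qwen3:8b-q8_0     # assumed tag: 8-bit, ~10 GB VRAM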

Get the model

Ollama

One-line install

ollama run qwen3:8b

Read our Ollama review →

HuggingFace

Original weights

huggingface.co/Qwen/Qwen3-8B

Source repository — original weights only; you'll need to quantize them yourself to run locally (see the sketch below).
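
A rough sketch of that conversion path with llama.cpp, assuming a llama.cpp checkout with its Python requirements installed (the script and binary names match current llama.cpp releases but have been renamed before and may drift):

# Download the original weights, convert to GGUF at f16, then quantize down.
huggingface-cli download Qwen/Qwen3-8B --local-dir Qwen3-8B
python convert_hf_to_gguf.py Qwen3-8B --outfile qwen3-8b-f16.gguf
./llama-quantize qwen3-8b-f16.gguf qwen3-8b-Q4_K_M.gguf Q4_K_M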

Hardware that runs this

Cards with enough VRAM for at least one quantization of Qwen 3 8B.

Compare alternatives

Models worth comparing

Same parameter band, plus what's one tier above and below — so you can decide what actually fits your hardware.

Frequently asked

What's the minimum VRAM to run Qwen 3 8B?

6 GB of VRAM is enough to run Qwen 3 8B at the Q4_K_M quantization (file size 4.8 GB). Higher-quality quantizations need more.

Can I use Qwen 3 8B commercially?

Yes — Qwen 3 8B ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.

What's the context length of Qwen 3 8B?

Qwen 3 8B supports a context window of 131,072 tokens (128K).

How do I install Qwen 3 8B with Ollama?

Run `ollama pull qwen3:8b` to download, then `ollama run qwen3:8b` to start a chat session. The default quantization is Q4_K_M.

Source: huggingface.co/Qwen/Qwen3-8B

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.