
Qwen 2.5 Coder 1.5B

License: Apache 2.0 · Released Nov 12, 2024 · Context: 32,768 tokens

Overview

The smallest model in the Qwen 2.5 Coder family, aimed at edge deployment and code autocomplete on integrated GPUs and Apple Silicon laptops.

Family & lineage

How this model relates to others in its lineage. Family members share architecture and training-data roots; parent/child edges record direct distillation or fine-tune relationships.

Strengths

  • Apache 2.0
  • Edge deployable for code completion

Weaknesses

  • Too small for agentic coding

Quantization variants

Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.

Quantization    File size    VRAM required
Q4_K_M          1.0 GB       2 GB
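
As a minimal sketch of what the Q4_K_M variant looks like in practice, the snippet below loads a GGUF file with llama-cpp-python. The file name is an assumption; substitute whatever your quantization step (or a community GGUF repo) produced.

    # Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
    # The GGUF file name is an assumption; point this at your own file.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen2.5-coder-1.5b-instruct-q4_k_m.gguf",  # hypothetical name
        n_ctx=32768,       # the full advertised context window
        n_gpu_layers=-1,   # offload every layer; Q4_K_M fits in ~2 GB of VRAM
    )
    result = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "Write a Python function that reverses a string."}],
        max_tokens=256,
    )
    print(result["choices"][0]["message"]["content"])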

Get the model

HuggingFace

Original weights

huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct

Source repository with the original weights; quantize them yourself to produce GGUF variants like the Q4_K_M listed above.
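
As one hedged sketch of that workflow: download the original safetensors checkpoint with huggingface_hub, then run it through a quantizer such as llama.cpp's convert_hf_to_gguf.py followed by llama-quantize. Only the download step is shown; the local directory name is an assumption.

    # Fetch the original weights before quantizing them locally.
    # The local_dir value is an assumption; any writable path works.
    from huggingface_hub import snapshot_download

    path = snapshot_download(
        repo_id="Qwen/Qwen2.5-Coder-1.5B-Instruct",
        local_dir="Qwen2.5-Coder-1.5B-Instruct",
    )
    print("weights downloaded to", path)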

Hardware that runs this

GPUs with enough VRAM to run at least one quantization of Qwen 2.5 Coder 1.5B.

Compare alternatives

Models worth comparing

Models in the same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.

  • Same tier: models in the same parameter band as this one.
  • Step up: more capable, with a bigger memory footprint.
  • Step down: smaller and faster; runs on weaker hardware. No models with verdicts in the next tier down yet.

Frequently asked

What's the minimum VRAM to run Qwen 2.5 Coder 1.5B?

2 GB of VRAM is enough to run Qwen 2.5 Coder 1.5B at the Q4_K_M quantization (1.0 GB file). Higher-quality quantizations need more.
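
To sanity-check that figure at longer contexts, a rough rule of thumb is VRAM ≈ GGUF file size + KV cache, and the KV cache grows linearly with context length. The sketch below applies the standard grouped-query-attention KV-cache formula; the architecture numbers plugged in are assumptions and should be verified against the model's config.json.

    # Back-of-envelope VRAM estimate: model file plus KV cache.
    # Architecture numbers are assumptions; verify them in config.json.
    def kv_cache_gib(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_elem=2):
        # 2x covers the separate K and V tensors; fp16 = 2 bytes per element
        return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_elem / 2**30

    file_size_gib = 1.0  # Q4_K_M, from the table above
    kv = kv_cache_gib(n_layers=28, n_kv_heads=2, head_dim=128, n_tokens=32768)
    print(f"~{file_size_gib + kv:.1f} GiB at the full 32,768-token context")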

Can I use Qwen 2.5 Coder 1.5B commercially?

Yes. Qwen 2.5 Coder 1.5B ships under the Apache 2.0 license, which permits commercial use. Always read the license text before deployment.

What's the context length of Qwen 2.5 Coder 1.5B?

Qwen 2.5 Coder 1.5B supports a context window of 32,768 tokens (about 33K).
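
A quick way to check whether a given prompt fits that window is to count tokens with the model's own tokenizer via transformers; the prompt below is just a placeholder.

    # Count tokens against the 32,768-token context window.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")
    prompt = "def fib(n): ..."  # placeholder; use your real prompt
    n_tokens = len(tok.encode(prompt))
    print(f"{n_tokens} of 32,768 tokens used")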

Source: huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct

Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.