Phi family · 4.2B parameters · Commercial use OK · Multimodal
Phi-3.5 Vision
Multimodal Phi-3.5. Document and chart understanding at edge size. MIT licensed.
License: MIT · Released Aug 20, 2024 · Context: 131,072 tokens
Overview
Phi-3.5 Vision is Microsoft's 4.2B-parameter multimodal member of the Phi-3.5 family, tuned for document, chart, and general image understanding at an edge-deployable size. It ships under the MIT license with a 131,072-token context window; the main caveat is patchy runner support outside Hugging Face Transformers.
Strengths
- MIT license
- Vision capability in a ~4B-parameter footprint
Weaknesses
- Patchy runner support outside Hugging Face Transformers
Quantization variants
Each quantization trades model quality for file size and VRAM. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 2.5 GB | 4 GB |
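The VRAM column is more than the weight file: the runner also needs room for the KV cache, vision-encoder activations, and framework overhead. A back-of-envelope sketch of that arithmetic is below; the individual overhead figures are illustrative assumptions, not published measurements.

```python
# Rough VRAM estimate for a quantized model.
# Every overhead term below is an assumed ballpark, not a measured value.

def estimate_vram_gb(
    weight_file_gb: float,
    kv_cache_gb: float = 0.5,    # assumed: KV cache at a few thousand tokens
    activations_gb: float = 0.5, # assumed: vision encoder + decoder activations
    runtime_gb: float = 0.5,     # assumed: CUDA context and framework buffers
) -> float:
    """The weights must fit in VRAM, plus cache/activation/runtime headroom."""
    return weight_file_gb + kv_cache_gb + activations_gb + runtime_gb

# Q4_K_M from the table: a 2.5 GB file lands at roughly 4 GB of VRAM.
print(f"{estimate_vram_gb(2.5):.1f} GB")  # -> 4.0 GB
```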
Get the model
Hugging Face
Original weights
huggingface.co/microsoft/Phi-3.5-vision-instruct
Source repository with the original weights; no official quantized builds are published, so you quantize them yourself.
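Because only the original weights are published, the lowest-friction path to a ~4-bit footprint is on-the-fly quantization in Transformers via bitsandbytes, rather than producing GGUF files. A minimal sketch, assuming a CUDA machine with transformers, accelerate, and bitsandbytes installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

model_id = "microsoft/Phi-3.5-vision-instruct"

# Quantize to 4-bit NF4 at load time instead of converting to GGUF.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # the repo ships custom modeling code
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
```

Note that bitsandbytes NF4 is not the same scheme as GGUF Q4_K_M, so file size and quality will differ somewhat from the table above.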
Hardware that runs this
Cards with enough VRAM for at least one quantization of Phi-3.5 Vision.
Compare alternatives
Models worth comparing
The same parameter band, plus one tier above and one below, so you can decide what actually fits your hardware.
Same tier
Models in the same parameter band as this one
Step up
More capable — bigger memory footprint
Step down
Smaller — faster, runs on weaker hardware
Frequently asked
What's the minimum VRAM to run Phi-3.5 Vision?
4 GB of VRAM is enough to run Phi-3.5 Vision at the Q4_K_M quantization (2.5 GB file). Higher-quality quantizations need more.
Can I use Phi-3.5 Vision commercially?
Yes — Phi-3.5 Vision ships under the MIT license, which permits commercial use. Always read the license text before deployment.
What's the context length of Phi-3.5 Vision?
Phi-3.5 Vision supports a context window of 131,072 tokens (128K in common shorthand).
Does Phi-3.5 Vision support images?
Yes — Phi-3.5 Vision is multimodal and accepts combined text and image inputs. Using images requires a runner that implements its image-conditioning architecture; in practice that means Hugging Face Transformers.
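For reference, a minimal single-image inference sketch with Transformers. The <|image_1|> placeholder and chat-template call follow the pattern documented on the model card; the image URL and generation settings here are stand-ins.

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3.5-vision-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Any document or chart image; this URL is a placeholder.
image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)

# Images are referenced by numbered <|image_N|> placeholders in the prompt.
messages = [{"role": "user", "content": "<|image_1|>\nSummarize this chart."}]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = processor(prompt, [image], return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=256)

# Drop the prompt tokens before decoding the model's answer.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```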
Source: huggingface.co/microsoft/Phi-3.5-vision-instruct
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.