Molmo 7B-D
Overview
Molmo 7B-D is a fully open vision-language model (VLM) from the Allen Institute for AI (AI2): both the weights and the PixMo training dataset are released. Its pointing capability, answering with coordinates in the image, makes it well suited to UI grounding.
Family & lineage
How this model relates to others in its lineage. Models in a family share architecture and training-data roots; parent and child edges record direct distillation or fine-tune relationships.
Strengths
- Fully-open data + weights
- UI pointing / grounding
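The pointing capability returns coordinates embedded in the model's text output. As a minimal sketch of consuming that output (the exact tag format below, with `x`/`y` as percentages of image width and height, is an assumption based on published Molmo examples, not a guaranteed API):

```python
import re

# Hypothetical example of Molmo-style pointing output; the tag format
# and 0-100 percentage coordinates are assumptions, not an official spec.
SAMPLE = '<point x="61.5" y="40.2" alt="Submit button">Submit button</point>'

POINT_RE = re.compile(r'<point x="([\d.]+)" y="([\d.]+)"[^>]*>')

def parse_points(text, width, height):
    """Convert percentage coordinates in point tags to pixel positions."""
    return [
        (float(x) / 100 * width, float(y) / 100 * height)
        for x, y in POINT_RE.findall(text)
    ]

print(parse_points(SAMPLE, width=1000, height=500))
# → [(615.0, 201.0)] for a 1000x500 screenshot
```

For UI automation, the resulting pixel coordinates can be passed straight to a click action.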
Weaknesses
- Smaller community than LLaVA / Qwen-VL
Quantization variants
Each quantization trades some model quality for a smaller file size and lower VRAM use. Q4_K_M is the most popular starting point.
| Quantization | File size | VRAM required |
|---|---|---|
| Q4_K_M | 5.2 GB | 8 GB |
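As a rough rule of thumb, a quantization's file size can be estimated from parameter count times bits per weight. This is a sketch: the ~4.85 bits-per-weight figure for Q4_K_M and the ~8B total parameter count (language backbone plus vision encoder) are approximations, not official numbers.

```python
def gguf_size_gb(params_billion, bits_per_weight):
    """Rough quantized file size: parameters x bits per weight, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Q4_K_M averages roughly 4.85 bits per weight (approximate figure);
# Molmo 7B-D has roughly 8B parameters including the vision encoder.
print(round(gguf_size_gb(8, 4.85), 2))  # ≈ 4.85
```

Real files come out a little larger than the raw estimate (metadata, embeddings kept at higher precision), which is consistent with the 5.2 GB in the table above.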
Get the model
HuggingFace
Original weights — source repository only; no pre-quantized files, so you must quantize the weights yourself.
Hardware that runs this
Cards with enough VRAM for at least one quantization of Molmo 7B-D.
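A simple way to check which quantizations a given card can hold, sketched using only the Q4_K_M row from the table above (the dictionary is illustrative, not a complete list of variants):

```python
# VRAM needed per quantization, in GB (from the table above; Q4_K_M is
# the only variant listed here, others would be added the same way).
VRAM_REQUIRED_GB = {"Q4_K_M": 8}

def quantizations_that_fit(card_vram_gb):
    """Return the quantizations whose VRAM requirement fits the card."""
    return [q for q, need in VRAM_REQUIRED_GB.items() if need <= card_vram_gb]

print(quantizations_that_fit(12))  # → ['Q4_K_M']
print(quantizations_that_fit(6))   # → []
```

Note this compares against the quantization's listed requirement only; longer prompts and larger batch sizes raise actual VRAM use.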
Frequently asked
What's the minimum VRAM to run Molmo 7B-D?
About 8 GB, enough for the Q4_K_M quantization listed above.
Can I use Molmo 7B-D commercially?
Yes. The weights are released under the Apache 2.0 license.
What's the context length of Molmo 7B-D?
Molmo 7B-D is built on a Qwen2 7B language backbone; check the model card for the exact context window.
Does Molmo 7B-D support images?
Yes. It is a vision-language model; image input, including pointing at image regions, is its core capability.
Source: huggingface.co/allenai/Molmo-7B-D-0924
Reviewed by RunLocalAI Editorial. See our editorial policy for how we research and verify model claims.
Verify that Molmo 7B-D runs on your specific hardware before spending money.