Understanding software UI from screenshots — identifying buttons, fields, widgets, layout. Foundation for browser agents and computer-use AI.
Pull the model first (~8 GB download; MiniCPM-V is strong at UI element detection):

```shell
ollama pull minicpm-v
```

```python
import ollama

# Read the screenshot as raw bytes; ollama accepts image bytes directly.
with open("screenshot.png", "rb") as f:
    img = f.read()

resp = ollama.chat(model="minicpm-v", messages=[{
    "role": "user",
    "content": "List every clickable button, text field, dropdown, "
               "and checkbox in this UI. What actions can the user take?",
    "images": [img],
}])
print(resp["message"]["content"])
```
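The model's reply is free-form text. For an agent you usually want structured elements, so a small post-processing step helps. A minimal sketch, assuming you prompted the model to answer as `- <type>: <label>` lines (the function name and format convention are hypothetical, not part of minicpm-v):

```python
import re

def parse_elements(reply: str) -> list[dict]:
    """Parse a bulleted VLM reply like '- Button: Submit' into dicts.

    Hypothetical post-processing: assumes you asked the model to answer
    as '- <type>: <label>' lines, which minicpm-v does not guarantee.
    """
    elements = []
    for line in reply.splitlines():
        m = re.match(
            r"^\s*[-*]\s*(button|text field|dropdown|checkbox)\s*:\s*(.+)$",
            line, re.IGNORECASE,
        )
        if m:
            elements.append({"type": m.group(1).lower(),
                             "label": m.group(2).strip()})
    return elements

reply = "- Button: Submit\n- Text field: Email address\n- Checkbox: Remember me"
print(parse_elements(reply))
# → [{'type': 'button', 'label': 'Submit'},
#    {'type': 'text field', 'label': 'Email address'},
#    {'type': 'checkbox', 'label': 'Remember me'}]
```

In practice you would also prompt the model to emit JSON directly; the regex fallback above is just more forgiving of chatty replies.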
```shell
pip install omni-parser
```

Microsoft OmniParser is specialized for UI element detection: it returns bounding boxes plus semantic labels for every UI element it finds.

Budget build: used RTX 3060 12 GB (~$200-250, see /hardware/rtx-3060-12gb). It runs MiniCPM-V at 5-10 seconds per screenshot, fast enough for interactive browser-agent use (analyze, act, repeat). OmniParser runs on the same GPU at 1-3 seconds per screenshot for element detection. Pair it with a Ryzen 5 5600, 16 GB DDR4, and a 512 GB NVMe drive. Total: ~$360-405. UI analysis is practical at this budget: the 8B VL models are fast enough for interactive use, and the bottleneck is the VLM's vision reasoning quality, not GPU speed.
Step up: used RTX 3090 24 GB (~$700-900, see /hardware/rtx-3090). It runs Qwen2-VL 72B at 10-20 seconds per screenshot for the most detailed UI analysis. For agent loops that need sub-3-second analysis, Qwen2-VL 7B runs at 2-4 seconds per screenshot on this GPU. For computer-use agents (analyze → act → screenshot → repeat), a 7B model on an RTX 3090 achieves 15-20 agent steps per minute. Total: ~$1,800-2,200. For production browser agents, model quality matters more than speed: a 72B model that correctly identifies 95% of UI elements beats a 7B model that gets 70%.
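The analyze → act → screenshot → repeat loop itself is simple; the work lives in the three hooks you plug into it. A minimal skeleton, with all three callables as hypothetical hooks you would wire to your own stack (e.g. mss or Playwright for capture, an ollama wrapper for analyze, pyautogui or Playwright for act):

```python
from typing import Callable

def run_agent(
    capture: Callable[[], bytes],     # take a screenshot
    analyze: Callable[[bytes], str],  # VLM call, e.g. a wrapper around ollama.chat
    act: Callable[[str], bool],       # execute the planned action; True = goal reached
    max_steps: int = 20,
) -> int:
    """Run the analyze -> act -> screenshot -> repeat loop.

    Returns the number of steps taken. The callables are placeholders:
    this sketch shows the loop structure, not any particular toolkit.
    """
    for step in range(1, max_steps + 1):
        shot = capture()          # fresh screenshot each iteration
        plan = analyze(shot)      # VLM decides the next action
        if act(plan):             # stop as soon as the goal is reached
            return step
    return max_steps              # give up after the step budget
```

At 2-4 seconds per 7B analysis call, the `analyze` hook dominates each iteration, which is where the 15-20 steps/minute figure comes from.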
The mistake: taking a low-resolution screenshot (800×600, JPEG-compressed) and expecting accurate UI element detection. Why it fails: most VLMs resize images to a fixed input grid (typically 448×448). A low-res screenshot gets upscaled and loses fine text, small icons, and subtle UI state indicators (checkbox ticked vs. unticked). The model literally can't see small elements. The fix: take screenshots at native resolution (typically 1920×1080 or higher) and save them as PNG (lossless). If the VLM supports dynamic resolution (Qwen2-VL does), it will process your image at full resolution with tiling. For UIs with tiny elements (mobile screens, dense dashboards), crop to the region of interest before analysis. Resolution is the single biggest factor in UI analysis accuracy.
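You can guard against this mistake with a pre-flight check before each VLM call. A stdlib-only sketch that rejects non-PNG data and reads the width/height straight out of the PNG IHDR chunk (the 1280×720 thresholds are illustrative defaults, not values from any model's documentation):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def check_screenshot(data: bytes, min_w: int = 1280, min_h: int = 720):
    """Pre-flight check before sending a screenshot to a VLM.

    Rejects non-PNG files (JPEG compression blurs small UI text) and
    images below a minimum resolution. Returns (ok, reason).
    """
    if not data.startswith(PNG_SIG):
        return False, "not a PNG (re-capture losslessly instead of JPEG)"
    # The IHDR chunk directly follows the 8-byte signature; width and
    # height are the first two big-endian 32-bit fields of its payload.
    w, h = struct.unpack(">II", data[16:24])
    if w < min_w or h < min_h:
        return False, f"{w}x{h} is below {min_w}x{min_h}; capture at native resolution"
    return True, f"{w}x{h} PNG ok"

# Minimal PNG header for demonstration (signature + IHDR prefix only):
header = PNG_SIG + struct.pack(">I", 13) + b"IHDR" + struct.pack(">II", 1920, 1080)
print(check_screenshot(header))  # → (True, '1920x1080 PNG ok')
```

For real captures, a library like Pillow (`Image.open(...).size`) does the same check more robustly; the manual header parse here just avoids any dependency.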
Local AI workloads have real hardware constraints that vary by task type. VRAM ceiling decides what model fits; bandwidth decides decode speed; compute decides prefill speed. Pick the GPU tier that fits your actual workload, not the spec sheet.
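A back-of-envelope VRAM estimate makes the "what fits" question concrete. A rough rule of thumb, assuming weights dominate and using a flat, illustrative 2 GB allowance for activations and KV cache (real usage varies with context length and runtime):

```python
def vram_gb(params_b: float, bytes_per_param: float, overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: weight bytes plus a flat overhead.

    params_b is the parameter count in billions; bytes_per_param is
    ~0.5 for 4-bit quantization, 2.0 for FP16. The 2 GB overhead is an
    illustrative guess, not a measured figure.
    """
    return params_b * bytes_per_param + overhead_gb

print(vram_gb(8, 0.5))  # → 6.0  (4-bit 8B model: fits a 12 GB card)
print(vram_gb(8, 2.0))  # → 18.0 (FP16 8B model: wants a 24 GB card)
```

The same arithmetic shows why a 72B model needs the 24 GB tier even at 4-bit (~38 GB ideally, so it relies on aggressive quantization or offloading), while 7-8B models fit comfortably on a 12 GB card.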
The errors most operators hit when running UI/screenshot analysis locally. Each links to a diagnose-and-fix walkthrough.
Verify that your specific hardware can handle UI/screenshot analysis before committing money.