Twinny
Free, lightweight VS Code copilot that runs entirely on Ollama. Strong on autocomplete.
Editorial verdict: “Best minimal-surface Copilot-replacement that's been Ollama-native since day one.”
Compatibility at a glance
Which runtime + OS combos this app works against. Source of truth for "will it run on my setup?"
What it is
Twinny is a no-nonsense VS Code extension purpose-built for Ollama: autocomplete, inline chat, and symbol explanation. It has a smaller surface area than Continue, but tighter integration and lower latency for the autocomplete-only use case. A good "just give me Copilot, but local" pick.
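The autocomplete use case boils down to sending fill-in-the-middle (FIM) requests to a local Ollama server. As a rough illustration only (not Twinny's actual internals), here is a minimal sketch of building such a request body for Ollama's `/api/generate` endpoint; the model name is an example, not a Twinny default:

```python
import json

def build_fim_request(prefix: str, suffix: str,
                      model: str = "qwen2.5-coder:1.5b-base") -> str:
    """Build a JSON body for a fill-in-the-middle completion via
    Ollama's /api/generate endpoint (served at http://localhost:11434
    by default). Sketch only; Twinny's real requests may differ."""
    payload = {
        "model": model,     # example model name, not a Twinny default
        "prompt": prefix,   # code before the cursor
        "suffix": suffix,   # code after the cursor; the model fills the middle
        "stream": False,
        "options": {"num_predict": 64, "temperature": 0.2},
    }
    return json.dumps(payload)

body = build_fim_request("def add(a, b):\n    return ",
                         "\n\nprint(add(2, 3))")
```

An extension would POST this body to the local server and splice the returned text at the cursor; keeping the request this small is what makes low-latency local autocomplete feasible.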
✓ Strengths
- Tiny config — works out of the box with Ollama running locally
- Lower latency than Continue for autocomplete
- MIT-licensed, fully open
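The "works out of the box" claim assumes an Ollama server reachable on its default port (11434). A hypothetical quickstart, using real Ollama CLI commands but an example model name:

```shell
# Assumes Ollama is installed; the model name is an example, not a Twinny default.
ollama serve &                            # start the local server on port 11434
ollama pull qwen2.5-coder:1.5b            # fetch a small code model
curl -s http://localhost:11434/api/tags   # confirm the API is up before opening VS Code
```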
△ Caveats
- No JetBrains support
- Fewer features than Continue (no agentic edit mode)
About the Coding agent category
An editor-integrated or CLI agent that edits code through your model. Other picks in this category:
- Best terminal-native coding agent for local models; Qwen 2.5 Coder 32B is its sweet spot.
- Best self-hosted server for teams; SSO + audit logs make it the IT-friendly pick.
- Best IDE-integrated agent that fully respects "all local" as a first-class option.
- Best Copilot replacement that defaults to local; configurable, pairs well with Qwen 2.5 Coder.
Where to go from here
- Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
- The full directory — filter by category, runtime, OS, privacy posture, or VRAM.
- What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
- Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.