Tabby
Self-hosted coding agent server with team SSO, audit logs, and dashboards. Enterprise-grade.
Editorial verdict: “Best self-hosted server for teams. SSO + audit logs make it the IT-friendly pick.”
Compatibility at a glance
Which runtime + OS combos this app works with. The source of truth for "will it run on my setup?"
What it is
Tabby is a self-hosted coding-agent server you point your IDE at. Its selling point over Continue is enterprise concerns: SSO, audit logs, per-user usage dashboards, and model serving with hot-swap. It ships editor extensions for VS Code, JetBrains, Vim, and Emacs. Pick it when you need to deploy local AI to a team of 20+ and prove what was generated by whom.
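A minimal sketch of standing up the server, based on Tabby's documented Docker quick start. The model name, image tag, and exact flags may differ by Tabby version, so treat this as illustrative rather than copy-paste:

```shell
# Run the Tabby server in Docker with GPU acceleration.
# -p 8080:8080     exposes the API your editor extensions point at
# -v $HOME/.tabby  persists downloaded models and config across restarts
docker run -d --gpus all \
  -p 8080:8080 \
  -v "$HOME/.tabby:/data" \
  tabbyml/tabby serve \
  --model StarCoder-1B \
  --device cuda
```

Once the server is up, each editor extension is configured with the server endpoint (e.g. `http://localhost:8080`), so the whole team shares one deployment.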
✓ Strengths
- Real team features: SSO, audit logs, dashboards
- All-in-one server: model + completion + chat + editor extensions
- Strong language coverage on autocomplete
△ Caveats
- More moving parts than Continue for solo use
- Some advanced features (dashboards) are paid-tier
About the Coding agent category
Editor-integrated or CLI agent that edits code via your model.
Where to go from here
- Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
- The full directory: filter by category, runtime, OS, privacy posture, or VRAM.
- What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
- Did this app work for you on a specific rig? Submit the benchmark; it powers the model + hardware pages.
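For the upstream layer, Tabby can also delegate inference to an external backend instead of serving the model itself. A hedged sketch of wiring it to a local Ollama instance via `~/.tabby/config.toml`; the exact key names and the FIM prompt template vary by Tabby version and model, so verify against the current docs:

```toml
# ~/.tabby/config.toml — route completion requests to a local Ollama server
# (assumed schema; model name and template are illustrative)
[model.completion.http]
kind = "ollama/completion"
model_name = "qwen2.5-coder:7b"
api_endpoint = "http://localhost:11434"
prompt_template = "<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
```

This is the pattern that lets one Tabby deployment front whichever runtime (Ollama, vLLM, llama.cpp) your team already operates.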