Privacy-first desktop chat with a curated model catalog. Llama, Mistral, and Qwen are one click away inside the app.
Editorial verdict: “Best one-binary desktop chat. Curated catalog removes ‘which model?’ decision paralysis.”
Which runtime + OS combos this app works with. Source of truth for "will it run on my setup?"
Jan ships as a single desktop binary (no Docker, no terminal). On first run you pick a model from the curated catalog, it downloads, and you're chatting in under three minutes. It talks to its own embedded llama.cpp runtime or to external Ollama / OpenAI-compatible endpoints. A strong choice for non-technical users who want a 'just works' local chat app.
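To make "OpenAI-compatible endpoint" concrete, here is a minimal sketch of the request shape involved, pointed at a local Ollama server (Ollama serves the OpenAI-compatible API under /v1 on its default port). The model name is illustrative, not canonical; swap in whatever you have pulled locally.

```python
# Minimal sketch: one chat request against a local OpenAI-compatible endpoint.
# Assumes Ollama is running on its default port 11434 and that the model
# named below has already been pulled (the name is an assumption).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # required by the SDK, ignored locally
)

reply = client.chat.completions.create(
    model="llama3.2",  # illustrative; use any model you've pulled
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```

The same request shape works against any of the runtimes listed below; only the base_url changes.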
Web or desktop chat client that connects to your local runtime.
Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
The full directory — filter by category, runtime, OS, privacy posture, or VRAM.
What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.