Drop-in OpenAI TTS-compatible server. Self-hosted, talks to local voice models.
Editorial verdict: "Best drop-in local TTS for OpenAI clients; a bridge solution for existing pipelines."
Which runtime + OS combos this app works against. Source of truth for "will it run on my setup?"
OpenedAI-Speech mimics OpenAI's TTS API endpoint, but routes to local voice models (Piper, XTTS, OpenVoice). Drop a config, point any OpenAI-TTS client at it, get local voice synthesis. Pairs well with assistants that already support OpenAI TTS — drop-in replacement.
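Because the server speaks OpenAI's `/v1/audio/speech` wire format, any client that can POST JSON works. A minimal sketch of what such a request looks like, using only the Python standard library — the host, port (`localhost:8000`), model, and voice names here are illustrative assumptions, not fixed values; use whatever your config maps them to:

```python
import json
import urllib.request

# Assumed local endpoint; adjust host/port to match your deployment.
BASE_URL = "http://localhost:8000/v1"

def build_speech_request(text: str, voice: str = "alloy", model: str = "tts-1"):
    """Build an OpenAI-compatible TTS request (not yet sent)."""
    payload = json.dumps({"model": model, "voice": voice, "input": text}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/audio/speech",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # A local server typically ignores the key, but OpenAI
            # client libraries insist on sending one.
            "Authorization": "Bearer sk-anything",
        },
        method="POST",
    )

req = build_speech_request("Hello from a local voice model.")
# urllib.request.urlopen(req).read() would return the audio bytes.
```

The same shape is why existing OpenAI SDK clients work unchanged: point their `base_url` at the local server and the request they emit matches this one.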
Speech tooling: transcription (speech-to-text) or synthesis (text-to-speech).
Pre-filled with this app's recommended use case + budget tier. Get the full rig + runtime + model picks.
The full directory — filter by category, runtime, OS, privacy posture, or VRAM.
What this app talks to: Ollama, vLLM, llama.cpp, MLX, LM Studio. The upstream layer.
Did this app work for you on a specific rig? Submit the benchmark — it powers the model + hardware pages.