Ollama port 11434 conflict — find what's holding it, fix it
Ollama defaults to port 11434. When something else is on that port — often a previous Ollama process, Docker container, or another LLM server — startup fails. Here's how to find the squatter and reclaim the port.
Diagnostic order — most likely first
Previous Ollama instance still running
Linux/Mac: `lsof -i :11434` lists an `ollama` process. Windows: `netstat -ano | findstr 11434` shows the PID of `ollama.exe`.
Kill it: Linux/Mac `pkill ollama`, Windows `taskkill /PID <pid> /F`. Then restart with `ollama serve`.
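A minimal sequence for Linux/macOS, assuming `ollama` is on your PATH and you're fine killing any running instance:

```
# See which process currently owns 11434
lsof -i :11434

# Stop any stray Ollama processes, then confirm the port is free
pkill ollama
lsof -i :11434 || echo "port 11434 is free"

# Restart the server in the foreground
ollama serve
```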
Ollama installed both as system service AND user-launched
`systemctl status ollama` shows the service running, and you've also launched `ollama serve` manually. Both fight for the port.
Pick one. If you prefer manual launches: `sudo systemctl stop ollama && sudo systemctl disable ollama`, then run `ollama serve` yourself. If you prefer the service: never run `ollama serve` directly; use `sudo systemctl start ollama`.
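On a systemd-based install, a quick way to check which mode you're in and commit to one of them (a sketch, not the only layout Ollama installs with):

```
# Is the systemd unit managing the port?
systemctl is-active ollama

# Option A: let systemd own it; never run `ollama serve` by hand
sudo systemctl enable --now ollama

# Option B: prefer manual launches; stop and disable the unit first
sudo systemctl disable --now ollama
ollama serve
```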
Docker container also exposing 11434
`docker ps` shows a container with `0.0.0.0:11434->11434/tcp`. Conflict on the host.
Either stop the container (`docker stop <name>`), or remap to a different host port: `-p 11435:11434`. Ollama clients then connect to `:11435`.
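To spot the offending container quickly (the container name is a placeholder; the official `ollama/ollama` image is shown only as an example):

```
# List containers publishing host port 11434
docker ps --filter "publish=11434"

# Option A: stop it
docker stop <name>

# Option B: run it on a different host port instead (host 11435 -> container 11434)
docker run -d --name ollama-alt -p 11435:11434 ollama/ollama
```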
Another LLM server (LM Studio, llama.cpp) bound to 11434
`lsof` / `netstat` shows the process. LM Studio and some llama.cpp servers can be configured to imitate the Ollama API on the same port.
Move one of the two servers to a different port. For Ollama specifically, override the bind address: `OLLAMA_HOST=0.0.0.0:11435 ollama serve`.
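If it's easier to move Ollama than the other server, a sketch using 11435 (an arbitrary choice) and a localhost-only bind:

```
# Start Ollama on an alternate port, bound to localhost only
OLLAMA_HOST=127.0.0.1:11435 ollama serve

# In another terminal, point the CLI and API clients at the new port
OLLAMA_HOST=127.0.0.1:11435 ollama list
curl http://127.0.0.1:11435/api/tags
```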
Firewall / corporate antivirus blocking the bind
The process starts but then exits, or it stays up and the logs show the bind succeeded while clients still get connection refused.
Add Ollama to firewall exceptions (Windows Defender, corporate firewall). On macOS: System Settings → Network → Firewall → allow Ollama.
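On Windows, one hedged way to add the exception from an elevated prompt (the rule name is arbitrary; adjust the port if you've moved it):

```
netsh advfirewall firewall add rule name="Ollama 11434" dir=in action=allow protocol=TCP localport=11434
```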
Frequently asked questions
Can I run multiple Ollama instances on different ports?
Yes. `OLLAMA_HOST=0.0.0.0:11435 ollama serve` runs a second instance. Useful for testing different model libraries side-by-side. Note that by default both instances read the same model store (`~/.ollama/models`); point `OLLAMA_MODELS` at a separate directory per instance if you want truly independent libraries.
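A sketch of two side-by-side instances with independent model stores (the second port, directory, and model name are arbitrary choices):

```
# Instance 1: default port, default model store (~/.ollama/models)
ollama serve

# Instance 2: its own port and its own model directory
OLLAMA_HOST=127.0.0.1:11435 OLLAMA_MODELS=$HOME/.ollama-alt/models ollama serve

# Pull a model into the second instance's store
OLLAMA_HOST=127.0.0.1:11435 ollama pull llama3.2
```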
Why does Ollama default to 11434 specifically?
Convention from the project's first release. No technical reason. Override with `OLLAMA_HOST` env var if it conflicts with your stack.
Should I expose Ollama beyond localhost?
Only if you understand the security implications. Ollama has no authentication. Exposing on `0.0.0.0` opens your model serving endpoint to the network. For LAN access, prefer a reverse proxy (Nginx, Caddy) with auth in front.
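As a sketch of the reverse-proxy approach, assuming Nginx and `htpasswd` (from apache2-utils) on a Debian-like host, with the port, paths, and username as placeholders:

```
# Create a basic-auth credentials file
sudo htpasswd -c /etc/nginx/.ollama_htpasswd myuser

# Minimal proxy: LAN clients hit :8080 with auth, Ollama stays on localhost
sudo tee /etc/nginx/conf.d/ollama.conf > /dev/null <<'EOF'
server {
    listen 8080;
    location / {
        auth_basic "Ollama";
        auth_basic_user_file /etc/nginx/.ollama_htpasswd;
        proxy_pass http://127.0.0.1:11434;
    }
}
EOF

sudo nginx -t && sudo systemctl reload nginx
```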
Related troubleshooting
When the fix is hardware
A surprising fraction of troubleshooting tickets resolve to: this card doesn't have enough VRAM for what you're asking it to do. If you're hitting OOM after every reasonable fix, or your GPU genuinely can't fit the model you need, it's upgrade time: