Apple Silicon: RuntimeError: MPS backend out of memory
Cause
PyTorch's MPS backend caps how much unified memory a single process can allocate on the GPU. The cap is derived from Metal's recommended maximum working-set size, which on most Apple Silicon Macs is roughly two-thirds of total system RAM. Jobs that try to allocate past the cap (e.g. loading a large model onto the MPS device) fail with this error even though free RAM remains.
A separate cause: the MPS caching allocator pools memory and can fragment, so a 10 GB allocation can fail even while the system reports 30 GB free.
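To see how close a process is to the cap, PyTorch exposes a few MPS memory counters from Python. A quick sketch (recommended_max_memory() only exists in newer PyTorch releases, so treat it as version-dependent):

import torch

gib = 2**30
print(torch.mps.current_allocated_memory() / gib)  # GiB held by live tensors
print(torch.mps.driver_allocated_memory() / gib)   # GiB reserved by the Metal driver
# Metal's working-set ceiling, the basis for the "max allowed" figure in the error
print(torch.mps.recommended_max_memory() / gib)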
Solution
1. Remove the MPS high-watermark cap:
# Remove the high-watermark cap entirely (use with care)
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
Setting PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 removes the cap entirely; macOS will page to swap if allocations exceed physical RAM, so large allocations succeed but can slow the whole system. PyTorch's own error message warns that disabling the limit "may cause system failure", so prefer the per-process fraction sketched below when you only need a modest bump.
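If your PyTorch build is recent enough, the cap can also be adjusted from inside the process with torch.mps.set_per_process_memory_fraction(), which scales it relative to Metal's recommended working-set size. A minimal sketch (the 1.9 value is illustrative, not a recommendation):

import torch

# Cap = fraction x Metal's recommended working-set size.
# 1.0 keeps the recommended size; PyTorch accepts values in [0, 2].
torch.mps.set_per_process_memory_fraction(1.9)

x = torch.randn(4096, 4096, device="mps")  # allocations now honor the new cap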
2. Use a smaller or quantized model. PyTorch's MPS backend can't load 4-bit GGUF files at all; for quantized inference on Apple Silicon, use llama.cpp or MLX-LM directly:
brew install llama.cpp
llama-cli -m model.Q4_K_M.gguf -p "Hello"
# Or MLX
pip install mlx-lm
mlx_lm.generate --model mlx-community/Llama-3.1-8B-Instruct-4bit --prompt "Hello"
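MLX-LM can also be driven from Python instead of the CLI; a minimal sketch using its load/generate helpers with the same model as above:

from mlx_lm import load, generate

# Fetches the 4-bit weights from the Hugging Face hub on first run
model, tokenizer = load("mlx-community/Llama-3.1-8B-Instruct-4bit")
print(generate(model, tokenizer, prompt="Hello", max_tokens=128))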
3. Free unified memory by closing other apps. CPU and GPU share the same RAM on Apple Silicon, so anything memory-hungry competes with MPS. Activity Monitor's Memory tab shows overall memory pressure (GPU usage is under Window → GPU History).
4. Restart the Python process after a previous large allocation. The MPS caching allocator pools freed memory and doesn't always release it back to the OS until the process exits; torch.mps.empty_cache() can help short of a restart, as sketched below.
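Short of a full restart, dropping references and flushing the cache sometimes frees enough. A self-contained sketch (the tensor stands in for large model weights):

import gc
import torch

big = torch.randn(8192, 8192, device="mps")  # stand-in for large model weights
del big                  # drop the Python reference
gc.collect()             # reclaim the tensor object
torch.mps.empty_cache()  # hand cached blocks back to the driver pool

# If this number stays high, only a process restart reliably
# returns the memory to the OS.
print(torch.mps.driver_allocated_memory() / 2**30, "GiB still held by driver")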
Did this fix it?
If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.