Driver issues

Docker container can't see GPU — nvidia-container-toolkit missing

could not select device driver "nvidia" with capabilities: [[gpu]]
By Fredoline Eruo · Last verified May 7, 2026

Cause

docker run --gpus all fails because nvidia-container-toolkit isn't installed (or the Docker daemon wasn't restarted after install). This is the single most common Docker-GPU failure.
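To confirm this is what you're hitting (assuming an apt-based distro), check whether the package is installed and whether Docker has an nvidia runtime registered:

dpkg -l nvidia-container-toolkit
docker info | grep -i runtimes

If the package is missing, or the Runtimes line doesn't mention nvidia, follow the steps below.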

Solution

1. Install nvidia-container-toolkit (Ubuntu / Debian):

distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update
sudo apt install -y nvidia-container-toolkit
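The package puts the nvidia-ctk CLI on your PATH; a quick check before moving on:

nvidia-ctk --version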

2. Configure Docker runtime + restart daemon:

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
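The configure step writes an nvidia runtime entry into /etc/docker/daemon.json. After the restart, Docker should list it:

docker info | grep -i runtimes
cat /etc/docker/daemon.json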

3. Verify GPU visible in container:

docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu24.04 nvidia-smi
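If nvidia-smi works but you want to confirm a framework can see the GPU too, a one-liner against a CUDA-enabled PyTorch image does the job (pytorch/pytorch:latest here is an example; any CUDA-build tag works):

docker run --rm --gpus all pytorch/pytorch:latest \
  python -c "import torch; print(torch.cuda.is_available())"

Expect True. False usually means the image is a CPU-only build rather than a runtime problem.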

4. Docker Compose GPU access:

services:
  vllm:
    image: vllm/vllm-openai:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
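Bring the stack up as usual and watch the logs to confirm the service detects the GPU (vllm is just the service name from the example above):

docker compose up -d
docker compose logs -f vllm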

5. WSL2 case: the Windows-host NVIDIA driver passes through to WSL2 automatically; don't install a Linux driver inside the distro. If you run the native Docker Engine inside WSL2, install the toolkit there exactly as above. If you use Docker Desktop's WSL2 backend, GPU support is bundled with Docker Desktop and --gpus all should work without a separate toolkit install.
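A quick sanity sequence inside the WSL2 distro (native Docker Engine case, assuming the Windows NVIDIA driver is current):

nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu24.04 nvidia-smi

The first command proves driver passthrough from Windows; the second is the same container check as step 3.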

Full pattern in the Linux local AI guide.

Related errors

  • CUDA driver version is insufficient for CUDA runtime version
  • nvidia-smi: command not found
  • PyTorch CUDA error: driver version is insufficient for CUDA runtime
  • WSL2: nvidia-smi works but PyTorch sees no CUDA / libcuda.so missing
  • WSL2 GPU not detected — nvidia-smi missing or empty

Did this fix it?

If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.