WSL2: nvidia-smi works but PyTorch sees no CUDA / libcuda.so missing
Cause
WSL2 inherits the NVIDIA driver from the Windows host through a special mount (/usr/lib/wsl/lib). When that mount is missing, broken, or shadowed by a Linux-side libcuda installation, PyTorch can't load the driver library (libcuda.so.1) even though nvidia-smi works: nvidia-smi is a standalone binary found via PATH, while PyTorch resolves libcuda through the dynamic loader, so the two can succeed or fail independently.
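A quick way to see which libcuda the loader would pick up (these are standard Linux tools; the paths in the comments are illustrative):
ldconfig -p | grep libcuda
# A healthy WSL2 setup resolves to /usr/lib/wsl/lib/libcuda.so.1;
# a path under /usr/lib/x86_64-linux-gnu suggests a conflicting Linux-side install.
echo $LD_LIBRARY_PATH   # check for entries that could shadow the WSL mount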
A common trigger: someone ran apt install nvidia-driver-XXX inside WSL2. That is wrong in WSL; it installs Linux driver bits that conflict with the host pass-through.
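To check whether such packages made it into your distro (this assumes a Debian/Ubuntu-based distro with dpkg):
dpkg -l | grep -E '^ii.*(nvidia-driver|libnvidia)'
# Any hits mean Linux driver bits are installed; remove them in step 2 below.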
Solution
1. Confirm the WSL2 driver mount is intact:
ls -la /usr/lib/wsl/lib/libcuda*
# Should list libcuda.so and libcuda.so.1 (typically symlinks to libcuda.so.1.1)
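If the directory is empty or missing entirely, also confirm the mount itself is present (exact mount types vary across WSL versions):
mount | grep /usr/lib/wsl
# Expect entries for /usr/lib/wsl/drivers and /usr/lib/wsl/lib; if neither
# appears, the GPU pass-through never initialized (often fixed by steps 3-4).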
2. If you installed Linux NVIDIA drivers inside WSL, remove them:
sudo apt purge -y 'nvidia-*' 'libnvidia-*'
sudo apt autoremove
Then restart WSL so the change takes effect (this shuts down the whole WSL VM; it restarts on next launch):
# in Windows PowerShell
wsl --shutdown
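After reopening the distro, re-run the step 1 check and confirm the purge took:
ls -la /usr/lib/wsl/lib/libcuda*
dpkg -l | grep -i nvidia   # should print nothing now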
3. Update the Windows host driver to a recent release (R535 or newer for full WSL2 CUDA support), then reboot Windows.
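To confirm the version you ended up with, nvidia-smi prints the host driver version in its header (same value whether run from PowerShell or inside WSL):
# in Windows PowerShell
nvidia-smi
# header should read "Driver Version: 535.xx" or newer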
4. Update WSL itself:
# in Windows PowerShell
wsl --update
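To verify what you're on (wsl --version exists on the Store-distributed WSL; older inbox builds only support wsl --status):
# in Windows PowerShell
wsl --version
# older builds: wsl --status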
5. Add the WSL lib path explicitly if PyTorch still can't find it:
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
python -c "import torch; print(torch.cuda.is_available())" # True
6. Install the CUDA Toolkit (not the driver) inside WSL only if you need nvcc for compiling CUDA code:
sudo apt install cuda-toolkit-12-4
Toolkit ≠ driver; the toolkit is safe to install in WSL.
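Note that cuda-toolkit-12-4 comes from NVIDIA's apt repository, not Ubuntu's default archives. The sequence below follows NVIDIA's documented setup for the WSL-Ubuntu repo; the keyring filename and version may have changed since this was written:
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install cuda-toolkit-12-4
nvcc --version   # may require adding /usr/local/cuda/bin to PATH first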
Did this fix it?
If your case was different, email support@runlocalai.co with what you saw and we'll update the page. If it worked but took different commands on your platform, we want to know that too.