Editorially reviewed May 2026

PyTorch can't see CUDA — diagnose the install in 60 seconds

PyTorch reporting no CUDA (`torch.cuda.is_available()` returning False despite a working GPU) is the most common Python ML setup failure. The cause is almost always one of two things: a PyTorch wheel built for the wrong CUDA version, or a CPU-only build installed by accident.

Applies to: PyTorch · Hugging Face Transformers · vLLM · any CUDA-using Python lib
By Fredoline Eruo · Last verified 2026-05-08

Diagnostic order — most likely first

#1

CPU-only PyTorch wheel installed

Diagnose

`python -c 'import torch; print(torch.__version__)'` shows e.g. `2.5.1+cpu`. The `+cpu` suffix is the smoking gun.
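The check above can be scripted. A minimal sketch, using illustrative version strings (substitute your own `torch.__version__`):

```python
# Classify a torch version string by its local version tag to spot a
# CPU-only wheel. The example strings below are illustrative.

def wheel_flavor(version: str) -> str:
    """Return 'cpu', 'cuda', or 'unknown' from the wheel's version suffix."""
    if "+cpu" in version:
        return "cpu"
    if "+cu" in version:  # e.g. '+cu124' means built against CUDA 12.4
        return "cuda"
    return "unknown"

print(wheel_flavor("2.5.1+cpu"))    # cpu  -> reinstall from the CUDA index
print(wheel_flavor("2.5.1+cu124"))  # cuda -> the wheel itself is fine
```

A version with no suffix at all (plain `2.5.1`) usually means a conda build; check `torch.version.cuda` instead.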

Fix

Reinstall with the correct index: `pip install --upgrade --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu124` (use cu121 for older drivers, cu126 for cutting-edge).

#2

PyTorch CUDA version mismatches the host driver

Diagnose

`torch.__version__` shows `2.5.1+cu124`, but the upper-right corner of `nvidia-smi` reports a maximum supported CUDA version of `12.0`. The installed driver can't run a CUDA 12.4 build.

Fix

Update the NVIDIA driver to support CUDA 12.4+ (driver 550+). OR install a PyTorch wheel matching your driver: `pip install torch --index-url https://download.pytorch.org/whl/cu118` for older drivers.
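The rule is simple: CUDA is backward compatible, so the driver's reported maximum must be at least the wheel's build version. A sketch with example version strings (read the real ones from `torch.version.cuda` and the `nvidia-smi` header):

```python
# Decide whether a PyTorch wheel's CUDA build is supported by the driver's
# reported maximum CUDA version. Version strings are example values.

def wheel_supported(wheel_cuda: str, driver_max_cuda: str) -> bool:
    """Driver max CUDA must be >= the CUDA version the wheel was built for."""
    def parse(v: str) -> tuple:  # '12.4' -> (12, 4), compares correctly
        major, minor = v.split(".")
        return (int(major), int(minor))
    return parse(driver_max_cuda) >= parse(wheel_cuda)

print(wheel_supported("12.4", "12.0"))  # False: driver too old for a cu124 wheel
print(wheel_supported("11.8", "12.0"))  # True: a cu118 wheel runs on a 12.0 driver
```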

#3

Conda env has CUDA libraries shadowing the system ones

Diagnose

`echo $LD_LIBRARY_PATH` includes a conda lib path with old CUDA. `conda list` shows `cudatoolkit` package.

Fix

Either: `conda remove cudatoolkit` if you're not using conda's CUDA. OR: ensure conda's `cudatoolkit` matches PyTorch's CUDA expectation. Mixing pip + conda CUDA is a recipe for this exact bug.
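To spot the shadowing quickly, scan `LD_LIBRARY_PATH` for conda library directories. A sketch with a made-up sample path; in practice pass `os.environ.get("LD_LIBRARY_PATH", "")`:

```python
# Flag LD_LIBRARY_PATH entries that look like conda library directories,
# which can shadow the CUDA libraries PyTorch expects. Sample path is made up.

def conda_cuda_entries(ld_library_path: str) -> list:
    return [
        p for p in ld_library_path.split(":")
        if p and ("conda" in p or "mamba" in p) and "lib" in p
    ]

sample = "/home/me/miniconda3/envs/ml/lib:/usr/local/cuda/lib64"
print(conda_cuda_entries(sample))  # ['/home/me/miniconda3/envs/ml/lib']
```

Any hit here is a candidate for removal from `LD_LIBRARY_PATH`, or a sign to align conda's `cudatoolkit` with PyTorch's build.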

#4

No NVIDIA driver installed at all

Diagnose

`nvidia-smi` returns 'command not found' or 'NVIDIA-SMI has failed.' On Linux: `lspci | grep NVIDIA` shows the card but kernel module isn't loaded.

Fix

Install drivers: Ubuntu `sudo ubuntu-drivers autoinstall`. Windows: download from nvidia.com. Reboot. Re-run `nvidia-smi` to verify before reinstalling PyTorch.

#5

Running inside a virtual env that imported a global torch

Diagnose

`pip list` inside the env shows torch, but `which python` and `which pip` point to different roots: pip installed into one environment while python imports from another.

Fix

Recreate the venv: `python -m venv .venv && source .venv/bin/activate && pip install torch ...`. Verify `python -c 'import torch; print(torch.__file__)'` points inside `.venv`.
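The final verification step can be expressed as a path-prefix check. A sketch with illustrative paths; in a real session compare `torch.__file__` against `sys.prefix`:

```python
# Check whether a module's file lives inside the active virtual env.
# Paths below are illustrative examples, not live values.
from pathlib import Path

def inside_venv(module_file: str, venv_root: str) -> bool:
    """True if module_file resolves to a path under venv_root."""
    try:
        Path(module_file).resolve().relative_to(Path(venv_root).resolve())
        return True
    except ValueError:
        return False

print(inside_venv(
    "/proj/.venv/lib/python3.11/site-packages/torch/__init__.py",
    "/proj/.venv"))                                              # True
print(inside_venv(
    "/usr/lib/python3/dist-packages/torch/__init__.py",
    "/proj/.venv"))                                              # False
```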

#6

Windows: Python installed from Microsoft Store (sandboxed, can't access GPU DLLs)

Diagnose

`where python` shows a path inside `C:\Program Files\WindowsApps\`. The Store Python runs in an AppContainer sandbox that blocks access to `C:\Windows\System32\nvcuda.dll` — the bridge DLL that PyTorch uses to talk to the NVIDIA driver. `torch.cuda.is_available()` returns False even though `nvidia-smi` works fine in PowerShell.

Fix

Uninstall the Store Python via Settings → Apps. Download Python 3.11 from python.org (not 3.12 — PyTorch ecosystem compatibility is stickiest on 3.11). During install: check 'Add Python to PATH' and 'Disable path length limit.' Verify: open new PowerShell, run `python -c 'import torch; print(torch.cuda.is_available())'`.
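The Store-Python check reduces to a path test on the output of `where python`. A sketch with example paths:

```python
# Detect the Microsoft Store Python by its install location. The paths
# below are examples of what `where python` prints.

def is_store_python(python_path: str) -> bool:
    """Store Python installs under ...\\WindowsApps\\ (AppContainer sandbox)."""
    return "\\WindowsApps\\" in python_path

print(is_store_python(
    r"C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11\python.exe"))  # True
print(is_store_python(r"C:\Python311\python.exe"))                                     # False
```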

#7

Windows: Conda + pip mixed environment linking wrong CUDA DLLs

Diagnose

`conda list` shows `cudatoolkit` at version 11.8. `pip list` shows torch 2.5.1+cu124. Conda's CUDA runtime DLLs at version 11.x are loaded before PyTorch's bundled 12.4 DLLs because conda prepends its Library\bin to the DLL search path. PyTorch sees the old CUDA 11.8 runtime and declares CUDA unavailable.

Fix

Options: (1) Pure pip env: `python -m venv .venv && .venv\Scripts\activate && pip install torch --index-url https://download.pytorch.org/whl/cu124`. No conda. (2) Conda-only: `conda install pytorch torchvision pytorch-cuda=12.4 -c pytorch -c nvidia`. Avoid mixing pip and conda for CUDA packages. Verify with `python -c 'import torch; print(torch.version.cuda)'`, which should print `12.4`.
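The mismatch in the diagnose step is a major-version comparison. A sketch with the example values from above; on your machine read them from `torch.version.cuda` and `conda list cudatoolkit`:

```python
# Compare the CUDA major version PyTorch was built against with the one
# conda's cudatoolkit provides. Values below are the example case from
# the diagnose step, not live queries.

def cuda_major(version: str) -> int:
    return int(version.split(".")[0])

torch_cuda = "12.4"         # from torch.version.cuda
conda_cudatoolkit = "11.8"  # from conda list cudatoolkit

if cuda_major(torch_cuda) != cuda_major(conda_cudatoolkit):
    print("Mismatch: conda's CUDA runtime DLLs can shadow PyTorch's bundled ones")
```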

#8

Windows: NVIDIA driver installed but Windows Update downgraded it

Diagnose

`nvidia-smi` worked yesterday. Today it shows driver version 472.12 (from 2021). Windows Update silently replaced your manually installed Game Ready Driver with an ancient WHQL version from its own catalog. This is Windows' single most infuriating habit for AI workflows.

Fix

Two defenses: (1) After reinstalling the correct driver from nvidia.com, use `wushowhide.diagcab` (Microsoft's Show/Hide Updates tool) to hide the NVIDIA driver from Windows Update. (2) In Settings → System → About → Advanced system settings → Hardware → Device Installation Settings, set to 'No.' Then reinstall the driver with the 'Clean install' checkbox checked.
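If you want an automated tripwire, compare today's reported driver version against the one you installed. A sketch using the example versions from the diagnose step:

```python
# Detect a silent driver downgrade by comparing version strings numerically.
# Values are the example case above; read the live one from nvidia-smi.

def driver_tuple(version: str) -> tuple:
    """'560.94' -> (560, 94) so comparisons are numeric, not lexical."""
    return tuple(int(p) for p in version.split("."))

installed = "560.94"  # the driver you installed from nvidia.com
reported = "472.12"   # what nvidia-smi shows after Windows Update

if driver_tuple(reported) < driver_tuple(installed):
    print("Driver was downgraded: reinstall it and hide the update")
```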

Frequently asked questions

What's the right PyTorch + CUDA combo for local AI in 2026?

Driver 550+, CUDA 12.4 toolkit, PyTorch 2.5+ with cu124 wheel. Python 3.11. This is the path with the broadest ecosystem support: vLLM, Transformers, TensorRT-LLM, Flash Attention all build cleanly against it.

Should I install CUDA Toolkit separately or just rely on PyTorch's wheel?

PyTorch's wheel ships with the CUDA runtime libraries it needs. You only need a separate CUDA Toolkit install if you're compiling something (Flash Attention, custom kernels). For pure inference, PyTorch's wheel is enough.

Is `torch.cuda.is_available()` returning True a guarantee everything works?

Mostly. It confirms PyTorch found a usable GPU. It does NOT guarantee any specific kernel works (some operations need newer CUDA than the driver supports), nor that VRAM is sufficient. Always test the actual workload.
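A smoke test that goes one step further than `is_available()` is to actually allocate a tensor on the GPU and launch a kernel. A hedged sketch that assumes torch is installed and degrades to a message otherwise:

```python
# Smoke test beyond torch.cuda.is_available(): allocate on the GPU and
# run a matmul, which forces a real kernel launch. Falls back gracefully
# when torch or CUDA is missing, so it is safe to run anywhere.

def gpu_smoke_test() -> str:
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "cuda unavailable"
    x = torch.randn(64, 64, device="cuda")
    y = x @ x                   # actual kernel launch, not just a query
    torch.cuda.synchronize()    # surface any async launch errors here
    return f"ok: {tuple(y.shape)} on {torch.cuda.get_device_name(0)}"

print(gpu_smoke_test())
```

This still isn't a full workload test (it says nothing about VRAM headroom or exotic ops), but it catches the "is_available() lies" class of failures.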

How do I permanently prevent Windows Update from breaking my NVIDIA driver?

Download the 'Show or hide updates' troubleshooter from Microsoft (search `wushowhide`). Run it after installing the correct driver, select 'Hide updates,' and check the NVIDIA driver entry. Also disable automatic driver updates in System → About → Advanced system settings → Hardware → Device Installation Settings. These two steps together prevent ~95% of Windows Update driver sabotage.

Is WSL2 or Windows-native better for PyTorch CUDA reliability?

WSL2 wins on reliability by a mile in 2026. The Linux CUDA stack inside WSL has fewer package conflicts and no DLL-search-path surprises, and there is no separate Linux driver to break: WSL passes the Windows host driver through, so you manage exactly one driver install. The trade-off: you need to learn a few WSL commands. For anything beyond ComfyUI/Ollama, WSL2 is the recommended path.

Why does my laptop's NVIDIA GPU show up in Device Manager but PyTorch can't use it?

Laptop GPUs with Optimus (Intel iGPU + NVIDIA dGPU) require the NVIDIA driver to hand frames to the Intel GPU for display. This doesn't break CUDA — `torch.cuda.is_available()` should still return True — but some laptop OEMs ship custom driver branches that strip CUDA support. If `nvidia-smi` works but `torch.cuda.is_available()` returns False, the wheel is likely CPU-only (check the +cpu suffix). If `nvidia-smi` itself fails, the driver is the issue — download the Notebook driver from nvidia.com, not your OEM's outdated version.

Related troubleshooting

When the fix is hardware

A surprising fraction of troubleshooting tickets resolve to: this card doesn't have enough VRAM for what you're asking it to do. If you're hitting OOM after every reasonable fix, or your GPU genuinely can't fit the model you need, it's upgrade time: