DirectML
Microsoft's DirectX 12 inference backend and the Windows-native path to AMD / Intel / Qualcomm GPU and NPU acceleration without ROCm or other vendor-specific SDKs. Accessed through ONNX Runtime as the DML execution provider.
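As a minimal sketch of what "used through ONNX Runtime" looks like in practice: the snippet below builds a provider priority list that prefers DirectML and falls back to CPU. It assumes the `onnxruntime-directml` package (which exposes `DmlExecutionProvider` on Windows); the model path and input name in the usage comment are placeholders, not part of DirectML itself.

```python
def pick_providers(available):
    """Return an execution-provider priority list: DirectML first, CPU as fallback."""
    preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# Usage on a real Windows machine (requires `pip install onnxruntime-directml`
# and an ONNX model file; "model.onnx" and "input" are placeholder names):
#
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=providers)
#   outputs = session.run(None, {"input": input_array})
```

On a non-Windows machine (or without the DML build of ONNX Runtime) the helper simply returns the CPU provider, so the same script runs everywhere, just without acceleration.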
Pros
- Vendor-agnostic on Windows — same code path runs on AMD / Intel / Qualcomm
- No CUDA / ROCm install required — DirectX 12 is pre-installed on Windows 10+
- First-class Snapdragon X Elite + Lunar Lake NPU support via DML drivers
Cons
- Windows-only — no Linux / macOS path
- Throughput trails CUDA + native ROCm by 15-30%
- LLM-specific optimizations lag behind vLLM / llama.cpp
Compatibility
| Operating systems | Windows |
| GPU backends | NVIDIA · AMD · Intel · Qualcomm |
| License | Closed source · free (Windows-bundled) |
Runtime health
Operator-grade signals on how actively DirectML is being maintained, how fresh its measurements are, and what failure classes operators have flagged. Every label below is anchored to a real date or count — we never infer maintainer activity we can't show.
Release cadence
Derived from the most recent editorial signal for this runtime.
6 days since last refresh · source: lastUpdated
Benchmark freshness
How recent the editorial measurements on this runtime are.
No editorial benchmarks for this runtime yet.
Community reproduction
Submissions that match an editorial measurement on similar hardware.
No community reproductions on file yet.
Get DirectML
Frequently asked
Is DirectML free?
Yes. DirectML is closed source but free, bundled with Windows.
What operating systems does DirectML support?
Windows only; there is no Linux or macOS path.
Which GPUs work with DirectML?
NVIDIA, AMD, Intel, and Qualcomm GPUs (plus supported NPUs) via DirectX 12 drivers.
Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.
Related
Verify that DirectML runs on your specific hardware before committing money.