RUNLOCALAI · v38

Operator-grade instrument for local-AI hardware intelligence. Hand-written verdicts. Real benchmarks. Reproducible commands.

Runner · free + open-source

ExecuTorch

PyTorch's official mobile / edge inference runtime. Compiles PyTorch models to a mobile-optimized format for Android (NNAPI / GPU / NPU) and iOS (Metal / CoreML). The successor to the deprecated PyTorch Mobile path.

By Fredoline Eruo · Last verified May 7, 2026 · 4,500 GitHub stars


Featured in this stack

The L3 execution stacks that recommend this tool as a component, each with a one-line note explaining the role it plays.

  • Stack · L3 · Homelab tier · Role: PyTorch-native alternative (NNAPI / Vulkan delegate)
    Android on-device AI stack — Phi-3.5 Mini / Llama 3.2 3B via MLC LLM or Qualcomm AI Hub

    ExecuTorch is PyTorch-first-party. Backend-pluggable: NNAPI (Android), Vulkan (cross-vendor GPU), custom NPU delegates. Pick when your model authoring is PyTorch-native and you don't want a separate compile pipeline.

Pros

  • First-party PyTorch lineage — model authoring → mobile deploy is one toolchain
  • Backend-pluggable: NNAPI (Android), CoreML (iOS), Vulkan, custom NPU delegates
  • Active development under Meta — production-grade roadmap

Cons

  • Toolchain still maturing — kernel coverage gaps on some ops
  • LLM-specific optimizations lag behind MLC LLM and Qualcomm AI Hub on Snapdragon
  • Requires PyTorch model source — pre-quantized GGUF doesn't import

Compatibility

Operating systems: iOS, Android, Linux, macOS
GPU backends: n/a
License: Open source · free + open-source

Runtime health

Operator-grade signals on how actively ExecuTorch is being maintained, how fresh its measurements are, and what failure classes operators have flagged. Every label below is anchored to a real date or count — we never infer maintainer activity we can't show.

Release cadence

Derived from the most recent editorial signal on this row.

Active
Updated May 7, 2026

6 days since last refresh · source: lastUpdated

Benchmark freshness

How recent the editorial measurements on this runtime are.

0 editorial benchmarks

No editorial benchmarks for this runtime yet.

Community reproduction

Submissions that match an editorial measurement on similar hardware.

0 reproduced reports

No community reproductions on file yet.

Get ExecuTorch

Official site
https://pytorch.org/executorch
GitHub
https://github.com/pytorch/executorch

Frequently asked

Is ExecuTorch free?

Yes. ExecuTorch is free and open-source; there is no paid tier.

What operating systems does ExecuTorch support?

ExecuTorch supports iOS, Android, Linux, macOS.

Which GPUs work with ExecuTorch?

ExecuTorch has no desktop GPU backend (n/a). On mobile it delegates to device GPUs and NPUs via NNAPI, Vulkan, CoreML, and Metal. CPU-only inference is also possible but slow.

Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.

Related — keep moving

Compare hardware
  • RTX 3090 vs RTX 4090 →
  • Apple M4 Max vs RTX 4090 →
Buyer guides
  • Best GPU for local AI →
  • Best budget GPU →
When it doesn't work
  • llama.cpp too slow →
  • llama.cpp build failed →
  • llama.cpp Metal crash (Mac) →
  • GGUF tokenizer mismatch →
Recommended hardware
  • RTX 3090 (used) →
  • Apple M4 Max →
Alternatives
MLX-LM · ExLlamaV2 · llama.cpp · Llamafile · Ollama · IPEX-LLM · CTranslate2 · Intel OpenVINO
Before you buy

Verify ExecuTorch runs on your specific hardware before committing money.

  • Will it run on my hardware? →
  • Custom hardware comparison →
  • GPU recommender (4 questions) →