Petals
Overview
BitTorrent-style decentralized LLM inference. Splits a model into transformer-block shards distributed across volunteer hosts on the public internet — one client runs the input/output layers locally and streams activations through the swarm. ~6 tok/s on Llama-2 70B and ~4 tok/s on Falcon 180B in the public swarm. The right answer when you can't fit the model anywhere and don't have a GPU cluster, but a wrong answer for any privacy-sensitive workload.
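The pipeline-parallel shape described above can be illustrated with a toy sketch. This is not Petals' actual code (the real client speaks to remote hosts over a DHT and moves tensors, not lists); it only shows the division of labor: the client runs the input and output layers locally, while the transformer blocks are sharded across hosts and the activation streams through them in order.

```python
# Toy sketch of Petals' pipeline-parallel idea (illustrative only,
# NOT the Petals API): client keeps the input/output layers, remote
# hosts each serve a contiguous shard of transformer blocks, and the
# activation is forwarded host to host.

def make_block(scale):
    """Stand-in for one transformer block: just scales the activation."""
    return lambda activation: [x * scale for x in activation]

# The "swarm": each (hypothetical) host serves a shard of blocks.
hosts = {
    "host_a": [make_block(2), make_block(3)],  # blocks 0-1
    "host_b": [make_block(5)],                 # block 2
}

def embed(tokens):
    """Client-side input layer: token ids -> activations."""
    return [float(t) for t in tokens]

def lm_head(activation):
    """Client-side output layer: activations -> a next-token score."""
    return sum(activation)

def run_inference(tokens):
    activation = embed(tokens)            # runs locally on the client
    for host, blocks in hosts.items():    # activation streams through the swarm
        for block in blocks:
            activation = block(activation)
    return lm_head(activation)            # runs locally on the client

print(run_inference([1, 2]))  # each element scaled by 2*3*5=30, summed: 90.0
```

The privacy implication falls directly out of this shape: every intermediate activation (and thus information about the prompt) transits the volunteer hosts.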
Stack & relationships
How Petals relates to other entries in the catalog — recommended pairings, alternatives, dependencies, and edges to avoid. Each edge carries a one-line operator note from our editorial team.
Alternatives
- Alternative to vLLM
Different category, common confusion. Petals is for 'I cannot fit this model anywhere and don't have a GPU cluster'; vLLM is for 'I have a GPU cluster and need throughput.' Surface the boundary explicitly.
- Alternative to Exo
Petals shards over WAN volunteers; Exo shards over a controlled LAN cluster. Same architectural shape (pipeline parallel across machines), opposite trust models — public swarm vs personal devices.
- Competes with Exo
Both are multi-machine inference; Exo runs over a controlled LAN with strong privacy, Petals runs over WAN volunteers with no privacy. Pick by trust model and what hardware you have.
- Competes with Hyperspace (P2P inference network)
Both are consumer P2P inference. Petals is older and BitTorrent-flavoured; Hyperspace is newer and tries to ship a more polished consumer experience. Category still has no undisputed winner — watch the next 6-12 months.
Depends on
- Depends on llama.cpp
Not a runtime dependency, but Petals leans on the broader llama.cpp / HuggingFace ecosystem for tokenizers and model weights. Architecture support tracks what those upstreams ship.
Avoid pairing with
- Works poorly with AnythingLLM
Activations leave your machine through the swarm. Never wire Petals into a RAG workspace that contains anything sensitive — every request leaks the prompt and retrieved chunks to volunteer hosts.
Pros
- Runs 70B-180B models with no high-end GPU — internet is the cluster
- 3-25x lower latency than offloading at comparable hardware tiers
- Public swarm available; private swarms are easy to set up
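Joining the public swarm or bootstrapping a private one is a CLI affair. A sketch under stated assumptions: the `petals.cli.run_server` entry point and the `--new_swarm` / `--initial_peers` flags are taken from the project's documentation, the model name is just an example, and the multiaddr is a placeholder — verify flag names against the Petals version you install.

```shell
# Install Petals (assumes Python 3.8+ and a working PyTorch).
pip install petals

# Contribute a GPU to the *public* swarm (model name is an example):
python -m petals.cli.run_server meta-llama/Llama-2-70b-chat-hf

# Private swarm: start a first server that bootstraps a fresh DHT...
python -m petals.cli.run_server meta-llama/Llama-2-70b-chat-hf --new_swarm

# ...then point additional servers (and clients) at the multiaddr it
# prints on startup. The address below is a placeholder:
python -m petals.cli.run_server meta-llama/Llama-2-70b-chat-hf \
    --initial_peers /ip4/10.0.0.2/tcp/31337/p2p/Qm...
```

A private swarm restores the trust model: activations still leave each machine, but only to hosts you control.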
Cons
- Activations leave your machine — never use for sensitive data
- Public-swarm throughput is variable (whatever volunteer hosts are online)
- Architecture coverage limited (Llama 3.1, Mixtral, Falcon, BLOOM)
Compatibility
| Attribute | Details |
| --- | --- |
| Operating systems | Linux, macOS |
| GPU backends | NVIDIA CUDA, Apple Metal, CPU |
| License | Open source · free (OSS, MIT) |
Frequently asked
Is Petals free?
Yes. Petals is open source under the MIT license, and the public swarm is free to use.
What operating systems does Petals support?
Linux and macOS.
Which GPUs work with Petals?
NVIDIA GPUs via CUDA and Apple Silicon via Metal; CPU-only operation is also supported.
Reviewed by RunLocalAI Editorial. See our editorial policy for how we evaluate tools.