Mixture of Experts (MoE)
Mixture of Experts is a neural network architecture where multiple specialized sub-networks ("experts") exist, but only a subset is activated for any given input. A gating network routes each token to the top-k experts (typically k=2).
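A minimal sketch of that top-k routing in PyTorch. The `MoELayer` class, expert count, and expert shapes are illustrative assumptions for the example, not taken from any particular model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Illustrative top-k MoE layer: each token is routed to k of num_experts."""
    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # Gating network: one score per expert for each token.
        self.gate = nn.Linear(dim, num_experts, bias=False)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim), a flattened batch of token embeddings.
        scores = self.gate(x)                        # (num_tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)         # mixing weights over the k picks
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Which tokens chose expert e, and in which of their k slots.
            token_ids, slot = (idx == e).nonzero(as_tuple=True)
            if token_ids.numel() > 0:
                w = weights[token_ids, slot].unsqueeze(-1)
                out[token_ids] += w * expert(x[token_ids])
        return out

x = torch.randn(16, 64)        # 16 tokens, hidden size 64
layer = MoELayer(dim=64)
print(layer(x).shape)          # torch.Size([16, 64])
```

Only the k selected experts run for each token, which is where the active-parameter savings described below come from.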
Practical impact for local AI: an MoE model with 30B total parameters and 3B active parameters needs the VRAM of a 30B model but runs at the speed of a 3B model. Qwen 3 30B-A3B is exactly this: quantized to 4-bit it fits on 24GB VRAM and generates tokens at 3B-class speed.
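Back-of-the-envelope numbers behind that claim, as a sketch (the byte counts ignore KV cache and runtime overhead, and the FLOPs-per-token rule of thumb is approximate):

```python
# Rough weight-memory and per-token compute for a 30B-total / 3B-active MoE.
total_params  = 30e9   # all experts must be resident in memory
active_params = 3e9    # parameters actually used for each token

for bits in (16, 8, 4):
    gb = total_params * bits / 8 / 1e9
    print(f"{bits}-bit weights: ~{gb:.0f} GB")   # 16-bit: ~60 GB, 4-bit: ~15 GB

# Per-token decode compute scales with ACTIVE params (~2 FLOPs per param),
# so generation speed tracks a 3B dense model, not a 30B one.
print(f"~{2 * active_params / 1e9:.0f} GFLOPs per generated token")
```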
Examples: Mixtral 8x7B (47B total / 13B active), Qwen 3 235B-A22B (235B / 22B), Llama 4 Scout (109B / 17B), DeepSeek V3 (671B / 37B). The tradeoff is non-uniform load: some experts receive far more tokens than others during inference, which complicates multi-GPU placement and quantization.
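A quick way to see that imbalance is to count routed tokens per expert. The gate here is a random score matrix with one artificially "hot" expert, a stand-in assumption since real gates are learned (often with an auxiliary load-balancing loss to fight exactly this skew):

```python
import torch

# Count how many tokens each expert receives under top-2 routing.
num_tokens, num_experts, k = 10_000, 8, 2
scores = torch.randn(num_tokens, num_experts)
scores[:, 0] += 1.0                    # simulate a "hot" expert
_, idx = scores.topk(k, dim=-1)        # (num_tokens, k) chosen expert ids
load = torch.bincount(idx.flatten(), minlength=num_experts)
print(load)  # expert 0 gets far more than its fair share of tokens
```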