TinyML / Microcontroller AI
AI on microcontrollers — Arduino, ESP32, Raspberry Pi Pico. Sub-100 KB models for sensors and embedded systems.
Setup walkthrough
- Buy an Arduino Nano 33 BLE Sense ($35) or ESP32-S3 ($15) — these have built-in accelerometers, microphones, and enough compute for TinyML.
- Install Arduino IDE or PlatformIO. Install the TensorFlow Lite Micro library via Library Manager.
- Train a model on your laptop: pip install tensorflow, then train a tiny classification model (e.g., gesture recognition from accelerometer data, keyword spotting from microphone) and export it to TensorFlow Lite format.
- Convert to a C array: xxd -i model.tflite > model.h turns the model into a C header file you include in your Arduino sketch.
- Arduino sketch: include model.h → load the TFLite Micro interpreter → feed sensor data → run inference → trigger an action (LED, motor, BLE notification).
- First working TinyML prototype in 2-4 hours (hardware setup + model training + deployment). The model must be tiny: <100 KB, <500K parameters. Typical models: gesture detection, wake-word detection, vibration anomaly detection, simple image classification (96×96 pixels).
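The "convert to C array" step above can be sketched in plain Python. This is a minimal, illustrative equivalent of xxd -i (the function name tflite_to_c_header is made up for this sketch); it assumes you already have the raw .tflite bytes:

```python
# Render raw .tflite bytes as a C header, like `xxd -i model.tflite > model.h`.
# Illustrative sketch only; `xxd -i` is the standard tool for this step.

def tflite_to_c_header(data: bytes, var_name: str = "model_tflite") -> str:
    """Emit a C array declaration plus a length variable for an Arduino sketch."""
    lines = [f"unsigned char {var_name}[] = {{"]
    for i in range(0, len(data), 12):  # 12 bytes per line, like xxd's default
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk},")
    lines.append("};")
    lines.append(f"unsigned int {var_name}_len = {len(data)};")
    return "\n".join(lines)

# Usage with dummy bytes standing in for a real model file:
header = tflite_to_c_header(b"\x1c\x00\x00\x00TFL3")
print(header.splitlines()[0])  # unsigned char model_tflite[] = {
```

The resulting header compiles the model weights directly into the firmware image, which is why the model size counts against the microcontroller's storage budget.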
The cheap setup
TinyML hardware is the cheapest AI entry point. ESP32-S3 ($15, includes WiFi/BLE + vector extensions for ML) runs keyword spotting at 10-20 inferences/second. Arduino Nano 33 BLE Sense ($35, includes 9-axis IMU + microphone + temperature) runs gesture recognition and environmental sensing. For training: any $300 laptop handles TensorFlow training for sub-500K-parameter models. Total hardware: ~$50-70 for the microcontroller + sensors. TinyML is proof that AI doesn't need a GPU — even a $3 bare microcontroller chip can run small neural networks in real time.
The serious setup
"Serious" TinyML is about deployment at scale, not hardware cost. For industrial TinyML (1000+ sensor nodes, over-the-air model updates, fleet monitoring): Raspberry Pi 5 ($80) as an edge gateway + ESP32-S3 nodes ($15 each × 100 = $1,500). The gateway runs a local LLM (Llama 3.2 3B) for orchestrating updates and aggregating sensor data. For model training: RTX 3060 12 GB (~$250, see /hardware/rtx-3060-12gb) trains TinyML models in minutes (they're so small that training is near-instant). Total fleet budget: ~$2,000-3,000 for 100 sensor nodes + gateway + training rig. TinyML at scale is a logistics challenge (deployment, OTA updates, power management), not a compute challenge.
Common beginner mistake
The mistake: training a ResNet-18 (11M parameters, 44 MB at FP32) on a laptop, then trying to deploy it to an ESP32 with roughly half a megabyte of usable on-chip memory.

Why it fails: the model is ~85× larger than the microcontroller's entire storage budget. TFLite quantization helps (INT8 reduces size by 4× → still 11 MB), but even quantized it's ~20× too large. The microcontroller has no MMU and typically no external RAM to swap to — if the model doesn't fit, it doesn't run.

The fix: design for the target from day one. On an ESP32-S3, after the RTOS and application code you have roughly 320 KB usable; your model must be <100 KB. Use a small depthwise-separable CNN (e.g., a MobileNet-style model around 50K parameters, roughly 50 KB at INT8), not ResNet. Use 8-bit quantization. Train with TensorFlow Lite Model Maker, which targets small devices. The model architecture is dictated by your MCU's memory, not your laptop's GPU. Measure twice, deploy once.
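The size arithmetic above is worth making explicit. A minimal sketch, assuming ~4 bytes per parameter at FP32 and ~1 byte at INT8 (real .tflite files add some graph metadata on top):

```python
# Back-of-envelope fit check: does a model fit the MCU's memory budget?
KB = 1024
MB = 1024 * KB

def model_size_bytes(params: int, bits_per_weight: int) -> int:
    """Approximate serialized weight size, ignoring metadata overhead."""
    return params * bits_per_weight // 8

resnet18_params = 11_000_000
fp32 = model_size_bytes(resnet18_params, 32)  # 44,000,000 bytes (~44 MB)
int8 = model_size_bytes(resnet18_params, 8)   # 11,000,000 bytes (~11 MB)

budget = 320 * KB  # usable space on the MCU after OS + app
print(int8 // budget)  # INT8 ResNet-18 is still ~33x over budget

tiny = model_size_bytes(50_000, 8)  # a 50K-parameter model at INT8
print(tiny <= 100 * KB)             # True: fits the <100 KB target
```

The same function answers the design question in reverse: a 100 KB budget at INT8 allows roughly 100K parameters, which is why TinyML architectures live in the tens-of-thousands-of-parameters range.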
Recommended setup for TinyML / microcontroller AI
Browse all tools for runtimes that fit this workload.
Reality check
Local AI workloads have real hardware constraints that vary by task type. VRAM ceiling decides what model fits; bandwidth decides decode speed; compute decides prefill speed. Pick the GPU tier that fits your actual workload, not the spec sheet.
Common mistakes
- Buying for spec-sheet VRAM without modeling KV cache + activation overhead
- Underestimating quantization quality loss below Q4
- Skipping flash-attention support (real perf gap on long context)
- Ignoring sustained-load thermals (laptops thermal-throttle within 30 min)
What breaks first
The errors most operators hit when running TinyML / microcontroller AI locally. Each links to a diagnose-and-fix walkthrough.
Before you buy
Verify your specific hardware can handle TinyML / microcontroller AI before committing money.
Edge and embedded AI lives outside the desktop GPU world, but the iGPU and eGPU buyer questions still apply for the next tier up.