LLM electricity calculator

What does it actually cost to run a local LLM on your hardware? Most "is local AI cheaper?" takes ignore that GPUs draw most of their power only under sustained load and that electricity prices vary 5-10× across regions. This one's explicit about both.

Pure client. No tracking. Math is open and citable.

Inputs

GPU power draw: 450 W
Hours of use per day: 4.0 h
Electricity price: $0.150/kWh
Days used per month: 22

Cost estimate

kWh / month: 39.6
$ / month: $5.94
$ / year: $71.28
ChatGPT Plus is $20/month.
Break-even at 13.5 hours/day on this hardware.
At your current settings (4.0 h/day, $0.150/kWh, 450 W), you spend $5.94/mo on GPU electricity vs $20 for ChatGPT Plus — cheaper to run locally.
$ / mo (incl. CPU/RAM/cooling +120 W): $7.52
$ / mo if always-on idle (+30 W idle): $8.32
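
Every number above comes from the same formula: energy (kW) × hours × days × $/kWh. A minimal TypeScript sketch that reproduces the figures at these settings (the constant names are mine, not from the calculator's source, and the idle model is inferred from the displayed total):

```ts
// Example inputs from above.
const gpuWatts = 450;       // GPU draw under sustained load
const hoursPerDay = 4.0;    // active use per day
const daysPerMonth = 22;    // usage days per month
const pricePerKwh = 0.15;   // electricity price, $/kWh

// Headline GPU-only figures.
const kwhPerMonth = (gpuWatts / 1000) * hoursPerDay * daysPerMonth; // 39.6 kWh
const costPerMonth = kwhPerMonth * pricePerKwh;                     // $5.94
const costPerYear = costPerMonth * 12;                              // $71.28

// Break-even vs a $20/mo subscription: solve the same formula for hours/day.
const subscription = 20;
const breakEvenHoursPerDay =
  subscription / ((gpuWatts / 1000) * daysPerMonth * pricePerKwh);  // ≈ 13.5 h

// Whole-system variant: add ~120 W for CPU/RAM/cooling.
const systemCostPerMonth =
  ((gpuWatts + 120) / 1000) * hoursPerDay * daysPerMonth * pricePerKwh; // $7.52

// Always-on idle variant: the displayed $8.32 matches adding a 30 W
// idle draw around the clock on each of the 22 usage days.
const idleKwhPerMonth = (30 / 1000) * 24 * daysPerMonth;               // 15.84 kWh
const idleCostPerMonth = costPerMonth + idleKwhPerMonth * pricePerKwh; // ≈ $8.32

console.log({ kwhPerMonth, costPerMonth, costPerYear, breakEvenHoursPerDay });
```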

Caveats

GPU power is only part of total system draw — add ~120 W for CPU, RAM, motherboard, fans, cooling. Reflected in the second stat above.

Idle power isn't counted in the headline number. Most modern GPUs idle at 10-40 W, and if the machine is on 24/7 that adds up. The third stat models always-on idle at 30 W by adding it around the clock on each usage day: 30 W × 24 h × 22 days = 15.84 kWh, about $2.38/mo at these settings, bringing the total to $8.32/mo.

Electricity prices vary 5-10× across regions. Hawaii is ~$0.40/kWh; Quebec is ~$0.07. At this example's 39.6 kWh/month, that's $15.84/mo in Hawaii versus $2.77/mo in Quebec. Use your actual bill.

This is operating cost only. Hardware amortization (a $1,500 4090 over 3 years = ~$42/mo) typically dwarfs electricity — see the full local-AI cost breakdown.
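
To see how the two lines compare, a quick sketch using the $1,500 / 3-year figures above (the electricity number is this page's headline result; everything else is illustrative):

```ts
// Amortize the GPU purchase over its useful life.
const gpuPrice = 1500;                            // $1,500 RTX 4090 (example above)
const lifetimeMonths = 3 * 12;                    // 3 years
const amortPerMonth = gpuPrice / lifetimeMonths;  // ≈ $41.67/mo

// Electricity at this page's example settings is $5.94/mo, so the
// hardware line is roughly 7× the electricity line.
const totalPerMonth = amortPerMonth + 5.94;       // ≈ $47.61/mo all-in
```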

Embed this calculator

Link to this page from articles, READMEs, or community threads — screenshot welcome, attribution appreciated.

Suggested citation: Calculator by RunLocalAI · runlocalai.co · CC-BY-4.0

Related: How much does local AI cost? · Local AI vs ChatGPT Plus · VRAM calculator