Is Generative AI Killing the Planet?
A kinder, clear-eyed walk-through of the facts, so you can focus your concern where it truly helps.
Generative AI has gone from parlour trick to board-room staple in under two years—and with that meteoric rise has come a torrent of headlines about “sky-high energy bills” and “rivers drained dry”. Some of those warnings are grounded; others miss key context.
Before we reach for pitchforks (or, conversely, dismiss every worry as hype), let’s pause, look at the numbers alongside our other daily digital habits, and ask: where is the environmental impact genuinely largest—and what can we actually do about it?
Welcome to another edition of the best damn newsletter in human-centric innovation.
Below, each section follows two beats:
→ What’s the concern? – the perfectly reasonable worry.
→ What’s the context? – the part that rounds the picture and helps us make better decisions.
Let’s get into it. 👇
Concern #1 – “Using ChatGPT for an hour is an energy hog”
People fear that if a single prompt needs power, chaining a hundred of them could be catastrophic. Because prompts feel optional (unlike, say, boiling a kettle), they draw disproportionate guilt.
Using ChatGPT (100 prompts) → 0.034 kWh → 100 prompts
TikTok scrolling on a phone (1 hour) → 0.007 kWh → ≈ 21 prompts
Spotify streaming on a phone (1 hour) → 0.016 kWh → ≈ 47 prompts
Storing 1 GB in the cloud (1 year) → 0.10 kWh → ≈ 294 prompts
Streaming Netflix on a 55-inch 4K TV (1 hour) → 0.15 kWh → ≈ 441 prompts
Playing PS5 online multiplayer (1 hour) → 0.20 kWh → ≈ 588 prompts
*Prompt energy: ≈ 0.34 Wh (0.00034 kWh) per GPT-4 API query of about 500 tokens, a typical prompt, so 100 prompts ≈ 0.034 kWh.
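If you want to sanity-check the prompt-equivalent column yourself, here is a minimal Python sketch that reproduces it from the table's own figures (nothing here is a fresh measurement):

```python
# Per-prompt energy from the footnote: 100 prompts = 0.034 kWh.
PER_PROMPT_KWH = 0.034 / 100  # = 0.00034 kWh per prompt

# Activity figures copied from the table above.
activities = {
    "TikTok scrolling, 1 hour": 0.007,
    "Spotify streaming, 1 hour": 0.016,
    "Storing 1 GB in the cloud, 1 year": 0.10,
    "Netflix on a 55-inch 4K TV, 1 hour": 0.15,
    "PS5 online multiplayer, 1 hour": 0.20,
}

for activity, kwh in activities.items():
    print(f"{activity}: ≈ {kwh / PER_PROMPT_KWH:.0f} prompts")
# Prints 21, 47, 294, 441 and 588 — matching the table.
```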
Context & gentle critique
A 100-prompt session drinks less power than one Netflix episode. AI grabs the spotlight because it’s shiny and measurable, whereas our nightly binge-watch barely registers on the guilt radar.
In truth, scale rules all: billions of prompts, streams or doom-scroll minutes become material only when repeated mindlessly. By setting sensible rate limits or batching prompts, firms can keep usage closer to the TikTok tier rather than the PS5 tier.
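What might a "sensible rate limit" look like? A minimal Python sketch of a token-bucket budget; `call_model` is a hypothetical stand-in for whatever client a firm actually uses, and the 30-per-minute cap is illustrative, not a recommendation:

```python
import time

class PromptBudget:
    """Token bucket that caps how many prompts slip through per minute."""

    def __init__(self, prompts_per_minute: int):
        self.capacity = prompts_per_minute
        self.tokens = float(prompts_per_minute)
        self.rate = prompts_per_minute / 60.0  # refill per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, then spend one if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

budget = PromptBudget(prompts_per_minute=30)
# if budget.allow():
#     call_model(prompt)  # hypothetical API call
```

The same idea works for batching: queue low-urgency prompts and send them together, so a hundred afterthoughts cost one round of overhead instead of a hundred.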
Concern #2 – “Training one big model rivals heavy construction”
The charge is that a single pre-training run spews industrial-scale CO₂, erasing any green benefit AI might provide. Photos of GPU racks next to power-station stacks reinforce the narrative.
Training the entire GPT-3 model (data-centre electricity) → ≈ 552 t CO₂e
Concrete gravity base for one 5 MW offshore wind turbine → ≈ 1,190 t CO₂e
Context & gentle critique
Training one AI model really is half a turbine foundation, but that’s a worst-case, legacy snapshot. Today’s distilled or “small language” models need a fraction of the energy and can even run on a mobile handset, turning pockets into tiny data-centres.
Crucially, the same algorithms behind GPT-3 are now slashing concrete emissions—even a single large deployment of low-carbon cement can offset multiple model trainings.
The genuine worry is vanity-sized models built for leaderboard points; the remedy is right-sizing and re-use, not abandoning AI.
Bright-spot list
Shrinking footprints → Mistral-NeMo-Minitron 8B trains with about one-tenth the energy of its 12B “teacher”, and that teacher was already a lightweight model.
Edge deployment → Quantised 8B models now demo on flagship phones and Pi-sized boards (see the sketch after this list).
AI-designed green concrete → MIT’s 2025 mix cuts embodied CO₂ 30–40 %.
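What does “quantised” mean in practice? A minimal Python sketch of the idea, assuming a symmetric, single-scale int8 scheme (real edge runtimes use per-channel scales and calibration, but the principle is the same):

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus one float scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights at inference time."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantise_int8(w)
print(np.abs(w - dequantise(q, s)).max())  # small rounding error
```

Storing int8 weights plus one float scale roughly quarters memory versus float32, which is a big part of what lets an 8B model fit on a phone.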
Concern #3 – “Every prompt drains a bottle of water”
Social media memes show pictures of empty reservoirs and blame generative AI for “drinking rivers dry”. The logic is that data-centre cooling secretly gulps litres every time you hit Enter.
OpenAI June 2025 average → 0.32 mL per prompt
Coca-Cola UK abstraction licence → ≈ 1.6 billion L per year in England
Context & gentle critique
At 0.32 mL each, you would need about 5 trillion prompts to match Coca-Cola’s annual abstraction allowance in England alone. (An abstraction licence is formal permission from the Environment Agency to remove more than 20,000 litres of surface- or groundwater per day; scores of other industries hold similar permits.) Coca-Cola’s water goes mainly to bottling, pipe-rinsing and limited on-site cooling.
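The back-of-envelope arithmetic, with both figures copied from the comparison above:

```python
# Prompts needed to match the licence volume.
licence_litres_per_year = 1.6e9   # Coca-Cola England abstraction allowance
litres_per_prompt = 0.32e-3       # 0.32 mL per prompt
print(f"{licence_litres_per_year / litres_per_prompt:.2e}")  # ≈ 5.00e+12 prompts
```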
Per-prompt water is teaspoons, but siting GPU farms in drought-prone regions is a valid anxiety. The fix lies in location policy and the rapid shift to closed-loop or recycled-water cooling, already under way.
[Embedded link: what data centres are doing to reduce water waste]
The Takeaway
Generative AI’s footprint does matter—but so does perspective:
Per use, a 100-prompt brainstorm barely blips: roughly a fifth of the energy of an hour of Netflix, and a rounding error next to the hidden water in fizzy drinks.
At scale, every digital habit adds up.
The scrutiny aimed at generative AI should illuminate all our electrons and litres, not just the newest ones.
Ask of every watt and every sip—AI, video, beverage alike—“Is this worth it?” Shared curiosity beats blame every time.
What Now?
Join Netropolitan Academy today—because smart progress questions every kilowatt, not just the fashionable ones.
Until next time: stay curious, stay kind to the planet, and maybe ask if that third Netflix episode is really worth the juice.