AI Energy & Carbon Footprint Calculator

Every AI prompt consumes electricity, produces CO₂, and uses water for data centre cooling. Estimate the real environmental cost of your AI usage — per query, per day, and across models.

Quick Presets

~1 token ≈ 0.75 words. Output tokens cost roughly 4x more per token than input tokens.

For a single request, the calculator reports energy per request (Wh), CO₂ (grams), water (ml), and the equivalent number of Google searches.

Estimated cumulative footprint

The cumulative view estimates your footprint per day, per month, and per year, including annual CO₂ and annual water totals.

Model Comparison

Energy per typical query (300 input + 300 output tokens for text models; fixed per generation for image/video models), sorted from most to least efficient. Table columns: Model, Type, Energy / query, Relative, Rating.

Sources: Epoch AI, IEA 2025, arXiv:2509.20241. Figures assume H100 hardware at production batch utilisation and are estimates.

Why Does AI Use So Much Energy?

Every AI prompt you send travels to a data centre housing thousands of GPU servers. A single NVIDIA H100 GPU — the most common chip powering frontier AI today — draws up to 700 watts of power, roughly equivalent to a powerful desktop PC running at full load. When you send a query, your request is processed by many such GPUs simultaneously. Add cooling systems, power conversion losses, and networking, and the total energy draw for a single data centre can rival a small city.

Unlike streaming or browsing (which mostly reads cached data), AI generates every response from scratch. Each output token requires a full forward pass through billions of model parameters. A 200-billion parameter model producing 300 output tokens performs roughly 60 trillion multiply-add operations — all in under a second.
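The arithmetic above can be sanity-checked with the common rule of thumb that one forward pass costs roughly one multiply-add per parameter per generated token (the constant factor is an approximation, not a measured figure):

```python
# Rough rule of thumb: ~1 multiply-add (MAC) per parameter per
# generated token (~2 FLOPs per parameter per token).
params = 200e9        # 200-billion-parameter model
output_tokens = 300

macs = params * output_tokens
print(f"{macs:.0e} multiply-adds")  # 6e+13, i.e. ~60 trillion
```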

⚡ Input vs Output Cost

Output tokens cost roughly 4x more energy than input tokens. Input is processed in one parallel "prefill" step; output requires a full forward pass per token.

💧 Water Cooling

Data centres evaporate water to cool servers. A typical AI query uses 6–10 ml of water — about a teaspoon. Global AI water usage runs into hundreds of millions of litres annually.

📈 Context Scaling

Energy scales roughly linearly with token count. A 100k-token context query can use up to 40 Wh — 100x more than a short chat message.
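A minimal sketch of this scaling, calibrated so a 300-in/300-out query lands near the ~0.34 Wh figure used throughout this page. The per-token rate and the 4x output weighting are assumptions for illustration, not the calculator's actual code:

```python
def query_energy_wh(input_tokens: int, output_tokens: int,
                    wh_per_input_token: float = 2.27e-4) -> float:
    """Energy grows ~linearly with token count; output tokens weighted ~4x."""
    return (input_tokens + 4 * output_tokens) * wh_per_input_token

print(round(query_energy_wh(300, 300), 2))      # ~0.34 Wh: typical chat query
print(round(query_energy_wh(100_000, 500), 1))  # ~23 Wh: long-context query
```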

How Much Energy Does Each AI Task Use?

Not all AI tasks are equal. Here is a comparison of common AI tasks by approximate energy consumption:

Task | Energy (approx) | vs Google Search | CO₂ (grams)
Short chat (100+100 tokens, small model) | ~0.003 Wh | ~0.01x | ~0.001 g
Typical GPT-4o / Claude query (300+300) | ~0.34 Wh | ~1x | ~0.15 g
Long document analysis (10k tokens in) | ~2.5 Wh | ~8x | ~1.1 g
AI image generation (SDXL) | ~0.9 Wh | ~3x | ~0.4 g
AI image generation (DALL-E 3) | ~2.9 Wh | ~10x | ~1.3 g
AI video (5-sec, Sora-class) | ~500–3000 Wh | ~1600–10000x | ~220–1300 g

Based on 0.34 Wh/query median for frontier models (Epoch AI, 2025) and 442g CO₂/kWh global average carbon intensity (Our World in Data, 2024).
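The CO₂ and "vs Google Search" columns follow directly from those two constants; a sketch of the conversion:

```python
CO2_G_PER_KWH = 442        # global average carbon intensity (2024)
GOOGLE_SEARCH_WH = 0.3     # approximate energy per Google search

def footprint(energy_wh: float) -> tuple[float, float]:
    """Return (grams of CO2, Google-search equivalents) for a query."""
    return energy_wh / 1000 * CO2_G_PER_KWH, energy_wh / GOOGLE_SEARCH_WH

co2_g, searches = footprint(0.34)   # typical frontier text query
print(f"{co2_g:.2f} g CO2, {searches:.1f} searches")  # 0.15 g CO2, 1.1 searches
```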

How to Reduce Your AI Carbon Footprint

You can meaningfully reduce your AI energy footprint without sacrificing productivity: use smaller, more efficient models for simple tasks, trim unnecessary context (energy scales with token count), and avoid regenerating image or video outputs you do not need.

AI Energy vs Other Everyday Activities

To put AI energy into perspective, here is how a typical AI session compares to daily activities:

Activity | Energy | Equivalent to
1 AI text query (frontier model) | ~0.34 Wh | 10W LED bulb on for ~2 minutes
20 AI queries (typical daily user) | ~6.8 Wh | About a quarter of boiling one cup of water
1 hour of Netflix HD | ~100–200 Wh | ~300–600 AI text queries
1 Google search | ~0.3 Wh | Similar to 1 short AI query
Driving 1 km (petrol car) | ~600 Wh | ~1,760 AI text queries
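The equivalences in the table are simple ratios against the ~0.34 Wh per-query figure; for example:

```python
QUERY_WH = 0.34   # assumed energy per frontier text query

# How many queries match an hour of HD streaming or a kilometre of driving?
print(round(150 / QUERY_WH))   # mid-range Netflix hour (~150 Wh): 441 queries
print(round(600 / QUERY_WH))   # 1 km of petrol driving (~600 Wh): 1765 queries
```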

The Big Picture: AI vs Global Energy

Individual queries are small, but AI at scale is significant. In 2024, data centres accounted for roughly 1–2% of global electricity consumption, and AI workloads are growing faster than any other data centre category; the IEA projects that data centre energy demand could double by 2030.

Training is far more expensive than inference. GPT-4 training was estimated to consume over 50 GWh — the annual electricity use of 4,500 US homes. But training happens once; inference (your queries) happens billions of times daily. At scale, inference energy now rivals or exceeds training energy in large deployments.
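A back-of-envelope calculation, using the figures above, shows how quickly inference catches up with that one-off training cost (the 1-billion-queries-per-day rate is an illustrative assumption):

```python
# Queries needed for inference energy to match the ~50 GWh
# GPT-4 training estimate, at ~0.34 Wh per query.
TRAINING_WH = 50e9        # 50 GWh expressed in Wh
PER_QUERY_WH = 0.34

breakeven = TRAINING_WH / PER_QUERY_WH
print(f"{breakeven:.1e} queries")                        # ~1.5e+11
print(round(breakeven / 1e9), "days at 1e9 queries/day")  # ~147 days
```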

✓ How to Use This Calculator

  1. In the Per Query tab: select your AI model, enter input and output token counts, and instantly see energy, CO₂, water, and Google Search equivalents.
  2. In the Daily Usage tab: enter how many queries and image generations you do per day to see your weekly, monthly, and annual footprint.
  3. In the Model Comparison tab: browse a ranked table of popular models by energy efficiency.
  4. Use Quick Presets for instant estimates — just click any preset and the calculator fills automatically.
  5. Use the Share Result button to share your footprint with others.
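The Daily Usage maths can be sketched as follows, with per-item energies taken from the task table above (the function name and defaults are illustrative assumptions, not the calculator's actual code):

```python
def annual_footprint(queries_per_day: int, images_per_day: int = 0,
                     wh_per_query: float = 0.34, wh_per_image: float = 2.9):
    """Return (annual kWh, annual kg CO2) at 442 g CO2/kWh."""
    daily_wh = queries_per_day * wh_per_query + images_per_day * wh_per_image
    annual_kwh = daily_wh * 365 / 1000
    return annual_kwh, annual_kwh * 0.442

kwh, co2_kg = annual_footprint(20, images_per_day=2)  # 20 queries + 2 images/day
print(f"{kwh:.1f} kWh/yr, {co2_kg:.2f} kg CO2/yr")    # 4.6 kWh/yr, 2.03 kg CO2/yr
```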

Frequently Asked Questions

How much energy does one ChatGPT prompt use?

A typical GPT-4o query uses approximately 0.24–0.34 Wh — similar to a Google search and equivalent to keeping a 10W LED bulb lit for about two minutes. That sounds trivial, but at billions of daily queries it adds up significantly.

Why do output tokens cost more energy than input tokens?

Input tokens are processed in one parallel "prefill" step. Output tokens are generated one at a time, each requiring a full forward pass through the model — making output generation approximately 3–5x more energy intensive per token.

Which AI model is the most energy efficient?

Smaller open-source models like Llama 3.1 8B and Mistral 7B are the most efficient — as little as 0.003 Wh per query, nearly 100x less than frontier models. For everyday tasks, they often deliver comparable quality at a fraction of the environmental cost.

How much water does AI use per query?

Data centres use water for evaporative cooling. A single frontier AI query evaporates approximately 6–10 ml of water — about a teaspoon. GPT-4 training alone was estimated to consume over 700,000 litres of water.

Is AI image generation energy-intensive?

Yes — significantly more than text. SDXL uses ~0.9 Wh per image, DALL-E 3 uses ~2.9 Wh. That is 3–9x the energy of a typical text query. AI video generation is far more extreme — a 5-second Sora-class clip can use 500–3000 Wh.

How accurate are these estimates?

These are best estimates based on published research from Epoch AI, IEA, and arXiv benchmarks. Real values vary by deployment configuration, GPU generation, batch utilisation, and data centre energy mix — often by 2–5x. This tool is designed for awareness and comparison, not precise auditing.

References & Data Sources

All figures are estimates. Real-world values vary by model deployment, hardware, data centre location, and energy mix. This tool is for educational awareness only.