Our Methodology
How we calculate your water footprint
The estimates for how much water a single AI query uses range from 0.26 ml to 25 ml — depending on who's measuring, what they include, and what they have at stake. That range matters. We think you should understand where our number comes from before you trust it, so here's the full picture — including how we turn milliliters into dollars.
The problem with the numbers
Every AI query consumes electricity, which generates heat, which requires cooling, which evaporates water. That's the direct on-site cost — scope 1. But generating that electricity also uses water (power plant cooling towers, reservoirs), and manufacturing the chips inside the servers used water too. Those are scope 2 and scope 3. The number you get depends on which of these you count.
An AI company reporting only scope 1 will arrive at a fraction of a milliliter. An environmental group including all three scopes — plus the cost of training the model in the first place — will arrive at something 50 to 100 times larger. Both can be technically correct. Neither is lying. But it matters who's doing the framing, which is why we show you all the sources below.
Who's reporting what
Over the past two years, a handful of sources have published per-query water estimates. The pattern is predictable: AI companies report lower numbers, environmental groups report higher ones, and academic researchers land in between. That's not conspiracy — it's how incentive structures work. The same dynamic exists in pharmaceutical research, nutrition science, and climate data.
We mapped the major sources on two axes below. Horizontal: the source's relationship to the AI industry. Vertical: their estimate in milliliters per query.
The estimate landscape
awkifir's position is our applied estimate: ~7 ml base × 1.5 multiplier = ~10.5 ml.
0.26 ml/query (scope 1 only) — Gemini Apps median text prompt
0.32 ml/query (scope 1 only) — avg ChatGPT query, unverified
~14 ml/query, derived (scope 1+2) — 2.9 Wh/query × avg WUE + grid water
~5 ml/query (scope 1+2) — independent re-analysis of Li et al.
~4 ml/query, derived (scope 1+2) — 0.6 L/kWh × industry power estimates
~5.5 ml/query (scope 1+2) — benchmarked GPT-4o inference
10–25 ml/query (scope 1+2 + partial 3) — “Making AI Less Thirsty,” GPT-3 era
~15–20 ml/query, projected (full lifecycle) — 664B liters by 2030 scenario
~20+ ml/query, implied (full lifecycle) — advocacy report, worst-case framing
~10.5 ml/query (scope 1+2) — awkifir applied estimate: base ~7 ml × 1.5 precautionary multiplier
i. Google — Gemini Technical Report ↗ 0.26 ml/query
Published Aug 2025. First-party measurement of median Gemini text prompt. Includes full-stack data center overhead but only on-site water (scope 1). Does not include grid water or manufacturing.
ii. OpenAI — Sam Altman ↗ 0.32 ml/query
Published June 2025 in a personal blog post. Average ChatGPT query. Not peer-reviewed, no published methodology. Scope 1 only.
iii. EPRI — Powering Intelligence ↗ ~14 ml/query (derived)
Published May 2024. Estimated 2.9 Wh per ChatGPT query (10× a Google search). We derive the water figure by applying average US data center WUE and grid water intensity. Scope 1+2.
iv. Goedecke — Independent Analysis ↗ ~5 ml/query
Published 2024. Re-analyzed the Li et al. paper, correcting for conversation-vs-page assumptions and GPT-3-to-GPT-4 efficiency gains. Scope 1+2.
v. Jegham et al. — How Hungry is AI ↗ ~5.5 ml/query
Published May 2025. Benchmarked actual GPT-4o inference energy at 0.42 Wh median for short prompts. Water derived from energy using standard WUE factors. Scope 1+2.
vi. de Vries-Gao — Patterns ↗ ~4 ml/query (derived)
Published Dec 2025 in Patterns (Cell Press). Estimated 0.6 L/kWh water intensity across AI data centers. Scope 1+2.
vii. Li et al. — UC Riverside ↗ 10–25 ml/query
Published Oct 2023 in “Making AI Less Thirsty.” The foundational paper: 500 ml per 20–50 responses for GPT-3. Includes off-site grid water (scope 2) and partial scope 3. Widely cited but based on older, less efficient models.
viii. Greenpeace / Öko-Institut ↗ ~15–20 ml/query (projected)
Published May 2025. Synthesized 95+ studies. Projects 664B liters by 2030. Full lifecycle scope including manufacturing and embodied water.
* awkifir — Applied Estimate: ~10.5 ml/query (base ~7 ml × 1.5 multiplier)
Base: ~7 ml/query (scope 1 + scope 2). We don't include scope 3 — per-query attribution is too speculative to be useful. We apply a 1.5× precautionary multiplier to account for self-reported efficiency data. Your dashboard shows ~10.5 ml per query. We publish this because you should know where the number comes from.
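The arithmetic behind the derived entries is simple enough to show. In this sketch the water-intensity values (1.8 and 3.1 L/kWh) are illustrative assumptions chosen to reproduce the ~14 ml EPRI derivation, not figures taken from the cited reports:

```python
# Worked check of the derived figures above. The intensity values are
# illustrative ballparks, not official data from the cited sources.
ENERGY_WH = 2.9            # EPRI's per-query ChatGPT energy estimate
ONSITE_WUE = 1.8           # L/kWh, assumed on-site cooling water (scope 1)
GRID_WATER = 3.1           # L/kWh, assumed grid generation water (scope 2)

epri_ml = ENERGY_WH / 1000 * (ONSITE_WUE + GRID_WATER) * 1000   # ~14 ml/query

# awkifir's applied estimate, as stated above:
BASE_ML = 7.0              # scope 1+2 midpoint
MULTIPLIER = 1.5           # precautionary multiplier
applied_ml = BASE_ML * MULTIPLIER                               # 10.5 ml/query
```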
Why we add 1.5×
Our base estimate of ~7 ml sits in the middle of the academic range — between the ~5 ml independent analyses and the 10–25 ml from Li et al. It's a reasonable midpoint for scope 1+2 with current models. But we don't use it directly.
The efficiency data that all estimates depend on — power usage, water usage, grid water intensity — comes almost entirely from the companies running the data centers. They control what gets measured, how it's scoped, and when it's disclosed. Google's reported WUE might be accurate for their best facility on a cool day. It's probably not representative of the fleet average under peak load. The 1.5× accounts for that gap.
As independent verification improves and reporting standards emerge, we expect to adjust this downward. We'd like it to reach 1.0. That would mean the self-reported data is reliable enough to use directly.
What the extension measures
Token count. The extension estimates token count from the visible text of your queries and responses. Longer conversations = more computation = more water.
Model identification. We identify whether you're using Claude, ChatGPT, or Gemini, and which model variant when detectable. Larger models consume more energy per token. A GPT-4 query costs more water than a GPT-3.5 query.
Energy-to-water conversion. We convert energy to water using published WUE ratios for data centers operated by Anthropic, OpenAI, and Google, weighted by known facility locations. We can't know which specific data center served your query — this is the most granular calculation possible with public data.
The precautionary multiplier. The 1.5× adjustment described above, applied to the final water estimate.
We do not see or store your prompts or responses. We measure token volume and model type. Nothing else.
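Conceptually, the four steps above chain into one small function. This is an illustrative sketch, not the extension's actual code: the per-token energy factors and the blended water intensity are placeholder values, and real model detection is messier than a dictionary lookup.

```python
# Illustrative sketch of the measurement pipeline described above.
# All constants are placeholders, not awkifir's production values.

ENERGY_WH_PER_TOKEN = {       # hypothetical per-model energy factors
    "gpt-4": 0.0004,
    "gpt-3.5": 0.0001,
    "claude": 0.0003,
    "gemini": 0.0002,
}
WATER_L_PER_KWH = 4.9         # assumed blended scope 1+2 water intensity
PRECAUTIONARY_MULTIPLIER = 1.5

def estimate_water_ml(token_count: int, model: str) -> float:
    """Tokens -> energy (Wh) -> water (ml), then the 1.5x multiplier."""
    energy_wh = token_count * ENERGY_WH_PER_TOKEN[model]
    water_l = (energy_wh / 1000) * WATER_L_PER_KWH   # Wh -> kWh -> liters
    return water_l * 1000 * PRECAUTIONARY_MULTIPLIER  # liters -> ml, adjusted
```

Note that only a token count and a model name go in; no prompt or response text is needed anywhere in the calculation.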
What's changing
Model efficiency is improving fast — Google reports a 33× reduction in energy per Gemini query over 12 months. But query volume is also exploding, models are getting larger, and agentic workflows (where one prompt triggers dozens of sub-queries) are becoming standard. The per-query cost goes down. The aggregate cost goes up. Both things are true at the same time.
We update our methodology quarterly. New research gets incorporated. New efficiency data adjusts our conversion ratios. The matrix above will change — that's the point.
How we price your offset
Your offset isn't a made-up number. It's priced at the retail cost of the water your AI consumed: about $3.50 per liter, roughly what bottled water costs at a convenience store.
We use retail pricing as the anchor because it's tangible. You used a liter of water, here's what that liter costs in the real world. But we don't buy bottles. We route your offset to charity: water, where the economics are radically different.
According to charity: water's published data, every $1 donated funds approximately 1,100 liters of clean water through hand pumps, wells, and piping systems. That works out to about $0.0009 per liter — roughly 3,900 times more efficient than buying a bottle at a store.
So when you pay $3.50 to offset one liter of AI water usage, charity: water turns that into approximately 3,850 liters of clean water access for communities that don't have it. You're paying retail. They deliver at wholesale. That's the leverage.
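The leverage math above can be checked directly; both constants are the figures stated in this section.

```python
RETAIL_PRICE_PER_LITER = 3.50      # what you pay to offset one liter
CHARITY_LITERS_PER_DOLLAR = 1100   # charity: water's published figure

def liters_funded(offset_dollars: float) -> float:
    """Clean-water liters funded by an offset payment."""
    return offset_dollars * CHARITY_LITERS_PER_DOLLAR

cost_per_liter = 1 / CHARITY_LITERS_PER_DOLLAR       # ~$0.0009 per liter
leverage = RETAIL_PRICE_PER_LITER / cost_per_liter   # ~3,850x
```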
You choose your tier:
1× — Replace the water your AI used. All of it goes to charity: water.
2× — Replace your water, then fund more. Split the extra across your giving portfolio.
3× — Triple it. Maximum impact.
Offsets accumulate until they hit the billing threshold you choose — $5, $10, or $20. We cover all processing fees. 100% of your offset reaches the charities.
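A minimal sketch of how threshold billing could work, assuming offsets are tallied per query at the $3.50-per-liter retail rate and charged once the chosen threshold is crossed. The class and method names here are hypothetical, not the extension's actual implementation:

```python
class OffsetLedger:
    """Accumulates per-query offsets until the billing threshold is hit.

    Hypothetical sketch of the accumulation logic described above.
    """

    RETAIL_PRICE_PER_LITER = 3.50

    def __init__(self, threshold_dollars: float, tier: int = 1):
        self.threshold = threshold_dollars   # $5, $10, or $20
        self.tier = tier                     # 1x, 2x, or 3x
        self.balance = 0.0

    def record_query(self, water_ml: float) -> bool:
        """Add one query's offset; return True when a charge should fire."""
        self.balance += (water_ml / 1000) * self.RETAIL_PRICE_PER_LITER * self.tier
        if self.balance >= self.threshold:
            self.balance -= self.threshold   # carry any remainder forward
            return True
        return False
```

At ~10.5 ml per query, each query adds a few cents of offset, so a $5 threshold fires after a long stretch of normal use rather than on every conversation.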
Source: charity: water's published cost-per-liter methodology at charitywater.org/stories/micro-price-points
Questions, corrections, or data we should see: methodology@awkifir.com.