Can you mitigate your AI carbon footprint?
While AI's environmental footprint is heavily influenced by data centers and infrastructure, there's one aspect we—as users—directly control: the length of our prompts and responses.
How much of a difference does it really make?
Understanding Tokens: The Building Blocks of AI Communication
Generative AI processes language in small units called "tokens." A token might be a full word, part of a word, or even a single character.
For example:
"environmental" breaks into tokens like "environ" and "mental"
"AI" is typically one token
Common words like "the" or "and" are single tokens
Punctuation marks often count as separate tokens
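For a rough sense of how text maps to tokens without running a real tokenizer, a common rule of thumb is that one token corresponds to about four characters of English text. The sketch below uses that heuristic; actual tokenizers split on learned subword units, so real counts will differ.

```python
# Rough token-count estimate using the common "1 token ≈ 4 characters"
# rule of thumb for English text. Real tokenizers use learned subword
# vocabularies, so treat this as an approximation only.

def estimate_tokens(text: str) -> int:
    """Approximate token count: ~4 characters of English per token."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("AI"))                    # short strings -> 1 token
print(estimate_tokens("environmental"))         # long words -> several tokens
```

Because this is only a character-count heuristic, it can over- or under-shoot the counts a real tokenizer would report for any particular sentence.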
Compare these approaches:
"Write a detailed explanation of sustainable agricultural practices used globally with examples from each region" (≈20 tokens submitted)
"List five sustainable farming practices and their regions" (≈9 tokens submitted)
While the first prompt might generate a response of 1,000+ tokens, the second could deliver useful information in just 100-200 tokens.
The Energy Equation
Here's the crucial point: not all tokens are created equal. The tokens you submit to the model (your question to ChatGPT) do not require the same amount of energy as the tokens the model generates (your answer). Each token the model produces on the way out (the answer to your question) uses approximately 500 times more energy than each token submitted on the way in (your prompt). This is partly because input tokens are processed together ("in parallel"), while output tokens are generally produced one at a time ("sequentially"), consuming more compute and energy.
Let's return to our two prompts above, one long and one short, and compare their impacts (note: these figures are not meant to be exact, but to show the magnitude of the difference).
Long approach:
Your prompt: "Write a detailed explanation of sustainable agricultural practices used globally with examples from each region" (≈20 tokens)
AI response: A comprehensive answer with multiple paragraphs, examples from different regions, and detailed explanations (≈1,000 tokens)
Energy impact: 20 input tokens + (1,000 output tokens × 500) = 500,020 energy units
Short approach:
Your prompt: "List five sustainable farming practices and their regions" (≈9 tokens)
AI response: A concise list with brief descriptions (≈150 tokens)
Energy impact: 9 input tokens + (150 output tokens × 500) = 75,009 energy units
Both approaches give you valuable information about sustainable farming practices, but the second uses 85% less energy.
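The arithmetic above can be sketched in a few lines. The 500× multiplier and the token counts come from the figures in this article; "energy units" are relative, not watt-hours.

```python
# Toy model of the article's energy accounting: each output token is
# taken to cost ~500x as much energy as an input token (relative units).

OUTPUT_COST_MULTIPLIER = 500

def energy_units(input_tokens: int, output_tokens: int) -> int:
    """Relative energy cost of one prompt/response exchange."""
    return input_tokens + output_tokens * OUTPUT_COST_MULTIPLIER

long_approach = energy_units(20, 1_000)   # 500,020 energy units
short_approach = energy_units(9, 150)     # 75,009 energy units

savings = 1 - short_approach / long_approach
print(f"{savings:.0%} less energy")       # prints "85% less energy"
```

The calculation makes the asymmetry concrete: the input tokens barely register next to the output term, which is why trimming the response matters far more than trimming the prompt.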
4 Simple Ways to Minimize Your AI Footprint
These energy differences may seem abstract, but they quickly add up at scale. ChatGPT consumes roughly 3 Wh per query across a billion daily queries. If everyone requested 50% shorter responses, we could save approximately 200,000 metric tons of CO2 annually—equivalent to removing 1,000 cross-country flights. And these figures represent only ChatGPT usage—millions more interact with AI through numerous other platforms and services, multiplying the potential environmental impact.
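A back-of-the-envelope version of this estimate can be reproduced from the per-query figure. The grid carbon intensity of 0.4 kg CO2 per kWh below is an assumption for illustration, not a figure from this article.

```python
# Back-of-the-envelope check of the savings estimate.
# Assumptions (not from the article): grid intensity ~0.4 kg CO2/kWh,
# and that halving response length roughly halves per-query energy.

WH_PER_QUERY = 3                  # ~3 Wh per ChatGPT query
QUERIES_PER_DAY = 1_000_000_000   # ~1 billion daily queries
GRID_KG_CO2_PER_KWH = 0.4         # assumed average grid carbon intensity

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000     # 3,000,000 kWh/day
saved_kwh_per_year = daily_kwh * 0.5 * 365            # 50% shorter responses
saved_tonnes_co2 = saved_kwh_per_year * GRID_KG_CO2_PER_KWH / 1000

print(f"{saved_tonnes_co2:,.0f} tonnes CO2/year")     # on the order of 200,000
```

Under these assumptions the result lands around 219,000 tonnes per year, the same order of magnitude as the ~200,000-tonne figure above; a different assumed grid intensity shifts the number proportionally.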
So, what changes can you make?
Measure your footprint: Track your AI carbon footprint using Scope3's framework and platform. Sign up to join the waitlist here.
Be concise and effective: Frame your queries precisely without unnecessary words. Consider conversation length rather than individual prompt size—a comprehensive prompt that resolves your question in one turn uses significantly less energy than multiple back-and-forth exchanges. Each conversation turn compounds energy usage as previous outputs become new inputs.
Set limits: Request "Answer in under 100 words" or similar constraints.
Create new chats: Start fresh when previous context isn't relevant.
These small adjustments might seem minimal individually, but multiplied across 400 million active users, they create significant impact. By becoming mindful of our AI interactions, we're building a more sustainable digital future.
Of course, while individual changes matter, our collective voice as consumers holds even greater power. As AI users, we should ask ourselves:
Are we choosing AI models appropriate for our tasks, or defaulting to the largest available?
Should platforms design interfaces that encourage efficiency and display token/energy usage metrics?
Could AI providers offer incentives for environmentally conscious usage patterns?
How might we support companies prioritizing energy-efficient AI development?
The most substantial environmental improvements will come from both consumer habits and industry practices evolving together. By demanding transparency about AI's environmental costs and supporting providers who optimize for efficiency, we create market pressure for sustainable AI.
Some platforms are already working on more efficient models and training methods, but without consumer awareness driving demand, progress may remain slow. Just as we've seen with other industries, informed consumers have the power to accelerate the adoption of greener technologies.