The Environmental Impact of Generative AI
An article in Nature suggests that AI models have less impact on the environment than human writers. Can this really be true?

I spent a lot of time in 2024 evaluating various AI tools and models, and running training courses in Generative AI. Part of that training covers the governance and ethics around the use of AI, a topic many people now call Responsible AI. The word 'responsible' is used in its broadest sense, taking in the environmental impact of AI as well as any ethical concerns.
In preparation for this work, I read widely on the environmental impact of AI. There is no escaping that the impact is significant. Both Google and Microsoft have reported increases in their carbon emissions. Some sources quote a five-year increase at Google of 48%, at a time when the company is trying to reduce emissions generally. Microsoft said its emissions have risen 29% since 2020, an increase mainly attributable to the purchase of vast server farms that will predominantly be used for AI applications.
But it was an article in Nature that really stopped me, because, to cut a long and complex argument short, it challenged the prevailing view that AI is bad for the environment. The authors had made a detailed comparison of the environmental impact of AI models against that of a human performing the same tasks. Their findings were eye-opening.
The authors found that when you divide the carbon cost of training an AI model by the number of queries that model services, the result is 1.84g of CO2 equivalent (CO2e) per query. This is lower than most people might expect, and it will fall further if the number of queries increases or if training methods become more efficient. They also found that an AI model takes around 3.8 seconds to generate one page of text. Clearly there are impacts beyond training, such as the manufacture and eventual recycling of computer chips, but these were found to be negligible by comparison.
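To make that arithmetic concrete, here is a minimal sketch of the amortisation. The 1.84g per-query result matches the article's figure, but the total training footprint and query count I plug in below are illustrative assumptions of my own, not numbers taken from the article.

```python
# Back-of-the-envelope sketch of the amortisation argument: divide the
# one-off training footprint by the number of queries the model serves.
# The 1.84g per-query result matches the article's figure, but the two
# inputs below are illustrative assumptions chosen only to show how the
# division works.

training_footprint_g = 552_000_000   # assumed total training emissions, in grams CO2e
queries_served = 300_000_000         # assumed number of queries the training is spread across

per_query_g = training_footprint_g / queries_served
print(f"Amortised training cost: {per_query_g:.2f} g CO2e per query")  # -> 1.84
```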
To estimate the environmental impact of a human writer, the article quotes an average of 300 words per hour for a professional author. In the most common context where AI might be used today, we would be thinking of business writing, and I would suggest that most business writing is no slower, and might even be faster, than a professional novelist's output, because the text typically has a much shorter shelf life and faces far less scrutiny. E-mails, for example, might be read once by half a dozen people and never looked at again; Mark Twain, the author whose writing rate is quoted in the article, is still being read a century after his death.
At this point the article takes an individual's total annual carbon footprint and uses it to estimate the carbon cost of one hour's writing. They found that the environmental impact of the human is significantly affected by where in the world they live: "Assuming that a person’s emissions while writing are consistent with their overall annual impact, we estimate that the carbon footprint for a US resident producing a page of text (250 words) is approximately 1400g CO2e. In contrast, a resident of India has an annual impact of 1.9 metric tons, equating to around 180g CO2e per page. In this analysis, we use the US and India as examples of countries with the highest and lowest per capita impact among large countries (over 300 M population)."
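Those two figures fall out of a simple calculation: spread a person's annual footprint evenly across every hour of the year, then multiply by the roughly 50 minutes it takes to write a 250-word page at 300 words per hour. Here is a minimal sketch; the Indian annual figure is quoted in the article, while the US annual figure of roughly 15 tonnes is my assumption, chosen because it reproduces the article's ~1400g result.

```python
# Reproducing the per-page figures in the quoted passage: spread a person's
# annual carbon footprint evenly across the 8,760 hours in a year, then
# multiply by the time needed to write one 250-word page at 300 words/hour.
# The Indian annual figure (1.9 tonnes) is quoted in the article; the US
# annual figure (~15 tonnes) is an assumption that reproduces ~1400g.

HOURS_PER_YEAR = 365 * 24            # 8,760 hours
WORDS_PER_HOUR = 300                 # professional author's rate, as quoted
WORDS_PER_PAGE = 250
HOURS_PER_PAGE = WORDS_PER_PAGE / WORDS_PER_HOUR   # ~0.83 hours, i.e. ~50 minutes

def grams_co2e_per_page(annual_footprint_tonnes: float) -> float:
    """Per-page CO2e, assuming emissions are spread evenly over the year."""
    grams_per_hour = annual_footprint_tonnes * 1_000_000 / HOURS_PER_YEAR
    return grams_per_hour * HOURS_PER_PAGE

print(f"US resident:    {grams_co2e_per_page(15.0):.0f} g CO2e per page")  # ~1400
print(f"India resident: {grams_co2e_per_page(1.9):.0f} g CO2e per page")   # ~180
```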
Even after accounting for the computer (laptop or desktop) used during that hour of writing, they find that an hour of laptop use is less efficient than the AI model. It is easy to understand why: the human uses a laptop for an hour to write a page of text, while the AI model takes just under four seconds.
The article goes on to make similar calculations for human versus AI in graphic illustration, as distinct from writing text.
It is easy to find holes in these arguments. The human, for example, is still living and breathing whatever they choose to do with their time: if they do not spend the next hour writing, they will emit the same amount of carbon doing something else, and possibly more if they end up going for a run. If the AI model does nothing for an hour, by contrast, energy savings could be made.
Humans have not become more efficient at writing over the last century, unless perhaps you count the move from quill to touch typing. The AI model, on the other hand, is becoming orders of magnitude more capable, and more efficient, every few months. It is reasonable to assume that the few grams of carbon needed to produce one page of text from an AI model will fall substantially in a very short time.
The article goes into much greater detail and is worth a read. You could even ask your favourite AI model to summarise it for you.
None of this will slow the rapid adoption of AI. We are right to try to calculate and minimise the environmental impact of human endeavours, and it is right to do the same for AI, which is known to be hungry for electricity, water for cooling, rare minerals and much else. I am resistant to the zealots who believe AI is the answer to everything, those who promote its use even for trivial tasks that a human could do unaided without difficulty or delay. There is no doubt that the quality of AI output is patchy, often frustrating, clumsy and too often wrong. At best, today, it is a tool to be used with great care and patience.
But I don't think there is any doubt that its performance is improving at such a rate that we should all be looking to incorporate it into our day-to-day lives, both at home and at work. Ignoring it, for whatever altruistic reason, will put you at a disadvantage. Inventions sometimes have very serious drawbacks, yet it is surely foolish to pretend they have not been invented at all. History offers many cautionary tales of Luddite behaviour.
AI has been with us since the 1950s, yet it feels like it has advanced more in the last five years than in the previous fifty. Even five months from now, such is the level of investment, the tools will be more capable than they are today. I am just as confident that the environmental impact of these tools will be exponentially reduced over time.
If you study these things for a while, you soon come across Moore's Law. This began as an observation that the number of transistors on a chip (and therefore computer performance) doubles roughly every two years, but it has since been applied broadly to many other situations. It is not unreasonable to apply it to the efficiency of AI models. Most likely, the choice will be between a model that is twice as powerful in two years' time, or one that is half as impactful on the environment. Cynics will be quick to suggest that the former will always win out. But I suggest that both are possible: the main vendors already offer 'deep thinking' versions of their models, which take more time and require more power but deliver measurably better outcomes, suited to more complex problems. They also offer lightweight models, better suited to smartphones and tablets than to laptops and desktops, or to simpler problems. This is a nuanced trade-off, and these are positive signs.
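Purely as an illustration of that trade-off, here is a sketch of what a Moore's-Law-style halving of per-query emissions every two years would do to the 1.84g figure from earlier. The halving rate is the optimistic assumption under discussion, not a measured trend.

```python
# Illustrative projection only: assume per-query emissions halve every two
# years (a Moore's-Law-style assumption, not a measured trend) and apply it
# to the ~1.84g CO2e per-query figure discussed earlier.

per_query_g = 1.84
for year in range(0, 11, 2):
    projected = per_query_g / (2 ** (year / 2))
    print(f"Year {year:2d}: {projected:.3f} g CO2e per query")
```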
I hope that this article is a little easier to read than the Nature one. I like to think of myself as an AI pragmatist: neither zealously pro-AI nor steadfastly Luddite, even though I have seen many false starts since I first studied this area in the late 1990s. If all of us aim for a position somewhere between those two extremes, whether we are involved in creating the AI future or simply deploying the tools, I think the future of AI can be balanced in terms of both its ethical and environmental impacts.