There’s been much talk about AI’s huge appetite for power and water, but its overall environmental impact remains largely understudied.
In a push to close that gap, Mistral AI, in collaboration with consulting firm Carbone 4 and France’s ecological transition agency (ADEME), conducted a comprehensive lifecycle analysis (LCA) of an AI model. The goal, they said, was to quantify the environmental impacts of building and using large language models (LLMs) across three metrics: greenhouse gas (GHG) emissions, water consumption, and resource depletion.
Their findings point to a new perspective on AI resource consumption and the need for a global environmental standard for AI development.
- Training and running large AI models consume massive amounts of electricity, water, and scarce raw materials, at huge environmental cost.
- By 2027, global AI demand could require up to 6.6 billion cubic meters of water.
- Companies like Google and Microsoft are pledging water replenishment and efficient cooling, but progress remains uneven.
- There are no global standards to track AI’s environmental impact, which makes it hard to compare one model’s footprint with another.
- Experts say the potential solution might require common reporting standards, powering AI with renewables, and building hardware in more sustainable ways.
What Is the Environmental Impact of AI?
To help consumers, AI developers, and policymakers fully understand and manage the environmental impacts of LLMs, Mistral AI quantified the true cost of training and running its Large 2 model over 18 months of usage.
By their calculations, Large 2 generated more than 20 kilotons of CO2 equivalent (CO2e), consumed 281,000 cubic meters of water, and caused 660 kg Sb eq of resource depletion (antimony equivalent, the standard unit for mineral resource depletion).
These numbers cover training and running the model, and they account for about 85.5% of the GHG emissions and 91% of the water consumption across the model’s full lifecycle.
On a smaller scale, Mistral notes that a 400-token response from its Le Chat AI assistant, or approximately a page’s worth of text, translates to roughly 1.14 g CO2e of emissions and 45 milliliters of water consumption.
For perspective, 1.14 g CO2e is roughly the emissions of watching 10 seconds of online video streaming in the US.
“These figures reflect the scale of computation involved in GenAI, requiring numerous GPUs, often in regions with carbon-intensive electricity and sometimes water stress,” Mistral explained in the report.
While Mistral admits it is difficult to pin down precise numbers for these metrics, since there are no standards for LLM environmental accountability, it infers that there is a “strong correlation between a model’s size and its footprint.”
To break it down, they wrote:
“A model 10 times bigger will generate impacts one order of magnitude larger than a smaller model for the same amount of generated tokens. This highlights the importance of choosing the right model for the right use case.”
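The per-response figures and the size-scaling rule above can be combined into a simple back-of-the-envelope estimator. The sketch below is illustrative only: the two constants come from Mistral's reported Le Chat figures (1.14 g CO2e and 45 mL of water per 400-token response), the linear scaling with model size is their stated heuristic, and all function and variable names are my own assumptions, not anything published by Mistral.

```python
# Rough per-response footprint estimator.
# Constants are Mistral's reported figures for a 400-token Le Chat response;
# everything else (names, the linear scaling) is an illustrative assumption.
CO2E_G_PER_400_TOKENS = 1.14    # grams of CO2e per 400-token response
WATER_ML_PER_400_TOKENS = 45.0  # milliliters of water per 400-token response

def response_footprint(tokens: int, size_multiplier: float = 1.0):
    """Estimate (g CO2e, mL water) for one response.

    `size_multiplier` applies Mistral's heuristic that a model 10x
    larger has roughly 10x the impact per generated token.
    """
    scale = (tokens / 400) * size_multiplier
    return (CO2E_G_PER_400_TOKENS * scale, WATER_ML_PER_400_TOKENS * scale)

# A 2,000-token answer (~5 pages) on the same model:
co2e, water = response_footprint(tokens=2000)
print(f"2000 tokens: {co2e:.2f} g CO2e, {water:.0f} mL water")

# The same one-page answer on a model 10x the size:
co2e_big, water_big = response_footprint(tokens=400, size_multiplier=10)
print(f"400 tokens, 10x model: {co2e_big:.1f} g CO2e, {water_big:.0f} mL water")
```

As the quote suggests, the heuristic makes model choice matter: the same page of text costs an order of magnitude more on a model ten times the size.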
We already know that training large language models is resource-intensive. AI’s environmental impact comes from the enormous amounts of fossil-fuel-based electricity, freshwater, and raw materials consumed daily during both training and inference.
A 2024 United States Data Center Energy Usage Report by Lawrence Berkeley National Laboratory showed that data centers consumed about 4.4% of the U.S. electricity supply in 2023, a figure that could triple by 2028 as demand for AI continues to rise.
To put ChatGPT’s environmental impact in perspective, annually training and operating a single advanced model such as GPT-4o can generate over 160,000 tonnes of CO2e, a greenhouse gas that drives global warming.
AI training also places immense pressure on freshwater supplies: cooling systems in high-density data centers often evaporate large volumes of water into the atmosphere.
Google, for example, reported that its data centers consumed about 5.6 billion gallons of water in 2022. For context, that is roughly the amount needed to irrigate 37 golf courses annually in the southwestern United States.
Researchers from the University of California, Riverside project that by 2027, the global AI demand could require 4.2–6.6 billion cubic meters of water withdrawal. This amount, they claim, is equivalent to several times the annual water usage of Denmark or about half of the United Kingdom’s total consumption.
Raw material extraction is another important factor in the environmental impact of AI. The GPUs and specialized chips used to train large language models depend on critical minerals such as lithium and cobalt, and these resources often come from environmentally fragile regions of the world, such as Chile, the DR Congo, Bolivia, and Argentina.
Mining these materials contributes to deforestation, water pollution, and human rights issues in supply chains, while recycling and recovery rates remain low. This persistent demand for hardware upgrades means that AI’s resource footprint will continue to rise unless more sustainable practices are put in place.
The Best Approach to Sustainable AI Development
Mistral AI researchers emphasize that tackling the environmental impact of LLMs begins with transparent, standardized reporting from the companies involved. In their view, AI companies need to publish the environmental impacts of the models they develop using standardized and internationally recognized frameworks.
Mistral noted that the framework should establish a scoring system so that buyers and users can identify the models with the smallest carbon footprint, or those with the lowest water consumption and material use during training and inference.
Some of the major players in the AI industry are already moving in this direction, even though data on their model training and inference are not readily available to the public.
While these initiatives are good, Mistral AI maintains that standardization of environmental impact reports is the best path to take.
The researchers said:
“By encouraging sufficiency and efficiency practices and publishing standardized environmental impact reports, we can collectively work towards aligning the AI sector with global climate goals.”
Advancement in AI technology is undeniably welcome, but the heavy environmental implications and resource depletion it carries are a major concern for everyone, including end users.
Without standardized reporting and collaborative sustainability measures for AI development, its environmental footprint will continue to grow.
As Mistral AI pointed out, the path forward lies in transparent lifecycle tracking, greater reliance on renewable energy, and responsible hardware sourcing.
FAQs
How is AI affecting the environment?
AI consumes vast amounts of electricity, water, and raw materials. This contributes to greenhouse gas emissions, water scarcity, and resource depletion.
How much energy and water are required to train a large AI model?
Training a single large AI model can use millions of kilowatt-hours of electricity and millions of liters of water for cooling, depending on its size.
Why is measuring AI’s ecological footprint so challenging?
Lack of standardized reporting and opaque supply chains make it difficult to track the full impact of AI systems on the environment.
What solutions or standards can make AI more sustainable?
Adopting renewable energy, improving cooling efficiency, and enforcing global reporting standards can help minimize AI’s environmental impact.
Why is AI bad for the environment?
AI is bad for the environment because training and running large models consume massive amounts of electricity, water, and scarce raw materials. This leads to high carbon emissions, water stress, and resource depletion.