In the race to build bigger and better AI models, tech companies are hiding a dirty secret: AI’s carbon footprint. AI systems require massive amounts of energy and water to build and run, and once deployed, they can emit several metric tons of carbon a day.
Sasha Luccioni, 33, a researcher at the AI startup Hugging Face, has developed a better way for tech companies to estimate and measure the carbon footprint of AI language models. Her method accounts for climate impacts across a model's entire life cycle, including the energy, materials, and computing power needed to build, train, and run it. For example, her team found that building, training, and running Hugging Face's AI language model BLOOM generated around 50 metric tons of carbon dioxide emissions.
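To give a sense of what that kind of life-cycle accounting involves (this is a simplified sketch, not Luccioni's actual methodology, and every number below is an illustrative placeholder rather than a figure from the BLOOM study), the total can be broken into embodied emissions from manufacturing the hardware, electricity used for training, and electricity used to serve the model once deployed:

```python
# Hypothetical sketch of life-cycle carbon accounting for an AI model.
# All values are illustrative placeholders, not results from Luccioni's work.

def operational_co2_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """CO2 from electricity use: energy consumed times the grid's carbon intensity."""
    return energy_kwh * grid_intensity_kg_per_kwh

def lifecycle_co2_kg(
    embodied_kg: float,       # amortized share of manufacturing the servers/GPUs
    training_kwh: float,      # electricity consumed by training runs
    inference_kwh: float,     # electricity consumed serving the deployed model
    grid_intensity: float,    # kg CO2 per kWh of the data center's grid
) -> float:
    return (
        embodied_kg
        + operational_co2_kg(training_kwh, grid_intensity)
        + operational_co2_kg(inference_kwh, grid_intensity)
    )

# Made-up example inputs: 10 t embodied emissions, 400 MWh of training,
# 150 MWh of inference so far, on a grid emitting 0.057 kg CO2 per kWh.
total_kg = lifecycle_co2_kg(10_000, 400_000, 150_000, 0.057)
print(f"Estimated life-cycle emissions: {total_kg / 1000:.1f} metric tons CO2")
```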
Her work “represents the most thorough, honest, and knowledgeable analysis of the carbon footprint of a large ML model to date,” Emma Strubell, an assistant professor in the School of Computer Science at Carnegie Mellon University, who wrote a seminal 2019 paper on AI’s impact on the climate, told MIT Technology Review in November.
Luccioni says her approach helps people make more informed choices about AI, and that nobody else has done such an in-depth audit of a language model’s emissions. CodeCarbon, a tool she helped create to estimate the emissions of running code, has now been downloaded over 300,000 times.
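In practice, the CodeCarbon Python package can be wrapped around a piece of compute-heavy code to estimate the CO2 that run produced. A minimal usage sketch (the workload here is a stand-in for a real training loop):

```python
from codecarbon import EmissionsTracker

# Track power draw (CPU, GPU, RAM) and the local grid's carbon intensity
# for the duration of the run.
tracker = EmissionsTracker(project_name="demo-run")
tracker.start()
try:
    # Placeholder workload standing in for model training.
    _ = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for the run

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

The package also provides a `track_emissions` decorator for wrapping a whole function, and it logs results to a local CSV file by default.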
“Understanding the environmental impacts of these models is really, really important to trying to get ahead of things and making them more efficient,” she says.