Taking a closer look at AI’s supposed energy apocalypse (2024)


Late last week, both Bloomberg and The Washington Post published stories focused on the ostensibly disastrous impact artificial intelligence is having on the power grid and on efforts to collectively reduce our use of fossil fuels. The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI's "insatiable" demand for energy as an almost apocalyptic threat to our power infrastructure. The Post piece even cites anonymous "some [people]" in reporting that "some worry whether there will be enough electricity to meet [the power demands] from any source."

Digging into the best available numbers and projections, though, it's hard to see AI's current and near-future environmental impact in such a dire light. While generative AI models and tools can and will use a significant amount of energy, we shouldn't conflate AI energy usage with the larger and largely pre-existing energy usage of "data centers" as a whole. And just like any technology, whether that AI energy use is worthwhile depends largely on your wider opinion of the value of generative AI in the first place.

Not all data centers

While the headline focus of both Bloomberg and The Washington Post's recent pieces is on artificial intelligence, the actual numbers and projections cited in both pieces overwhelmingly focus on the energy used by Internet "data centers" as a whole. Long before generative AI became the current Silicon Valley buzzword, those data centers were already growing immensely in size and energy usage, powering everything from Amazon Web Services servers to online gaming services, Zoom video calls, and cloud storage and retrieval for billions of documents and photos, to name just a few of the more common uses.

The Post story acknowledges that these "nondescript warehouses packed with racks of servers that power the modern Internet have been around for decades." But in the very next sentence, the Post asserts that, today, data center energy use "is soaring because of AI." Bloomberg asks one source directly "why data centers were suddenly sucking up so much power" and gets back a blunt answer: "It’s AI... It’s 10 to 15 times the amount of electricity."


Unfortunately for Bloomberg, that quote is followed almost immediately by a chart that heavily undercuts the AI alarmism. That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry's current mania for generative AI. If you squint at Bloomberg's graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI.

Determining precisely how much of that data center energy use is taken up specifically by generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study "The growing energy footprint of artificial intelligence," de Vries starts with estimates that Nvidia's specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia's projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could use up anywhere from 85 to 134 TWh of power in just a few years.
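The arithmetic behind that estimate is simple enough to sketch. The per-server power draws below (6.5 to 10.2 kW, roughly the range of Nvidia's DGX-class systems) are assumptions for illustration; the article itself only gives the final 85 to 134 TWh range:

```python
# Back-of-the-envelope reproduction of the de Vries 2027 AI energy estimate.
# Per-server power figures are assumed values, not from the article.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_twh(servers: int, kw_per_server: float) -> float:
    """Annual energy in TWh for a server fleet running at constant power."""
    return servers * kw_per_server * HOURS_PER_YEAR / 1e9  # kWh -> TWh

servers_2027 = 1_500_000  # Nvidia's projected AI server production

low = annual_twh(servers_2027, 6.5)    # lower-bound power draw per server
high = annual_twh(servers_2027, 10.2)  # upper-bound power draw per server
print(f"{low:.0f}-{high:.0f} TWh")  # -> 85-134 TWh
```

Running fleet size times power draw times hours in a year reproduces the study's headline range almost exactly, which suggests de Vries assumed near-constant utilization of those servers.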

To be sure, that is an immense amount of power, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater ratio in the local energy mix for some common data center locations). But measured against other common worldwide uses of electricity, it's not representative of a mind-boggling energy hog. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity that's on the same general energy scale (and that's without console or mobile gamers included).
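A quick sanity check on that "about 0.5 percent" figure, assuming worldwide electricity consumption of roughly 27,000 TWh per year (an assumed round number in the vicinity of recent IEA data, not a figure from the article):

```python
# Sanity check: the 85-134 TWh AI estimate as a share of world demand.
# WORLD_TWH is an assumed round figure for annual global electricity use.
WORLD_TWH = 27_000

for ai_twh in (85, 134):
    share = ai_twh / WORLD_TWH * 100
    print(f"{ai_twh} TWh is about {share:.2f}% of world demand")
```

Only the high end of the range reaches roughly half a percent; the low end is closer to a third of a percent, so "about 0.5 percent" describes the upper bound of the estimate.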


More to the point, de Vries' AI energy estimates are only a small fraction of the 620 to 1,050 TWh that data centers as a whole are projected to use by 2026, according to the IEA's recent report. The vast majority of all that data center power will still be going to more mundane Internet infrastructure that we all take for granted (and which is not nearly as sexy of a headline bogeyman as "AI").


