At the beginning of Star Wars: A New Hope, Darth Vader issues a stern warning to a table full of generals: “Don’t be too proud of this technological terror that you’ve constructed. The ability to destroy a planet is insignificant next to the power of the Force.”
Vader knew that there were some powers that technology could never surpass. Although the giants of technology are only beginning to unlock the power of AI, I also believe there are fundamental limits on what it can ever be expected to do. And that’s a good thing.
Real-World Limits on AI
I enjoy playing the occasional computer game. To get the most bang for my buck, I’ve built my own personal computers for the last 20 years. My current PC incorporates a graphics card designed to handle the most resource-intensive computer games. To do that, it needs a powerful 750-watt power supply. For context, if you’ve ever touched a 200-watt lightbulb while it’s on, you know how hot it can get. My computer can draw more power than three of those lightbulbs, just to run a game.
Despite all this power, my computer – indeed, any home computer – is incapable of running the latest versions of ChatGPT. Requirements vary, but even basic AI servers are at least 10 times more powerful – and 10 times more power-hungry – than my home computer. The immense computational burden of AI places several practical limitations on the technology and throws a wet blanket on any killer robot’s plans for taking over the earth.
First, the servers are currently just too large to fit into a small, mobile platform. You can’t cram all that hardware into a robot’s chassis. At minimum, it requires a server rack, which looks something like a big cabinet full of horizontally mounted individual PCs.
Second, the power requirements are enormous. By one estimate, the amount of energy needed to generate a single AI image is equivalent to brewing five to 50 pots of coffee or driving an electric car for five or six miles. Even if you could overcome the size issue, your laptop’s lithium-ion battery wouldn’t be able to power an AI program for any practical length of time.
Consequently, nuclear energy has become popular again – because the technocrats need it. Three Mile Island, the infamous site of America’s closest call with a nuclear disaster, was scheduled to decommission its remaining reactor. But last year, Microsoft signed a contract to purchase its entire energy output for the next 20 years. Meta, the parent company of Facebook, inked a similar 20-year deal with a nuclear plant in central Illinois. Google is partnering on the construction of three nuclear plants, and many other plans are in the works. Bottom line: Running an AI program requires a sophisticated and plentiful energy infrastructure.
And circling back to my lightbulb example, nearly all the power guzzled by an AI data center is ultimately dissipated as heat. Next-generation AI superchips could operate at 2,800 watts or more. Air cooling simply can’t handle that much heat, so AI requires elaborate liquid cooling systems to avoid a literal meltdown. Even if we ignore the size and energy problems, our hypothetical Terminator would melt itself down into slag before you could say, “Hasta la vista, baby.”
AI programs simply can’t run on your work computer, smartphone, self-driving car, smart appliance, or any kind of robot. For that reason, anytime you use AI, you must rely on an internet connection to a server somewhere else. Mobile devices, of course, require a wireless connection.
That wireless connection is convenient but also vulnerable to disruption. An AI-powered society could be brought to a standstill by a widespread internet outage, whether caused by a natural event such as a solar flare or by intentional action such as hacking or sabotage by a foreign adversary. Remember The Phantom Menace? When young Anakin destroys the droid control ship, all the battle droids freeze up on the planet below. Something similar would happen to AI-powered devices if a data center went offline.
Fragile Complexity
The more complex a system is, the more prone it is to breakdowns and disruption. One of the major lessons of the COVID era was how fragile our supply chain is – and, with it, our standard of living. Shut down factories and shipping, and suddenly you cannot buy something as simple as toilet paper. More pertinently, the supply of new cars was throttled for months after the COVID lockdowns due to a shortage of computer chips.
High-tech societies are much more susceptible to degrading to pre-internet levels, or even pre-industrial levels, than we would like to think. During WWII, most Europeans were reduced to living hand-to-mouth, relying on what could be grown or scrounged from their ravaged surroundings. How many people nowadays even know how to preserve vegetables without refrigeration? To hunt for food? To sew clothes? In the event of an economic depression, major war, or environmental disaster, all the complex AI in the world would rapidly evaporate, leaving us in much worse condition than our more self-reliant ancestors.
For economic and cultural reasons, it is hard to imagine the AI revolution truly taking root somewhere like Africa or Latin America. Currently, a Roomba vacuum – not exactly the most futuristic appliance – can cost $600. While some rich Westerners may soon have AI-powered housemaids or self-driving cars, it is hard to envision widespread penetration of these luxuries in, say, Mozambique.
Lastly, a certain kind of society is needed to support an advanced civilization. The European tribes had access to Roman technology, yet lived in much more primitive conditions after the fall of the Empire. Many foreign aid projects have attempted to elevate Third World countries’ standards of living through technology, only to fail, almost inevitably, due to breakdowns, corruption, and civil unrest. Where infrastructure is lacking and maintenance is not culturally valued, AI – and high technology in general – cannot take hold. Yes, people worldwide will be able to access AI through their phones, but the remote corners of the world will be safe from any Robotic Revolution.
This installment covered several physical limitations on just how much AI can intrude into our world. On Monday, come back for Part 4, where we’ll explore the difference between AI and a genuine mind.
Brandon Aldinger is a chemist with a doctoral degree who works in an industrial research laboratory. He’s had a lifelong interest in issues of science and faith, and he is passionate about training fellow Christians to think clearly about and stand firm on their beliefs within a hostile culture.