Living Within the Empire of AI

Written by Jeffery Smith

Monday, March 16, 2026


Best-selling author and AI expert Karen Hao shares the concerns and opportunities that come with the growing technology.

Earlier this month an audience in Pigott Auditorium heard from the New York Times bestselling author about her multi-year reporting on the origins of artificial intelligence (AI) and the business cultures that have enabled its rapid rise. A former journalist with the Wall Street Journal and the MIT Technology Review, Hao discussed both her pointed fears and some optimism about the future of the technology.

At the core of her concerns was the premise—expressed in the title of her recent book, Empire of AI—that AI firms possess all the trappings of colonial powers of the past. She argued that the AI industry as we know it today functions as an empire, seizing resources, exploiting labor and monopolizing knowledge production—all the while cloaking itself in the rhetoric of existential necessity.


Take, for example, the proliferation of data centers, the physical manifestation of the AI platforms we routinely use at work and in our private lives. AI firms, including OpenAI but also Meta and Amazon, are “hyperscaling” at such an unprecedented rate that by 2030 they are on course to demand three to six times the gigawatt-hours currently used by the entire state of California. And this says nothing of the water, land and other infrastructure needed to build and operate these massive computing warehouses. In a strange piece of irony, Sam Altman, CEO of OpenAI, casually noted, just one week after Hao’s discussion at Seattle University, that he sees a future where AI becomes, in his words, “like electricity, like water.”

AI as it is currently being scaled also presents risks to society. One obvious example is that most of the highly trained data scientists and engineers coming out of universities are no longer devoted to research in the public’s interest. The development of large language models (LLMs), which require ever greater swaths of data trained on more and more sophisticated computing models, is pulling talent into the AI sector at rates unseen before. Instead of advancing medicine, public health, national defense or alternative energy, AI firms are, to use Hao’s words, “extracting” engineering talent to make harmful social media algorithms and chatbots, all the while building models on personal data in ways that infringe on individual privacy.

These social and environmental costs expose perhaps the most troubling feature of the AI empire—namely, its influence over our lives as citizens. Hao pointed out that the political capital garnered by AI’s largest actors has risen to a level not seen since the dawn of the railroad and oil industries. Empires of the past were built by the precursor to the modern nation-state. The problem now is that AI has persuaded—some would say captured—legislatures and regulators in a way that has replaced the state with unchecked private influence.

In her discussion, moderated by Professor Peter Rowan, Hao imparted some clues regarding how we might proceed and ways in which AI can be deployed responsibly and in alignment with the public interest.

Karen Hao on stage with moderator Peter Rowan.

To avoid the trappings of an empire, we need to treat AI as a nuanced tool to be used for specific purposes. Instead of the hyperscaled quest to build AI as a general-purpose system that replaces human judgment and decision-making across multiple areas of knowledge, she encouraged the audience to think of the responsible use of AI in more modest terms. Examples abound. What if we focused on training discrete models to help build novel, synthetic proteins to assist in drug discovery related to cancer? Or what if new models trained on historical meteorological data were combined with real-time weather information to improve hurricane forecasting? Hao argued that AI can improve our lives only when we give up the quest to massively scale the technology.

What is most interesting in this observation is that it makes technical—not just moral—sense. Citing the work of a research scientist in the field, Hao expressed optimism that discrete models using more limited data sources—with the ability to adapt to new, ever-changing inputs—are the future of truly useful AI research. While investors and Wall Street see scaling AI as essential for business, sophisticated AI researchers see scaling as an impediment to quality engineering.

Hao reminded the audience that empires fall only when people rise. The question is how we will rise to face AI’s challenges. I am somewhat optimistic that we will rise, but engineers, universities, regulators, lawmakers, and, yes, AI firms themselves presumably have a role to play in this shared future.

Professor Jeffery Smith, PhD, is the Frank Shrontz Chair in Professional Ethics and director of the Center for Business Ethics at the Albers School of Business and Economics.