OpenAI functions as a modern empire, not a company, says investigative journalist
Drawing on a 2019 investigation, Karen Hao argues AI companies operate across political, social, and economic spheres while accumulating resources at imperial scale.

Karen Hao has spent seven years covering artificial intelligence development, culminating in a conclusion that challenges conventional understanding of the technology sector. OpenAI and similar organizations should be analyzed as empires rather than businesses, according to the investigative journalist's research published in "Empire of AI."
"These companies I really think need to be thought of as new forms of empire," Hao stated during a September 2025 interview on The Room Where It Happened podcast. "They're not just operating in the business sphere. They are also operating in the political sphere."
The empire framework emerged from Hao's 2019 embedding within OpenAI's offices while she was reporting for MIT Technology Review. That three-day investigation revealed an organization functioning beyond conventional corporate parameters while deploying billions of dollars on the basis of what she characterizes as a quasi-religious ideology.
Imperial characteristics define AI development
Hao identifies specific attributes that distinguish AI companies from traditional businesses. These organizations accrue "an enormous amount of social, political, economic capital" while acquiring land, energy, water, and raw resources to build technology at scales comparable to historical empires.
"One of the driving motivators behind what OpenAI does is a kind of this quasi religious ideology that artificial general intelligence is right on the horizon and it's somehow going to bring profound benefit to humanity," Hao explained. "And this is also very imperial in nature."
The parallel to historical imperialism extends to ideological justification. According to Hao, empires of old were "driven by a religious ideology of we need to bring progress and modernity to all of humanity. And that's why we need to engage in grabbing all this land, grabbing all these raw resources, exploiting labor, building these technologies that ultimately fortify the empire and could potentially end up exacerbating global inequalities."
OpenAI's mission operates the same way, she argues. The ideological component means different individuals within the organization interpret their work through personal value systems, creating a fragmented understanding of the actual objectives and methods.
"Every person kind of has their own personal value system that they're using as a prism through which to see OpenAI's mission and how to achieve it," Hao stated.
Quasi-religious mission drives resource accumulation
The belief structure within AI companies goes beyond conventional business motivation. Hao describes employees who "think that they are creating a digital god or protecting us from a digital demon, and that this is their calling, that they were brought to this earth to usher in a new form of intelligence that will be the next stage of the human species."
This ideological foundation enables unprecedented resource consumption. The AI industry will require adding energy equivalent to 0.5-1.2 times the United Kingdom's annual consumption to global grids within five years, according to Hao's research. Most of that additional capacity will come from coal and gas plants previously scheduled for retirement.
"The scale has just completely exploded to an unfathomable degree," Hao stated. "And and sometimes I feel like the the size and scale and numbers that these tech CEOs throw out at people is purposely so large and extraordinary that the average person in the world has never encountered something of that magnitude and therefore it just becomes incomprehensible."
Meta CEO Mark Zuckerberg exemplifies this pattern. He stated the company's AI supercomputers would approximate Manhattan's size and energy consumption. "That is something that is unprecedented and that is single-handedly currently reversing an extraordinary amount of climate progress that we made in the last decade," according to Hao.
Strategic government alignment mirrors historical patterns
Hao draws explicit comparisons to the British East India Company's relationship with the British Crown. "You had a corporate empire and a nation state empire that were strategically aligned and benefiting from one another in a way that really ultimately led to a lot of exploitation and harm around the world," she explained.
The Trump administration's approach to AI companies demonstrates similar dynamics. "The US government has made very clear under the Trump administration that they want to rebuild the American empire and expand and strengthen it. And so they see these American tech companies as tools for doing so as quickly as possible," Hao stated.
Government officials are "trying to deploy OpenAI's technologies all around the world and they're trying to help OpenAI build more data centers to facilitate that," according to her analysis. Former Dutch parliamentarian Marietje Schaake noted in a Financial Times op-ed the strategic advantage of installing American hardware and software globally with potential remote control capabilities.
"I think that is part of the reason why the US government is is unwilling to do anything to check these companies because it it believes that it can maintain control over these organizations and continue to use them as hands to perpetuate like their own political will," Hao stated.
Democracy faces erosion from corporate power
Hao's greatest concern centers on democratic governance deterioration. "These companies, OpenAI, but all of the other Silicon Valley giants and tech and AI giants, they have grown so powerful that they really are beginning to completely warp and distort democracy," she stated.
The companies can "acquire land, create digital currencies, um, take people's energy and water, hike up their utility bills, distort people's educational and economic opportunities just with the wave of a hand based on what they decide to deploy in their products one day to the next based either on business or ideological grounds."
The Trump administration's alliance with Silicon Valley eliminates what was "arguably originally the only entity that was more powerful than Silicon Valley and could actually regulate these companies," according to Hao. "And now under the Trump administration, we see absolutely no interest from the US government in playing that countervailing force."
This trajectory threatens fundamental democratic principles. "If we continue to be on this trajectory where these companies get more and more powerful and more and more wealthy, where they can just buy up whatever they want and knock over whatever obstacles are in their way, we are going to lose the ability to collectively self-determine our futures, and that's when democracy dies," Hao stated.
Techno-authoritarian governance replaces democratic input
The technology industry operates fundamentally differently from democratic systems, according to Hao's analysis. "There should also be elected representatives that represent all of us. And there should be a democratic form of governance around these extremely consequential issues and how they should be resolved and how we tackle them."
Instead, the industry functions as techno-authoritarianism. "OpenAI doesn't operate that way. Meta doesn't operate that way. Google, Microsoft, Apple, they operate in a much more techno-authoritarian way, where they dictate to billions of users, or in OpenAI's case, hundreds of millions of users, exactly what these profoundly influential algorithms are going to do and integrate into their lives, and shape the course of their healthcare, their economic opportunities, their educational opportunities."
The implications extend beyond individual platforms. "We need to have a say in these things that are molding who we become as a society," Hao stated.
Nonprofit structure served strategic purposes
Hao characterizes OpenAI's nonprofit origins as potentially strategic from inception, addressing sequential bottlenecks rather than representing a genuine commitment to the mission. When OpenAI was founded in late 2015, Google dominated AI research talent. OpenAI "couldn't compete on compensation. And so they competed on a sense of mission and the nonprofit was a really convenient signal of how committed OpenAI was to that mission."
Chief Scientist Ilya Sutskever "explicitly came to OpenAI because of that mission. He took a huge pay cut to come to OpenAI," according to Hao. Once the core team was assembled, resource requirements shifted. "The bottleneck became about capital because now it had a core group of really smart people and it just needed to build."
The for-profit transition addressed this new constraint. "That's when they thought nonprofit doesn't cut it anymore. We're going to put a for-profit arm within this nonprofit and use it as a vehicle to raise billions of dollars," Hao explained.
"If you look at that, you're like, well, that kind of seems like it was the strategy all along, you know, like you ch you first design the organization to tackle one bottleneck and then once you've solved that bottleneck, you morph it to tackle the next bottleneck," she stated.
Current inequality signals future trajectory
Observable outcomes contradict promises of broadly distributed AI benefits. "The AI boom has created more billionaires at a faster rate than ever before and has left new college graduates with an inability to find economic opportunities," Hao stated.
"What I see currently is that is creating more inequality, more divisiveness and that should be the strongest signal for what would come next if we don't take our future in our own hands and change the trajectory of AI development," she explained.
Sam Altman has claimed AI productivity gains could enable $13,500 annual payments to every adult. Hao responded: "Well, what I know is what we're seeing now, which is that the AI boom has created more billionaires at a faster rate than ever before."
Collective resistance remains possible
Despite concentrated power, Hao emphasizes the potential for democratic influence. AI companies require ingredients collectively owned by society, including data from artists, writers, and social media users. "That's the data of artists and writers and people posting on social media, people posting their kids' photos on platforms like Flickr," she stated.
Copyright lawsuits represent successful resistance. "We're seeing significant pushback from some of those groups saying no, we're going to sue you, because we can't accept that you're just going to take our IP; we have copyright protections on this work. And that is them reclaiming ownership over a key ingredient that this company needs access to."
Community protests have delayed data center projects globally. "We're seeing hundreds of protests breaking out around the world with people pushing back against data center development because they're saying we don't want to host a data center in our community. It's hiking up our utility bills. It's creating all these other environmental impacts."
Educational debates about AI integration demonstrate another resistance point. "We're seeing students and teachers have huge discussions about whether or not we actually want AI in education, whether AI is actually enabling better education or it's just degrading people's critical thinking skills."
OpenAI responded to criticism that ChatGPT erodes students' analytical abilities by adding a study mode. "So with all of these, I often look at AI development: there's a supply chain just like any other product," Hao stated.
Technology development requires democratic participation
Hao's fundamental message challenges assumptions about technological inevitability. "I really want people to understand that technology development is not inevitable and that it is also not just the purview of people that work in tech companies," she stated.
"Actually, every single person, regardless of where you sit in society, can have a say in technology development because when you think about how these companies develop these AI models, they actually need because it's so resource intensive, they need access to a whole host of different ingredients that are owned collectively owned by different groups in society."
The framework positions various resistance points as sites of democratic contestation. "If we consider these as different sites of democratic contestation, and we have different movements, different groups that are pushing to actually assert what they want out of the technology, how they want the technology developed, all in concert with one another, companies have to respond," Hao explained.
She explicitly rejects relying on government intervention. "I don't think people should wait for government. The beautiful thing about democracy is that even when there is an absence of leadership at the top, there's leadership from the bottom."
Scale distinguishes current moment from precedents
The resource requirements separate AI development from previous technology sectors by orders of magnitude. Meta accumulated data from approximately 4 billion user accounts during the social media era, yet executives "were talking about, in addition to that data, what about if we bought out Simon & Schuster for all of their books so that we could train a generative AI system?"
The company ultimately acquired books through unauthorized channels. "In the end, they didn't buy Simon & Schuster. They just torrented all the books from the dark web and used it to train their AI models instead," Hao stated.
"Even even compared to social media which already we can argue was problematic in its scale. AI is way worse," she explained. The unprecedented magnitude enables operations that slip "under the radar" because average people cannot comprehend the scale involved.
Timeline
- December 2015: OpenAI founded as nonprofit to compete with Google for AI research talent using mission-driven positioning
- 2019: Karen Hao embedded within OpenAI offices for three days while reporting for MIT Technology Review, discovering organizational inconsistencies
- 2019: Research documents carbon intensity of GPT-2 model as environmental concerns emerge
- November 2022: ChatGPT launches publicly, beginning mainstream AI adoption
- February 7, 2025: Sam Altman defends AI energy consumption during Berlin panel, arguing technology enables fusion breakthroughs
- May 21-22, 2025: Google announces AI advertising integration at Marketing Live conference
- July 2025: McKinsey identifies agentic AI as most significant emerging trend for marketing organizations
- August 7, 2025: OpenAI releases GPT-5 with unified reasoning architecture
- September 3, 2025: IAB releases comprehensive AI use case map for advertising professionals
- September 12, 2025: Adverity launches AI-powered analytics addressing resource constraints
- September 24, 2025: OpenAI seeks engineer for internal advertising infrastructure development
- September 29, 2025: OpenAI launches Instant Checkout enabling direct purchases through ChatGPT
- September 2025: Karen Hao's "Empire of AI" published by Allen Lane, Penguin Books
Summary
Who: Investigative journalist Karen Hao conducted seven years of AI industry research, including 2019 embedding within OpenAI offices while at MIT Technology Review. The investigation examined CEO Sam Altman, Chief Technology Officer Greg Brockman, Chief Scientist Ilya Sutskever, and broader organizational dynamics across the AI sector.
What: Hao argues AI companies including OpenAI function as modern empires rather than conventional businesses, operating across political, social, and economic spheres while accumulating resources at imperial scales. The quasi-religious ideology driving artificial general intelligence development mirrors historical imperial justifications for resource extraction and labor exploitation. Strategic evolution from nonprofit to for-profit structure potentially served recruitment and capital acquisition objectives while maintaining public relations benefits from original mission-driven branding.
When: Initial investigation occurred in 2019 during OpenAI's nonprofit phase, before November 2022 ChatGPT launch and subsequent commercial expansion. Research spanned seven years through September 2025 book publication, documenting the organization's transformation and broader industry patterns during AI's mainstream adoption period.
Where: Investigation centered on OpenAI's San Francisco offices with analysis extending to global AI development infrastructure including international data centers, energy systems, and resource extraction operations. The empire framework examines operations spanning political relationships with the Trump administration and deployment strategies across worldwide markets.
Why: Hao initiated her research to scrutinize the organizations pushing AI frontiers and examine the alignment between their stated missions and actual practices. The empire concept matters for marketing professionals as AI reshapes advertising strategies, platform commerce capabilities, and campaign management infrastructure. Understanding imperial dynamics and ideological motivations helps marketers assess democratic implications, evaluate platform dependencies, and recognize the potential for collective resistance through data ownership, community organizing, and educational debates shaping technology development trajectories.