
Book review of 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI' by Karen Hao

by Tom Johnson on Sep 22, 2025
categories: ai ai-book-club

In my AI Book Club, we recently read Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao. In this post, I'll briefly share some of my reactions to the book. My review focuses on Hao's treatment of the mission-driven ideology around AGI, which explains many of the motivations of workers at OpenAI and similar AI companies.

Introduction

Hao’s book is a tremendous piece of journalism and research. Across 420 pages, Hao provides an inside look at OpenAI in more detail than is available anywhere else. The closest similar book (that I’ve read, at least) is Parmy Olson’s Supremacy: AI, ChatGPT, and the Race That Will Change the World, which focuses on the race to AGI and profiles both Altman from OpenAI and Demis Hassabis from DeepMind in contrasting ways.

However, Hao’s book has much more depth, reporting, and first-person accounts, and could be characterized as more of an exposé of OpenAI and Sam Altman. In fact, some have suggested Altman is a stand-in for Silicon Valley in general, in the way he espouses a “scale at all costs” ideology (see Empire of AI, with Karen Hao, from the Time to Say Goodbye podcast). I found the various interpretations of AGI, both by the figures in the book and by Hao herself, to be its most salient and interesting theme, so that’s what I’ll focus on here.

If you’d like to listen to our AI Book Club discussion of the book, see Recording of AI Book Club discussion for Karen Hao’s Empire of AI. The discussion provides more balance across different viewpoints and takeaways. You’re also invited to join the AI Book Club: A Human in the Loop and be part of the next discussion.

Clarifying people and timelines

First, Hao’s book clarified many concepts, people, timelines, and movements for me. Although I felt the book could have been shorter (at times it felt endless), I appreciated the detail and reporting in places. For example, she describes Musk’s early role and his reasons for leaving, mainly the power struggle with Altman for control and their disagreement over the direction the company needed to take to compete against Google. Musk’s xAI company was later born from this initial falling out.

Hao also describes Dario Amodei’s journey away from OpenAI (along with his sister and several other senior OpenAI employees) over differing views on AI safety, particularly the commercialization of GPT-3, and his subsequent founding of Anthropic (which, interestingly, also fails to provide a model of AI development that satisfies Hao, given the ends-justify-the-means mentality that still permeates it).

She also explains how Ilya Sutskever, chief scientist and co-founder, left over similar safety concerns to form his own company, Safe Superintelligence Inc., and how Mira Murati, former CTO, left to form Thinking Machines Lab.

I didn’t realize how many of the original AI superstars were once consolidated within OpenAI, only to splinter off into various AI companies.

Boomers, doomers, and Altman

Hao contextualizes the struggles within OpenAI as an ideological conflict between Boomers (AI accelerationists) and Doomers (AI safety advocates). Specifically, many Doomers were concerned about having Sam Altman lead the company that ushers in artificial general intelligence (AGI), especially given his marginalization of safety and alignment concerns. AGI could be a powerful, transformative technology, similar to the atomic bomb, giving the company that develops it ultimate power. In that scenario, where the company’s leader isn’t just a CEO but someone making god-like decisions that shape society, many employees felt Altman wasn’t the right person for the role.

I didn’t realize there were so many competing factions within OpenAI itself. Seeing these competing ideological camps makes the constant turmoil and splintering at OpenAI much more understandable.

In addition to these ideological conflicts, Hao says the defections and other tensions brewing inside OpenAI were often fueled by Altman’s untrustworthy character. She describes Altman as an astute psychological observer who listened carefully to people to understand what they wanted, then promised to deliver on those wants, only to make the same promises to others with opposing views. Those who opposed his agenda, he would quietly work to push out of the company.

The famous comment that Altman was “not consistently candid” (the board’s phrasing during his temporary firing) becomes fully understandable in the context of the many deceits and false promises Hao describes in the book. For the most part, Altman aligns more with the commercial ambitions of the Boomers than with the Doomers, but Hao says that despite interviewing 90+ people, no one could definitively say what Altman actually believes, since he seems to align with opposing views depending on who he’s talking to.

Hao shows how Altman plays whichever AGI ideology card best advances his agenda. For example, when testifying before the Senate, he focuses on the long-term existential risks of AI, the threatening scenarios that could unfold if an authoritarian state like China develops AGI first, and so on. In so doing, he deflects attention from the immediate damage OpenAI is causing through natural resource extraction, human labor exploitation, consolidation of knowledge assets, and more. This insight, that people can be manipulated through whatever draws them to AGI (fear, fascination, or other attractions), is a point Hao returns to frequently in the book.

Hao’s depiction of Altman is harsh and unforgiving, and in our book club we wondered whether it was fair. It’s hard to know. Olson’s portrayal in Supremacy wasn’t this negative. Although Hao needs a supervillain for her empire argument, she bases Altman’s characterization on a compelling body of factual events, including more recent information than Olson’s book covers.

The empire argument

Hao’s central argument is that OpenAI is following the same patterns as empires of old. She describes four imperial methodologies:

  • Ideological justification: Using a grand, vague mission—building AGI to “benefit all of humanity”—to justify their actions, much like past empires used a “civilizing mission.” This narrative, often framed as a “good empire vs. evil empire” race (e.g., against China), serves to rally talent and capital while deflecting from more immediate harms.
  • Resource seizure: Claiming resources that aren’t their own, which includes data from the internet as well as natural resources like water and energy.
  • Labor exploitation: Exploiting low-wage “ghost work” from data annotators and content moderators in places like Kenya, and creating labor-automating technologies.
  • Knowledge monopolization: Concentrating top AI researchers within corporations, which filters public understanding of the technology through a corporate lens.

Some have criticized the book for oversimplifying events and movements. For example, the chapters on labor exploitation seem to cherry-pick individual, heart-wrenching life stories of vulnerable people, and it’s hard to know how representative those experiences are of the larger group. Others have noted that “AI colonialism” is simply the reality of digital capitalism.

Additionally, even though the book’s title reflects it, the empire analogy isn’t one that Hao draws out in detail. The only specific parallel she makes is with the British East India Company, which transitioned into an imperial power as it gained political and economic leverage. Even with this example, there are obvious differences between the ruling empires of previous eras and today’s AI empires, such as the lack of military-enforced control, the companies’ subjection to national laws, and more.

You won’t find extended comparisons with the Roman, Mongol, Ottoman, or Persian empires, for example; the empire comparison is painted in broad strokes only. Those broad strokes made me want to do some outside reading to learn more about the patterns in these other empires and how they actually compare to OpenAI.

The author’s position on AGI and AI

Because this is a journalistic work, the author stays mostly behind the scenes, describing events and details in a matter-of-fact way. However, as someone who prefers personal essays, I wished the author were more present in the writing, sharing more of her own thoughts, especially where those thoughts conflicted.

As the empire theme came into sharper focus, it reminded me of something I learned in my literary non-fiction MFA program 25 years ago: any non-fiction work is a selection of details and observations in service of the story you want to tell. When we write, we include or exclude details to tell a particular story. As such, even the most objective, exhaustive reporting is a selection of facts to support a particular story.

Granted, without this story, the author is merely collecting details, so we do want the author’s interpretation. At the same time, we don’t want the author to force complex events into a neat, simple narrative (the “narrative fallacy”). Hao’s book is mostly journalistic reporting, but it’s also a strong argument and critique of Silicon Valley. Following a conventional journalistic arc, her critique holds the powerful accountable and highlights the suffering of the vulnerable.

My impression was that Hao doesn’t take the possibility of AGI seriously herself. As a result, AGI is often portrayed in the book not as a plausible future technology, but as a far-fetched, manipulative narrative that Altman uses to advance his agenda. This perspective makes her depiction of the ideological struggles at OpenAI feel somewhat one-sided.

To be fair, Hao doesn’t explicitly dismiss the transformative potential of AGI. Rather than debating the likelihood of AGI’s arrival and its impact (as Kurzweil does), she examines how the rhetoric and ideology surrounding AGI are used as tools: how Sam Altman and OpenAI leverage the AGI mission to justify their actions and consolidate power. She highlights that the “AGI for the benefit of all humanity” mission, while perhaps starting out as a sincerely held belief, has become a potent formula for empire-building.

Hao explains how this mission is used to justify OpenAI’s actions:

Six years after my initial skepticism about OpenAI’s altruism, I’ve come to firmly believe that OpenAI’s mission—to ensure AGI benefits all of humanity—may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure. It is a formula with three ingredients:

First, the mission centralizes talent by rallying them around a grand ambition. … Most consequentially, the mission remains so vague that it can be interpreted and reinterpreted—just as Napoleon did to the French Revolution’s motto—to direct the centralization of talent, capital, and resources however the centralizer wants. (400)

Hao’s overall critique is that the AGI narrative is used to manipulate and distract. By focusing on the long-term, existential threats of AGI, figures like Altman can deflect attention from the immediate, tangible harms caused by AI development, such as resource extraction, labor exploitation, and knowledge consolidation. Hao argues that whether AGI is a real possibility or not, its slippery and vague definition allows for an “ends-justify-the-means” mentality, which she sees as a hallmark of an imperialistic approach to technology.

Hao clarifies her overall position on AI near the end:

The critiques that I lay out in this book of OpenAI’s and Silicon Valley’s broader vision are not by any means meant to dismiss AI in its entirety. What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed, will ever emerge from—a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project. (413)

In other words, Hao isn’t opposed to AI as a technology (though this statement comes as somewhat of a surprise, given the momentum against AI throughout the book). Instead, the book’s exposé framing stems from her deliberate journalistic choice to critique a model of AI development that’s centralized, extractive, and harmful. Hao’s position is that there’s a different, more ethical path for AI. She spends her final chapter outlining this alternative vision, providing examples of projects that are smaller in scale, community-focused, and designed to provide specific benefits to the people who contribute their data and resources.

During our book club discussion, as we reflected on whether Hao was too one-sided, someone made a good point: even if her argument is one-sided, it provides a welcome counterbalance to the many books that uncritically champion AI. I agree. In this view, the author doesn’t need to present a more balanced view, because the corpus of current AI literature is probably more pro-AI than critical of it.

Wrestling with moral conflict

Hao’s argument that AI companies function like empires does seem to have some legitimacy. It leaves one, especially those of us working in tech, with a moral conflict. For me, wrestling with that moral conflict means returning to the likelihood of AGI and the degree of its societal impact. In fact, while listening to Leo Laporte’s Intelligent Machines podcast (the “Fat Bears Live Now!” episode), I heard a host ask Steven Levy, one of the premier technology journalists and the author of eight books, the following:

Paris: I’m also curious I mean you’ve been writing about tech for decades. Does the AI era like feel like a genuinely new inflection point or is it more of a chapter and evolution of kind of the same disruptive technologies clashing with social norms that you’ve been chronicling for ages?

Steven: I do feel that is one continuous story that you know like I am writing one story, right? And I started when the PC era was booming. Leo could relate to this, right? … And you know and then on top of that you know came connectivity came the internet and then there was mobile and then there was social and each thing built on top of the previous one. I think this does that but I think some inflection points are bigger than others. The internet was like a massive inflection point, you know, bigger than social, you know, I think bigger than mobile, but then of course that involved the connectivity. I think this [AI] is it could be the biggest of all. (See 14:07 - 15:12.)

I think it’s hard to evaluate AI in the present moment. We might look back in 10 years and wonder how we could all have been duped by such silly and far-fetched notions. Or we might look back in 10 years and realize that we failed to take the safety and alignment concerns seriously while it was still possible, before economic and societal collapse. But in the present, we don’t know. As Levy says, “it could be the biggest of all.” And if so, does that lend more weight and legitimacy to the ideological mission of AGI?

Hao explains how the definition of OpenAI’s mission keeps changing:

… the creep of OpenAI has been nothing short of remarkable. In 2015, its mission meant being a nonprofit “unconstrained by a need to generate financial return” and open-sourcing research, as OpenAI wrote in its launch announcement. In 2016, it meant “everyone should benefit from the fruits of AI after its [sic] built, but it’s totally OK to not share the science,” as Sutskever wrote to Altman, Brockman, and Musk. In 2018 and 2019, it meant the creation of a capped profit structure “to marshal substantial resources” while avoiding “a competitive race without time for adequate safety precautions,” as OpenAI wrote in its charter. In 2020, it meant walling off the model and building an “API as a strategy for openness and benefit sharing,” as Altman wrote in response to my first profile. In 2022, it meant “iterative deployment” and racing as fast as possible to deploy ChatGPT. And in 2024, Altman wrote on his blog after the GPT-4o release: “A key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price).” Even during OpenAI’s Omnicrisis, Altman was beginning to rewrite his definitions once more. (401)

AGI is a slippery, hard-to-pin-down concept itself. We don’t know exactly how AI will transform society as the models get more and more intelligent. In the same Intelligent Machines podcast, Levy also talks about the challenge of defining AGI. He says the definition of AGI is a “moving goalpost,” much like the broader definition of AI has been throughout his career. Levy recalls covering the 1997 chess match between IBM’s Deep Blue and Garry Kasparov. At the time, many believed Deep Blue’s victory represented the pinnacle of AI, with one of his own magazine covers even declaring it “The Brain’s Last Stand.” However, in retrospect, this was just a single milestone in the ongoing evolution of the technology. He draws a parallel to the current discussion around AGI, noting that even people within AI companies are now grappling with how to define it as they get closer to what they perceive as a tipping point. While he doesn’t subscribe to extreme skepticism, he believes that the technology will continue to improve significantly. He suggests that even if “superintelligence” or AGI as a concept is difficult to define, the current capabilities of AI are so advanced that it will take years to fully explore and use them.

As a technical writer, I’m constantly wrestling with this question as I see so many aspects of my job being automated. I’ve switched from writing tech docs to steering AI to write tech docs (and then often iterating and editing). I’ve been a tech writer for 20+ years and have never seen such a transformative shift in the profession.

Some AI experts, for example Ray Kurzweil in The Singularity Is Nearer, predict AGI will arrive in 2029, due to compounding acceleration and the general-purpose nature of AI (meaning it helps build tools and solutions across domains). Does that mean that in several years I’ll be out of work, or in another profession entirely? That kind of existential career angst hangs over us. In this position, I’m much less likely to see the mission and ideology of AI companies as mere manipulation used to extract and exploit the world for empire-like gain. I’m also more understanding of the slippery nature of AGI, given that it’s so hard to predict its impact on my own profession even five years from now.

During our book club discussion, we considered an interesting point. As tech workers, do we have the option to reject AI? Can you really say “no thank you to AI” and continue viable employment, or are we also hostage to AI whether we like it or not? Most of us agreed that currently, if you’re not on the AI train at work, you won’t be employed long.

And this is probably why I keep coming back to the AGI theme and the mission-driven ideology: my job, and that of many others, seems to center on these ideas. I think if the author had more ambivalence toward AGI, her depiction of the events in the book might have been more balanced. I felt that once the author conceived of the empire argument, she began interpreting and arranging details to strengthen it. The reality is that it’s not just Altman who is constantly reinterpreting AI; a much broader audience is doing the same.

Alternative trajectories for AI development

At the end of the book, Hao provides an optimistic chapter describing an alternative vision for AI. She argues for a model that’s smaller in scale, grounded in community benefit tied to the data contributed, reliant on better algorithms rather than brute-force scale, and aimed at outcomes that return specific benefits to contributors. She cites a couple of examples, one being a community that built a speech recognition tool to process audio archives and help preserve the Māori language. The book ends on this optimistic note, arguing that AI doesn’t have to follow its current trajectory (as charted by CEOs like Altman and Amodei) but could be pursued in a more ethical way that doesn’t exploit resources and people at scale.

I appreciated the attempt to end on a note of optimism, but I wasn’t persuaded by this last chapter. The author seems to forget that OpenAI started out with similar altruistic aims: letting everyone reap the benefits of AGI rather than concentrating them in small circles of Silicon Valley elites. This altruistic mission is what attracted so many researchers and experts to OpenAI in the first place. But despite those goals, you need money for AI’s massive infrastructure and compute needs, and you need enormous troves of data to train the models. As Parmy Olson describes, these labs had to compromise some of their altruistic ideals, making deals with Microsoft, Google, and other big tech companies to drive AI forward.

Anthropic, despite its emphasis on safety and alignment, seems to follow its current trajectory because there are few alternatives, especially with so much competition, including from China. In short, it’s hard to see the smaller, niche-focused model winning out, especially as more people use AI tools for general-purpose information rather than highly specialized tasks. The last chapter offers little acknowledgment of why OpenAI’s own altruistic ambitions proved futile in this space, and not much of a case for how smaller, community-focused models could be viable and competitive. It reads as a brief parting gesture toward an alternative rather than a deeply argued case.

Conclusion

Overall, Hao’s book is well worth reading. It’s not a book that will leave you with warm fuzzies about the AI industry or suggest that infinite possibilities are ready to be unlocked (cures for cancer, climate change solutions, etc.). But it will ground you in the real concerns about AI’s potentially damaging impact on the many societies and environments outside of Silicon Valley, the same societies and environments that provide the support and resources needed for AI’s development.
