
In brief
Authors Yudkowsky and Soares warn that AI superintelligence will make humans extinct.
Critics say extinction talk overshadows real harms like bias, layoffs, and disinformation.
The AI debate is split between doomers urging a halt and accelerationists pushing for faster development.
It may sound like a Hollywood thriller, but in their new book “If Anyone Builds It, Everyone Dies,” authors Eliezer Yudkowsky and Nate Soares argue that if humanity creates an intelligence smarter than itself, survival wouldn’t just be unlikely; it would be impossible.
The authors argue that today’s systems aren’t engineered line by line but “grown” by training billions of parameters. That makes their behavior unpredictable.
As intelligence scales, drives such as self-preservation or power-seeking could emerge independently, the authors warn. If such a system ever outstrips human control, they say, extinction would follow by default.
They call the current race among tech giants and governments to build ever-larger models a “suicide race.” No villainy required, just incompetence.
Why it matters
The book lands with the AI community already fractured into camps.
AI doomers argue that extinction is the inevitable outcome unless humanity halts or drastically slows progress. Accelerationists counter that pushing forward could deliver breakthroughs in medicine, science, and economics, while slowing down poses a greater danger.
Nick Bostrom’s “Superintelligence” first made the existential case a decade ago. Yudkowsky and Soares sharpen it into a klaxon call.
But critics worry that fixation on extinction distracts from harms already here: bias, layoffs, surveillance, and disinformation.
What the AI models say
Since the book is about them, we asked the leading AI models what they think of it.
As far as we know, none of the LLMs have actually read the book yet, so the following reviews are based on the platforms’ interpretations of the human reviews they have ingested.
ChatGPT
OpenAI’s ChatGPT framed the book as a useful provocation rather than a prophecy.
“The book is valuable as a perspective-shifting tool, even if you think it overshoots,” the chatbot responded. “It sharpens intuitions about alignment and unintended consequences. But treating its worst-case as inevitable ignores uncertainty around future architectures, regulations, and constraints.”
ChatGPT said the book’s model of future AI development is contingent on how architectures evolve and on what regulatory, social, or technical guardrails emerge.
“That uncertainty means I wouldn’t treat their worst-case as inevitable,” ChatGPT sniffed.
Meta
Meta AI struck a middle note, saying the bookâs dire warnings are exaggerated but still valuable for sparking debate.
“Its dire predictions may feel exaggerated, but the emphasis on caution and international cooperation is justified,” Meta AI responded. “Whether or not you agree with its conclusion, the book sparks needed debate about responsible AI development.”
Meta’s AI added that the authors’ emphasis on caution and international cooperation is well-justified, given the potentially catastrophic consequences of unaligned AI development.
Gemini
Google’s Gemini cast the book as a terrifying but clarifying read, even if its proposed solutions seem unrealistic.
“The book is essential for understanding the extreme end of AI risk,” Gemini said. “Its proposed solution, a total shutdown, is politically unachievable, but the warning can galvanize investment in safety and control techniques before we reach the point of no return.”
Gemini said the book defines the extreme stakes of superintelligence, which, in turn, helps justify robust (if less radical) safety measures today.
Claude
Anthropic’s Claude was even more skeptical, which was notable given that Anthropic is generally considered the most safety-conscious of the leading AI labs. Claude described the book’s tone as “overconfident” and its framing as too binary.
“The authors correctly identify real risks, but their certainty feels overconfident,” Claude said. “They dismiss progress in AI safety research and frame the issue as a binary between perfection and extinction. That makes their message less constructive.”
While the authors’ warnings highlight important risks that warrant substantial resources and caution, Claude said, the “everyone dies” framing appears designed more to grab attention than to accurately represent the complex uncertainties involved.
“The risks are serious enough to take seriously without needing to accept their claims of inevitable catastrophe,” the AI said.
Grok
Elon Musk’s Grok also surprised us, since Musk has been promoting it as the anti-mainstream-media AI of record.
But Grok was quick to cite “a scathing NYT review, [calling] its arguments weak or alarmist.”
“This book is a gut-punch reminder that intelligence without wisdom is a liability, and we’ve got none of the latter baked in yet,” Grok responded. “That said, it’s doomer porn for rationalists: thrilling in its fatalism, but it underplays human adaptability. xAI’s ethos, curiosity over conquest, suggests we can thread the needle: pursue bold AI for cosmic questions while embedding safeguards from day one.”
Beyond that callout, however, Grok said that “overall, it’s a provocative, efficient read for anyone grappling with AI’s future: compelling if you’re open to pessimism, but potentially overstated if you’re optimistic about tech progress.”