Parts of this seem like a great introduction to the AI alignment problem for nontechnical people. (Specifically, the apocalyptic version of the alignment problem—contrast with Brian Christian’s excellent book The Alignment Problem, which focused more on the sorts of problems we already face from ML models right now.) The fictional story in the “prelude” helps make things concrete, and chapter 1 does a good job correcting some misunderstandings about what people are really worried about. On the other hand, chapter 2—with subsections like “What Is Memory?” and “What Is Computation?”—goes down rabbit holes that might lose the casual audience. And other parts—like chapter 6, titled “Our Cosmic Endowment: The Next Billion Years and Beyond”—delve* into topics so speculative and distant that they may undermine the sense of seriousness and respectability the book wants to convey about the alignment problem. (There is a handy chart on page 72 categorizing the chapters into “Not very speculative”, “Speculative”, and “Extremely Speculative”, though.)

* “Delve” is the word that came out of my head naturally as I typed that sentence, and I’m not gonna let the fact that one of my childhood heroes tweeted that it’s only used by ChatGPT and people who “want to sound clever” goad me into rephrasing myself, but it’s fun to realize that my list of personal insecurities now includes will my vocabulary make rich people mistake me for a robot?

The coolest part of the book is chapter 5, which imagines 12 different “AI Aftermath Scenarios” and asks you to consider how you would feel about each of them. Even though I was often silently screaming that’s insane through some of them, it’s a fun exercise. The twelve scenarios are:1

A recurring concern in discussions of these scenarios is that humans would find life meaningless if we were no longer at the forefront of things:

Many people in the benevolent dictatorship [scenario]… [would have] lives that feel pleasant but ultimately meaningless. Although people can create artificial challenges, from scientific rediscovery to rock climbing, everyone knows that there is no true challenge, merely entertainment. There’s no real point in humans trying to do science or figure other things out, because the AI already has. There’s no real point in humans trying to create something to improve their lives, because they’ll readily get it from the AI if they simply ask.2

I’d like to point out that most humans have never been involved in pushing the boundaries of human knowledge. The average person’s quest to improve their own life usually centers on trying to get for themselves things that others already have. And artificial challenges are wildly popular: there are far more gamers in the world than scientists. I can’t help but roll my eyes at the idea of us collectively declining to create paradise out of fear that we’d get bored.

In its discussion of whether jobs lost to AI will be replaced by “new technology-enabled professions that we haven’t even thought of yet”3, the book says something I found really surprising. That’s what happened with “the computer revolution”4, right? No:

…the vast majority of today’s occupations are ones that already existed a century ago, and when we sort them by the number of jobs they provide, we have to go all the way down to twenty-first place in the list until we encounter a new occupation: software developers, who make up less than 1% of the U.S. job market….5

The main trend on the job market isn’t that we’re moving into entirely new professions. Rather, we’re crowding into those pieces of terrain … that haven’t yet been submerged by the rising tide of technology!6