Book Review: The AI Does Not Hate You
Book review: The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World, by Tom Chivers.
This book is a sympathetic portrayal of the rationalist movement by a quasi-outsider. It includes a well-organized explanation of why some people expect that AI will create large risks sometime this century, written in simple language that is suitable for a broad audience.
Caveat: I know many of the people who are described in the book. I’ve had some sort of connection with the rationalist movement since before it became distinct from transhumanism, and I’ve been mostly an insider since 2012. I read this book mainly because I was interested in how the rationalist movement looks to outsiders.
Chivers is a science writer. I normally avoid books by science writers, due to an impression that they mostly focus on telling interesting stories, without developing a deep understanding of the topics they write about.
Chivers’ understanding of the rationalist movement doesn’t quite qualify as deep, but he was surprisingly careful to read a lot about the subject, and to write only things he did understand.
Many times I reacted to something he wrote with “that’s close, but not quite right”. Usually when I reacted that way, Chivers did a good job of describing the rationalist message in question, and the main problem was either that rationalists haven’t figured out how to explain their ideas in a way that a broad audience can understand, or that rationalists are confused. So the complaints I make in the rest of this review are at most weakly directed in Chivers’ direction.
I saw two areas where Chivers overlooked something important.
Rationality
One involves CFAR.
Chivers wrote seven chapters on biases, and how rationalists view them, ending with “the most important bias”: *knowing about biases can make you more biased* (italics his).
I get the impression that Chivers is sweeping this problem under the rug (Do we fight that bias by being aware of it? Didn’t we just read that that doesn’t work?). That is roughly what happened with many people who learned rationalism solely via written descriptions.
Then much later, when describing how he handled his conflicting attitudes toward the risks from AI, he gives a really great description of maybe 3% of what CFAR teaches (internal double crux), much like a blind man giving a really clear description of the upper half of an elephant’s trunk. He prefaces this narrative with the apt warning: “I am aware that this all sounds a bit mystical and self-helpy. It’s not.”
Chivers doesn’t seem to connect this exercise with the goal of overcoming biases. Maybe he was too busy applying the technique to an important problem to notice the connection with his prior discussions of Bayes, biases, and sanity. It would be reasonable for him to argue that CFAR’s ideas have diverged enough to belong in a separate category, but he seems to put them in a different category by accident, without realizing that many of us consider CFAR to be an important continuation of rationalists’ interest in biases.
World conquest
Chivers comes very close to covering all of the layman-accessible claims that Yudkowsky and Bostrom make. My one complaint here is that he only gives vague hints about why one bad AI can’t be stopped by other AIs.
A key claim of many leading rationalists is that AI will have some winner-take-all dynamics that will lead to one AI having a decisive strategic advantage after it crosses some key threshold, such as human-level intelligence.
This is a controversial position that is somewhat connected to foom (fast takeoff), but which might be correct even without foom.
Utility functions
“If I stop caring about chess, that won’t help me win any chess games, now will it?”—That chapter title provides a good explanation of why a simple AI would continue caring about its most fundamental goals.
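That chapter title gestures at a standard argument: an agent that evaluates possible self-modifications does so with the utility function it currently has, so “stop caring about chess” scores badly by the very standard it would erase. Here is a toy sketch of that logic (entirely my own illustration, not code from the book or from any AI safety project):

```python
# Toy illustration: an agent scores the self-modification "stop caring
# about chess" with its *current* utility function, and so rejects it.

def chess_utility(outcome: dict) -> float:
    """The agent's current utility function: only chess wins count."""
    return float(outcome["chess_wins"])

def predicted_outcome(still_cares_about_chess: bool) -> dict:
    # Crude world model: an agent that stops valuing chess stops
    # practicing and playing, so it wins nothing.
    return {"chess_wins": 10 if still_cares_about_chess else 0}

# Both futures are evaluated with chess_utility as it is *now*:
u_keep_goal = chess_utility(predicted_outcome(True))   # 10.0
u_drop_goal = chess_utility(predicted_outcome(False))  #  0.0

assert u_keep_goal > u_drop_goal  # so the agent declines to change its goal
```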
Is that also true of an AI with more complex, human-like goals? Chivers is partly successful at explaining how to apply the concept of a utility function to a human-like intelligence. Rationalists (or at least those who actively research AI safety) have a clear meaning here, at least as applied to agents that can be modeled mathematically. But when laymen try to apply that to humans, confusion abounds, due to the ease of conflating subgoals with ultimate goals.
Chivers tries to clarify, using the story of Odysseus and the Sirens, and claims that the Sirens would rewrite Odysseus’ utility function. I’m not sure how we can verify that the Sirens work that way, or whether they would merely persuade Odysseus to make false predictions about his expected utility. Chivers at least states clearly that the Sirens try to prevent Odysseus (by making him run aground) from doing what his pre-Siren utility function advises. Chivers’ point could be a bit clearer if he specified that in his (nonstandard?) version of the story, the Sirens make Odysseus want to run aground.
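The difference is easy to make precise in toy form. Below is a minimal sketch (my framing, not the book’s) of the two attacks: rewriting the utility function versus corrupting the world model. Both get Odysseus onto the rocks, but only the first changes what he terminally wants:

```python
# Two ways the Sirens could get Odysseus to steer toward shore.

def choose(actions, utility, predict):
    """Pick the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: utility(predict(a)))

actions = ["sail_past", "steer_to_shore"]

def true_model(action):
    return {"gets_home": action == "sail_past",
            "hears_song": action == "steer_to_shore"}

def original_utility(outcome):
    return 10.0 if outcome["gets_home"] else 0.0

# Untampered, Odysseus sails past.
assert choose(actions, original_utility, true_model) == "sail_past"

# (1) Rewritten utility function: he now terminally values the song.
def siren_utility(outcome):
    return 10.0 if outcome["hears_song"] else 0.0

assert choose(actions, siren_utility, true_model) == "steer_to_shore"

# (2) Corrupted predictions: his values are intact, but his world model
# now falsely says the shore route is the way home.
def fooled_model(action):
    return {"gets_home": action == "steer_to_shore",
            "hears_song": action == "steer_to_shore"}

assert choose(actions, original_utility, fooled_model) == "steer_to_shore"
```

From the outside the behavior is identical, which is part of why the question of which one the Sirens do is hard to settle.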
Philosophy
“Essentially, he [Yudkowsky] (and the Rationalists) are thoroughgoing utilitarians.”—That’s a bit misleading. Leading rationalists are predominantly consequentialists, but mostly avoid committing to a moral system as specific as utilitarianism. Leading rationalists also mostly endorse moral uncertainty. Rationalists mostly endorse utilitarian-style calculation (which entails some of the controversial features of utilitarianism), but are careful to combine that with worry about whether we’re optimizing the quantity that we want to optimize.
I also recommend Utilitarianism and its discontents as an example of one rationalist’s nuanced partial endorsement of utilitarianism.
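To make “moral uncertainty” concrete: one common formalization is to weight each moral theory’s verdict by your credence in that theory and maximize the expectation, rather than betting everything on utilitarianism. A minimal sketch, with made-up credences and scores, and glossing over the genuinely hard part (whether scores from different theories are even comparable, which is one version of the worry about optimizing the right quantity):

```python
# Maximize expected choiceworthiness across moral theories.
# All numbers here are invented for illustration.

credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# How choiceworthy each theory judges each option (hypothetical scores).
scores = {
    "utilitarianism": {"divert_trolley":  1.0, "do_nothing": -1.0},
    "deontology":     {"divert_trolley": -1.0, "do_nothing":  0.5},
    "virtue_ethics":  {"divert_trolley":  0.2, "do_nothing":  0.0},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * scores[t][option] for t in credences)

options = ["divert_trolley", "do_nothing"]
best = max(options, key=expected_choiceworthiness)
print(best, round(expected_choiceworthiness(best), 2))  # divert_trolley 0.24
```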
Political solutions to AI risk?
Chivers describes Holden Karnofsky as wanting “to get governments and tech companies to sign treaties saying they’ll submit any AGI designs to outside scrutiny before switching them on. It wouldn’t be iron-clad, because firms might simply lie”.
Most rationalists seem pessimistic about treaties such as this.
Lying is hardly the only problem. This idea assumes that there will be a tiny number of attempts, each with a very small number of launches that look like the real thing, as happened with the first moon landing and the first atomic bomb. Yet the history of software development suggests it will be something more like hundreds of attempts that look like they might succeed. I wouldn’t be surprised if there are millions of times when an AI is turned on, and the developer has some hope that this time it will grow into a human-level AGI. There’s no way that a large number of designs will get sufficient outside scrutiny to be of much use.
And if a developer is trying new versions of their system once a day (e.g. making small changes to a number that controls, say, openness to new experience), any requirement to submit all new versions for outside scrutiny would cause large delays, creating large incentives to subvert the requirement.
So any realistic treaty would need provisions that identify a relatively small set of design choices that need to be scrutinized.
I see few signs that any experts are close to developing a consensus about what criteria would be appropriate here, and I expect that doing so would require a significant fraction of the total wisdom needed for AI safety. I discussed my hope for one such criterion in my review of Drexler’s Reframing Superintelligence paper.
Rationalist personalities
Chivers mentions several plausible explanations for what he labels the “semi-death of LessWrong”, the most obvious being that Eliezer Yudkowsky finished most of the blogging that he had wanted to do there. But I’m puzzled by one explanation that Chivers reports: “the attitude … of thinking they can rebuild everything”. Quoting Robin Hanson:

At Xanadu they had to do everything different: they had to organize their meetings differently and orient their screens differently and hire a different kind of manager, everything had to be different because they were creative types and full of themselves. And that’s the kind of people who started the Rationalists.
That seems like a partly apt explanation for the demise of the rationalist startups MetaMed and Arbital. But LessWrong mostly copied existing sites, such as Reddit, and was only ambitious in the sense that Eliezer was ambitious about what ideas to communicate.
Culture
I guess a book about rationalists can’t resist mentioning polyamory. “For instance, for a lot of people it would be difficult not to be jealous.” Yes, when I lived in a mostly monogamous culture, jealousy seemed pretty standard. That attitude melted away when the Bay Area cultures that I associated with started adopting polyamory or something similar (shortly before the rationalists became a culture). Jealousy has much more purpose if my partner is flirting with monogamous people than if he’s flirting with polyamorists.

Less dramatically, “We all know people who are afraid of visiting their city centres because of terrorist attacks, but don’t think twice about driving to work.”
This suggests some weird filter bubbles somewhere. I thought that fear of cities got forgotten within a month or so after 9/11. Is this a difference between London and the US? Am I out of touch with popular concerns? Does Chivers associate more with paranoid people than I do? I don’t see any obvious answer.
Conclusion
It would be really nice if Chivers and Yudkowsky could team up to write a book, but this book is a close substitute for such a collaboration.
See also Scott Aaronson’s review.