Hofstadter presents the problems of cooperation in a context of mainstream risks — risks that public intellectuals and scientists of the time agreed were legitimate topics for discussion, and which members of the news-watching public would have heard of — such as nuclear war and environmental pollution. Yudkowsky presents these problems in a context that features exotic risks — risks most people have not heard of outside of science fiction, and particularly ones involving agents with nonhuman drives: Unfriendly AIs, paperclip maximizers, baby-eating aliens, and so on.
This seems like a matter of literary genre. The math of the Prisoner’s Dilemma works the same regardless of whether you’re worried about cooperating with the Kremlin or an alien. But it probably has some consequences for how people think of the subject. Someone exposed to superrationality/x-rationality ideas via Less Wrong might come away with the erroneous impression that those ideas are somehow fundamentally linked to exotic risks.
On the other hand, bringing exotic agents into the discussion takes a bunch of cheap answers off the table — “Oh, humans are fundamentally cooperative; we all share the same values deep down; all we need to do is hug it out, trust each other, and do the right thing.” The math works even when you don’t share the same values, which is a pretty significant point.
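To make that point concrete, here is a minimal sketch using standard illustrative Prisoner’s Dilemma payoffs (the specific numbers are assumptions for illustration, not anything canonical): the dilemma’s structure refers only to how each player ranks its own outcomes, never to who the other player is or what it values.

```python
# Illustrative sketch (assumed payoff numbers): the Prisoner's Dilemma
# structure depends only on how each player ranks its own outcomes,
# not on who the other player is or what it values.

# Payoffs (mine, theirs) indexed by (my_move, their_move).
# "Theirs" could be measured in lives saved or in paperclips produced;
# the strategic structure is identical either way.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def my_payoff(my_move, their_move):
    return PAYOFFS[(my_move, their_move)][0]

# Defection strictly dominates cooperation for me, whatever they do...
for their_move in ("cooperate", "defect"):
    assert my_payoff("defect", their_move) > my_payoff("cooperate", their_move)

# ...yet mutual cooperation leaves both players better off than mutual defection.
assert PAYOFFS[("cooperate", "cooperate")] > PAYOFFS[("defect", "defect")]

# Nothing above mentions the other player's identity or values, which is why
# the same dilemma arises whether the counterpart is the Kremlin or a
# paperclip maximizer.
```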