Cultural evolution is a bit of a catch-22: you need to keep it going for generations before you gain an advantage from it, but you can’t keep it going unless you’re already gaining an advantage from it. A young human today has a massive advantage in absorbing existing culture, but other species don’t and never did. It requires a very long up-front investment without immediate returns, which is precisely what evolution tends not to favor.
Regarding the relevance to AI, the importance of cultural evolution is a strong counterargument to fast take-off. Yudkowsky himself argues that humans somehow separated themselves from other apes, such that humans can uniquely do things that seem wildly out-of-distribution, like going to and walking on the moon; that we are therefore more generally intelligent; and that AI could therefore similarly separate itself from humans by becoming even more generally intelligent, gaining capabilities that are even more wildly out-of-distribution. However, the thing that separates humans from other apes is cultural evolution, and it’s a one-time gain without a superlative: there is no “even more cultural evolution” move left for an AI to repeat. Moreover, it’s a counterargument to the very idea of general intelligence, because it shows that going to and walking on the moon are not in fact as wildly out-of-distribution as they first seem. The astronauts, and the engineers who built their vehicles, were trained exhaustively in the required capabilities, which had previously been accumulated culturally. Walking on the moon only seems striking because the compounding speed of memetic evolution is so much higher than the extraordinarily slow pace of genetic evolution.
A further argument I would make for the relevance of cultural evolution to AI is that, in my view, it shows that the ability of individual human agents to discover new capabilities is on average extremely limited, and that the same is likely true for AI, though perhaps to a somewhat lesser extent. Humanity as a whole makes great strides because, among the many who try new things, the very few who succeed pass their new capabilities on to the others. The vast majority of any individual’s capabilities rests on absorbing existing knowledge and habits. At the same time, most individuals never pass on anything new, and even when they do it’s the luck of the draw. I think the same is mostly true for any individual AI, because useful behaviors are inherently rare in the space of all behaviors. If this is indeed true, then we have less to fear from misaligned individuals than from misaligned cultures.
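To put rough numbers on that dynamic, here is a minimal toy simulation; it is my own sketch, and every parameter in it (`P_INNOVATE`, `IMPROVEMENT`, the population size) is an illustrative assumption rather than anything drawn from a real model. Each generation, every agent first copies the best capability level already in the culture; a rare few then stumble on an improvement and pass it on.

```python
import random

random.seed(0)

POP_SIZE = 1_000      # assumed population size
GENERATIONS = 50
P_INNOVATE = 0.01     # assumed: useful new behaviors are rare
IMPROVEMENT = 1.0     # assumed: size of one successful innovation

culture = 0.0         # best capability level accumulated so far

for gen in range(1, GENERATIONS + 1):
    innovators = 0
    best_this_gen = culture
    for _ in range(POP_SIZE):
        inherited = culture                  # copy existing culture
        if random.random() < P_INNOVATE:     # rare individual discovery
            innovators += 1
            best_this_gen = max(best_this_gen, inherited + IMPROVEMENT)
    culture = best_this_gen                  # successes are passed on
    if gen % 10 == 0:
        print(f"gen {gen:3d}: shared capability {culture:5.1f} "
              f"(innovators this generation: {innovators}/{POP_SIZE})")

# For contrast: an agent that could not copy anyone would expect only
# GENERATIONS * P_INNOVATE improvements over its whole lifetime.
print(f"expected capability of an isolated agent: "
      f"{GENERATIONS * P_INNOVATE:.1f}")
```

The point of the sketch is the contrast: the population compounds at roughly one improvement per generation even though about 99% of agents never innovate, while an isolated individual, under the same assumptions, would expect only about 0.5 improvements over its entire lifetime.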