So how bad can things get? Am I crazy to worry about a future Cultural-Revolution-like virtue signaling dystopia, but even worse because it will be tech-enhanced / AI-assisted? For example, during the Cultural Revolution almost everyone who kept a diary (including my own parents) either burned theirs or had their diaries become evidence of various thoughtcrimes (i.e., any past or current thoughts contradicting the party line of the moment, which changed constantly, so nobody was immune). But doing the equivalent of burning one’s diary will be impossible for a lot of people in the next “Cultural Revolution”. Also, during the Cultural Revolution, people eventually became exhausted from the extreme virtue signaling, Mao died, and common sense finally prevailed again. But with AI assistance, none of these things might happen in the next “Cultural Revolution”.
On the other side, I was going to say that it seems unlikely that too much intelligence signaling could cause anything that bad, but then I realized that AI risk is actually a good example of this: a lot of research interest in AI is driven at least in part by intellectual curiosity, and evolution probably gave us intellectual curiosity to better signal intelligence. The whole FAI / AI alignment movement can be seen as people trying to inject more virtue signaling into the AI field! (It’s pretty crazy how much of a blind spot we have about this. I’m only having this thought now, even though I’ve known about signaling and AI risk for at least two decades.)
I don’t think you are crazy; I worry about this too. I think I should go read a book about the Cultural Revolution to learn more about how it happened—it can’t have been just Mao’s doing, because e.g. Barack Obama couldn’t make the same thing happen in the USA right now (or even in a deep-blue part of the USA!) no matter how hard he tried. Some conditions must have been different.*
*Off the top of my head, some factors that seem relevant: Material deprivation. Overton window so narrow and extreme that it doesn’t overlap with everyday reality. Lack of outgroup that is close enough to blame for everything yet also powerful enough to not be crushed swiftly.
I don’t think it could happen in the USA now, but I think maybe in 20 years it could if trends continue and/or get worse.
Then there are the milder forms, which don’t involve actually killing anybody but just involve getting people fired, harassed, shamed, discriminated against, etc. That seems much more likely to me (it already happens in very small, very ideologically extreme subcultures/communities) but also much less scary. (Then again, from the perspective of reducing AI risk, this scenario might be almost as bad? If the AI safety community undergoes a “soft cultural revolution” like this, it could seriously undermine our effectiveness.)