I don’t think you are crazy; I worry about this too. I think I should go read a book about the Cultural Revolution to learn more about how it happened—it can’t have been just Mao’s doing, because e.g. Barack Obama couldn’t make the same thing happen in the USA right now (or even in a deep-blue part of the USA!) no matter how hard he tried. Some conditions must have been different.*
*Off the top of my head, some factors that seem relevant: Material deprivation. Overton window so narrow and extreme that it doesn’t overlap with everyday reality. Lack of outgroup that is close enough to blame for everything yet also powerful enough to not be crushed swiftly.
I don’t think it could happen in the USA now, but I think maybe in 20 years it could if trends continue and/or get worse.
Then there are the milder forms, which don’t involve actually killing anybody but just involve getting people fired, harassed, shamed, discriminated against, etc. That seems much more likely to me—it already happens in very small, very ideologically extreme subcultures/communities—but also much less scary. (Then again, from the perspective of reducing AI risk, this scenario might be almost as bad? If the AI safety community underwent a “soft cultural revolution” like this, it might seriously undermine our effectiveness.)