Sorry for a minor nitpick, but it’s worth making. It doesn’t detract from Duncan’s overall point at all, and if people lose Bayes points for every single nitpick, then Duncan nonetheless loses very few from this.
Because they never heard people openly proselytizing Nazi ideology, they assumed that practically no one sympathized with Nazi ideology.
And then one Donald J. Trump smashed into the Overton window like a wrecking ball, and all of a sudden a bunch of sentiment that had been previously shamed into private settings was on display for the whole world to see, and millions of dismayed onlookers were disoriented at the lack of universal condemnation because surely none of us think that any of this is okay??
But in fact, a rather large fraction of people do think this is okay, and have always felt this was okay, and were only holding their tongues for fear of punishment, and only adding their voices to the punishment of others to avoid being punished as a sympathizer. When the threat of punishment diminished, so too did their hesitation.
In the case of political ideologies, it might be hard to create the potential energy for intense political views from scratch (or transformative technology could suddenly blindside us and make it really easy). But it has generally been pretty easy for elites to engineer societal/psychological positive feedback loops sufficient to generate millions of new ideological adherents, or at least to take existing sentiments among millions of people and hone them into something specific and targeted like Nazism. So in the particular case of political ideologies, it’s wrong to assume that because a bunch of Nazis appeared, they were mostly there all along but hidden.
The other examples in the post are still very solid and helpful afaik (the Autism label itself might be too intractably broken in our current society for practical use, but staying silent about the problem doesn’t seem helpful).
I like this post better than Raemon’s Dark Forest Theories, even though Raemon’s post is more wordcount-efficient and uses examples that I find more interesting and relevant. I think this post potentially does a very good job getting at the core of why things like Cryonics and Longtermism did not rapidly become mainstream. (There might be substantial near-term EV in doing work to understand the unpopular-cryonics problem and the unpopular-longtermism problem, because those two problems are surprisingly closely linked to why human civilization is currently failing horribly at AI safety, and such work might even generate rapid solutions viable for short timelines, e.g. by providing momentum for rapidly raising the sanity waterline.)
it’s wrong to assume that because a bunch of Nazis appeared, they were mostly there all along but hidden
I’d say it’s wrong as an “assumption” but very good as a prior. (The prior also suggests new white supremacists were generated, as Duncan noted.) Unfortunately, good priors (like bad ones) often don’t have ready-made scientific studies to justify them, but it’s pretty clear that gay and mildly autistic people were there all along, and I have no reason to think the same isn’t true of white supremacists, so the prior holds. I also agree that it has proven easy for some people to “take existing sentiments among millions of people and hone them”, but since you call those people “elites”, I’d point out that some of them spend much of their time hating on “elites” and “elitism”...
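To make the “good as a prior” point a bit more concrete, here is a toy Bayes calculation (every number in it is invented purely for illustration, not an estimate of anything): when public expression of a view is socially punished, observing near-total public silence barely moves you below whatever base rate you started with.

```python
# Toy Bayes update (illustrative numbers only): how much does
# "nobody says X publicly" lower P(a given person privately sympathizes with X)
# when saying X in public gets you punished?

prior_sympathy = 0.10             # assumed base rate of private sympathy
p_silent_given_sympathy = 0.95    # punishment keeps almost all sympathizers quiet
p_silent_given_none = 1.00        # non-sympathizers don't proselytize X either

# Total probability of observing silence from a random person.
p_silent = (p_silent_given_sympathy * prior_sympathy
            + p_silent_given_none * (1 - prior_sympathy))

# Posterior probability of private sympathy given observed silence.
posterior = p_silent_given_sympathy * prior_sympathy / p_silent
print(f"P(sympathy | silence) = {posterior:.3f}")  # ~0.095, barely below the 0.10 prior
```

The update only becomes large if silence would be surprising for a sympathizer, i.e. if expressing the view carried little cost.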
I think this post potentially does a very good job getting at the core of why things like Cryonics and Longtermism did not rapidly become mainstream
Could you elaborate?