I like this model, much of which I would encapsulate in the tendency to extrapolate from past evidence. It resonates with the image I have of the people who are reluctant to take existential risks seriously, and it is also more fertile ground for actionable advice than the simple explanation of “because they haven’t sat down to think deeply about it”. That latter explanation might hold some truth, but tackling it alone would be unlikely to make people act on reducing existential risks: they would still need to become aware of, and able to fix, the failure modes in their thinking, and to recognize that AGI is fundamentally different, so that extrapolating from past evidence is unhelpful.
I advocate shattering the Overton window and spreading arguments about the fundamental distinctions between AGI and our intuitive notions of intelligence, and these four points offer good, reasonable directions for doing so. But the difficulty also lies in getting those arguments across to people outside niche communities like LW; in building a bridge between the ideas developed at LessWrong and the people who need to learn about them but are unlikely to come across LessWrong on their own.