Basically, it’s a question of how much we should trust our causal models versus trend extrapolation into the future.
In trend-extrapolation world, the fears of AI extinction or catastrophe aren’t realized, like so many other catastrophe predictions, but the world does sort of explode as AI or another general-purpose technology permanently takes 30–50% of jobs or more, creating a 21st-century singularity that continues on for thousands of years.
In the worlds where causal models are right, AI catastrophe can happen, and the problem is unlike any other known. Trend extrapolation fails, and the situation gets more special and heroic.
I disagree that trend-extrapolation world predicts that fears of AI extinction or catastrophe aren’t realized. It all depends on which trends you extrapolate. If you think hard about which trends to extrapolate as fundamental and which to derive from the rest, congrats, now you have a model.
The reason I mentioned that AI catastrophe/extinction isn’t realized is that, across perhaps hundreds or thousands of technologies, people predicted that things would get worse in some way, and nearly all of those claims turned out to be exaggerated if not outright falsified. So under trend extrapolation, we should expect AI alarmism not to come true, with really high probability.
But this could also be reframed as specialness vs. generality: how much can we assume AI is special, compared to other technologies? And I’d argue that’s the crux of the entire disagreement, in that if LW were convinced the general/outside-view explanation was right, or Robin Hanson and AI researchers were convinced the inside view of specialness was right, then both sides would have to change their actions drastically.
Do you actually have a comprehensive list of technology predictions showing that people are generally biased towards pessimism? Plenty of people have falsely predicted that new technology X would make things better, and plenty have falsely predicted that X wouldn’t amount to much and/or would leave things at about the same level of goodness. Different sub-groups of people probably have different biases, so we should look at sub-groups more similar to the current AI safety crowd (e.g. very smart, technically competent, generally techno-optimistic people with lots of familiarity with the technology in question). Different sub-groups of technology probably have different tendencies as well. In fact, obviously your judgment about whether technology X will have good or bad effects should be based primarily on facts about X, rather than on facts about the psychology of the people talking about X! Why are we even assigning enough epistemic weight to this particular kind of trend to bother investigating it in the first place?