“The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand”
I don’t think The Terminator was hard to understand. The second some credible people say that AI is a threat, the media reaction is going to be excessive, as it always is.
The second you get some credible people saying that AI is a threat
It’s already happened—didn’t you see the media about Stephen Hawking saying AI could be dangerous? And Bill Joy?
The broader point I am trying to make is that the general public is not rational in terms of collective epistemology. People don’t respond to complex logical and quantitative analyses. Yes, Joy and Hawking did say that AI is a risk, but there are many claimed risks, including the risk that vaccinations cause autism and the risk that foreign workers will take all our jobs. The public does not distinguish between these risks.
Thanks; I was mistaken. Would you say, then, that mainstream scientists are similarly irrational? (The main comparison I have in mind throughout this section, by the way, is global warming.)
I would say that poor social epistemology, poor social axiology, and mediocre individual rationality are the big culprits that prevent many scientists from taking AI risk seriously.
By “social axiology” I mean that our society is just not consequentialist enough. We don’t solve problems that way, and even the debate about global warming is not really dealing well with the problem of how to quantify risks under uncertainty. We don’t try to improve the world in a systematic, rational way; rather, it is done piecemeal.