Because human attention is limited and many people try to convince us of the importance of their favourite cause, we cannot engage with everyone’s arguments in detail. Instead, we have to rely on heuristics to filter out implausible claims. Depending on how it is presented, the case for AI risk can trip many of these generally useful heuristics, eight of which are detailed in this post. Given this outside-view perspective, it is unclear whether we should actually expect ML researchers to spend time evaluating the arguments for AI risk.
Flo’s opinion:
I can remember being critical of AI risk myself for similar reasons, and I think it is important to frame pitches carefully to avoid triggering these heuristics. This is not to say that we should avoid criticism of the idea of AI risk, but criticism is a lot more helpful when it comes from people who have actually engaged with the arguments.
My opinion:
Even after knowing the arguments, I find six of the heuristics quite compelling: technology doomsayers have usually been wrong in the past, there isn’t a concrete threat model, it’s not empirically testable, it’s too extreme, it isn’t well grounded in my experience with existing AI systems, and it’s too far off to do useful work now. All six make me distinctly more skeptical of AI risk.