In discussions I’ve had with close friends about AI risk, the two main sources of skepticism are selection bias and NIMBY (not in my backyard) reasoning. They’re quick to point out that they’ve heard many predictions of doom in their lifetimes and none came true, so why should this one be any different? These conversations are usually short and not very granular.
The long conversations, the ones that deeply engage with specific ideas, end with the conclusion that AI is potentially dangerous but will never affect them personally. Maybe NIMBY isn’t the best way to describe it: they assign a low probability to world-ending doom and a medium probability to AI causing suffering somewhere, at some point in time, yet intuitively believe it won’t touch them personally.