I don’t think the answer is as simple as changing terminology or carefully modelling their current viewpoints and bridging the inferential divides.
Indeed, and that is exactly the message I want communicators to grasp: I have very little reach, but I have significant experience talking with people like this, and I want to transfer some of what I've learned to people who can use it better.
The thing I've found most useful is being able to acknowledge that significant parts of their viewpoint are reasonable. For example, one framing I've tried is "AI isn't just stealing our work, it's also stealing our competence." It hasn't stuck, though. I also find it helpful to point out that yes, climate change is an accurate (if somewhat understated) description of what doom looks like.
I do think "allergies" are a good way to think about it, though. These people aren't unable to consider what might happen if AI keeps going as it is; they're part of a culture that is trying to apply antibodies to AI. Those antibodies include active-inference wishcasting like "AI is useless." They know it isn't completely useless, but the antibody only binds if they refuse to acknowledge that. And their criticisms aren't wrong, just incomplete: the problems they raise with AI are typically real, but they emphasize them less for their impact than for their potential to reduce AI's marketability.