Here’s how we could reframe the issue in a more positive way: first, we recognize that people are already broadly aware of AI x-risk (though not by that name). I think most people have some idea that ‘robots’ could ‘gain sentience’ and take over the world, and assign it a prior probability ranging from ‘it’s sci-fi’ to ‘I just hope that happens after I’m dead’. What people need to be informed of, then, is this: there is a community of intellectuals and computer people working on the problem; we have our own jargon; here is the progress we have made; and here is what we think society should do. Success could be measured by asking: “should we fund research to make AI systems safer?”
I’d say your first assumption is off. We actually researched something related: we asked people to “list three events, in order of probability (from most to least probable), that you believe could potentially cause human extinction within the next 100 years”. If your assumption were correct, people would put “robot takeover” or something similar in that top three. However, >90% don’t mention AI, robots, or anything similar. Instead, they typically list things like climate change, an asteroid strike, or a pandemic. So based on this research, either people don’t see a robot takeover scenario as likely at all, or they think timelines are very long (>100 yrs).
I do support informing the public more about the existence of the AI Safety community, though; I think that would be good.
Ah wow, interesting. I assumed that most people have seen, or at least know about, The Terminator, The Matrix, I, Robot, Ex Machina, or M3GAN. Obviously people usually dismiss them as sci-fi, but I assumed most people were at least aware of them.