I see your point, but I think this is unavoidable. Also, I haven’t heard of anyone who was stressed out much after we informed them.
Personally, I was informed (or convinced, perhaps) a few years ago at a talk by Anders Sandberg of FHI. That did cause stress and negative feelings for me at times, but it also allowed me to work on something I think is really meaningful. I never for a moment regretted being informed. How many people do you know who say, “I wish I hadn’t been informed about climate change back in the nineties”? For me, zero. I do know a lot of people who would be very angry if someone had deliberately not informed them back then.
I think people can handle these emotions pretty well. I also think they have a right to know. In my opinion, we shouldn’t decide for others what is good or bad to be aware of.
Here’s how we could reframe the issue in a more positive way: first, we recognize that people are already broadly aware of AI x-risk (though not by that name). I think most people have an idea that ‘robots’ could ‘gain sentience’ and take over the world, and assign it some prior probability ranging from ‘it’s sci-fi’ to ‘I just hope that happens after I’m dead’. Therefore, what people need to be informed of is this: there is a community of intellectuals and computer people working on the problem, we have our own jargon, here is the progress we have made, and here is what we think society should do. Success could be measured by the question “Should we fund research to make AI systems safer?”
I’d say your first assumption is off. We actually researched something related. We asked people: “List three events, in order of probability (from most to least probable), that you believe could potentially cause human extinction within the next 100 years.” If your assumption were correct, they would list “robot takeover” or something similar in their top three. However, more than 90% didn’t mention AI, robots, or anything similar. Instead, they typically named things like climate change, an asteroid strike, or a pandemic. So based on this research, either people don’t see a robot takeover scenario as likely at all, or they think timelines are very long (>100 years).
I do support informing the public more about the existence of the AI Safety community, though; I think that would be good.
Ah wow, interesting. I assumed that most people had seen, or at least knew of, The Terminator, The Matrix, I, Robot, Ex Machina, or M3GAN. Obviously people usually dismiss them as sci-fi, but I assumed most people were at least aware of the scenario.
Thank you!