One way people can help is by stating their beliefs about AI, and their confidence in those beliefs, to the friends, family members, and acquaintances they talk to.
Currently, many people are coming across things in the news about humanity going extinct if AI progress continues as it has and no more alignment research happens. I would expect many of them not to think seriously about it, because it's really hard to shake out of the "business as usual" frame. Most of your friends and family members probably know you're a reasonable, thoughtful person, and it seems helpful to make people feel comfortable engaging with the arguments in a serious way instead of filing it away in some part of their brain that doesn't affect their actions or predictions about the future in any way.
I have talked to my dad about how I feel very uncertain about making it to 40, and how (with lots of uncertainty) I currently expect not to unless there's coordination to slow AI development or a lot more effort toward AI alignment. He is new to this, so he had a bunch of questions, but he said he didn't find it weird and now thinks it is scary. It was interesting noticing the inferential distance: he initially had confusions like "If the AI gets consciousness, won't it want to help other conscious beings?" and "It feels weird to be so against change; humanity will adapt," but I think he gets it now.
I think sincerely sharing the things you believe with more people is good.
I wasn't expecting the development endgame to be much different, though it's a bit early to say. At least it's LLMs and not Atari-playing RL agents. Also, I'm much less certain about the inevitability of boundary-norm-ignoring optimizers now, in a world that's not too dog-eat-dog at the top. This makes precise value targeting less crucial for mere survival, though most of the Future is still lost without it.
So the news is good. I'm personally down to 70% probability of extinction, mostly from the first AGIs failing to prevent the world from getting destroyed by their research output, since it isn't looking like they are going to be superintelligent out of the box. I'm no longer expecting the first AGIs to intentionally destroy the world, unless users are allowed to explicitly and successfully wish for it to be destroyed, which bizarrely seems to account for a significant portion of the risk.
Do you think it's worth doing if it will cause them distress? I find that hard to decide.
I think there will probably be even more discussion of AI x-risk in the media in the near future. My own media consumption is quite filtered, but, for example, the last time I was in an Uber, the news channel on the radio mentioned Geoffrey Hinton thinking AI might kill us all. And it isn't a distant problem for my parents the way climate change is, because they use ChatGPT and are both impressed and concerned by it. They'll probably form thoughts on it anyway, and I'd prefer to be around to respond to their confusion and concerns.
It also seems plausible that there will be more AI panic and anxiety among some fraction of the general public in the near future. And I'd prefer that the people I love be eased into it rather than feeling panicked and anxious all at once without knowing how to deal with it.
It's also useful for me to get a pulse on how people outside my social group (which is mostly heavily filtered as well) respond to AI x-risk arguments. For example, I didn't know before which ideas that seemed obvious to me (being more intelligent doesn't mean you have nice values, why humans care about the things we care about, that if something much smarter than us aims to take over it will succeed quickly, etc.) were completely new to my parents or to friends who are not rationalist-adjacent(-adjacent).
I also think being honest with people close to me is more compassionate and good, but that by itself wouldn't compel me to actively discuss AI x-risk with them.