What do I tell the people who I know but can’t spend lots of time with?
Clarification: How do I get relative strangers who converse with me IRL to maximally care about the dangers of AI?
Do I downplay my concerns such that they don’t think I’m crazy?
Do I mention it every time I see them to make sure they don’t forget?
Do I tolerate third parties butting in and making wrong statements?
Do I tell them to read up on it and pester them on whether they read it already?
Do I never mention it to laymen to avoid them propagating wrong memes?
Do I seek out and approach people who could be useful to the field or should I just forward or mention them to someone who can give the topic a better first impression than me?
Show a genuine, keen interest in the things they have deep models of[1] before bringing up alignment, unless they invite you to talk first. Steer towards deep conversation with some well-chosen questions[2], but be very open to having it be about whatever they know most about rather than about AI immediately. At some point, they are likely to ask what you’re interested in[3].
Then you have an A/B-tested elevator pitch prepared and adapted for your specific audience (ideally an audience of as few people as possible, which helps to lower the amount of in-the-moment status you need to spend on a brief weird-sounding monologue). Mine usually goes something like:
At some point, we will build artificial systems[4] which are more generally capable[5] than humans. Once that happens, they will tend to be the main drivers of their own development, resulting in a feedback cycle of recursive self-improvement called the intelligence explosion. What happens in the future will likely be determined by the values of the systems that emerge from this, and there is a global research effort to figure out how to make sure those values include humanity’s well-being. I’m trying to contribute to that effort by x.
Then you let them steer, and answer their questions as honestly and accurately as you can. If there’s a lull in the conversation, drawing their attention to the arc of civilization, with accelerating change visible within their lifetimes, as alluded to in The Most Important Century and Sapiens, is a good filler, but let them lead the conversation and politely (praising them for good questions!) give them quickfire answers. Be sure to flag any question you struggle to answer well for further research, and thank them for giving you a question which you don’t have an answer to yet. Feel free to post it to Stampy if we’re missing it from our canonical questions.
This approach has a very good rate of the other person walking away seeming to take the concerns seriously, and a fairly good rate of people later joining the effort. It does depend on you actually having good answers on hand for their first few “but why don’t we just” questions, which means being fairly well read (or having watched all of Rob Miles).
Almost everyone has deep models of something; for example, a supermarket worker recently taught me about logistics and the changes to training needs and autonomy brought on by automation. And by learning the 5-10 minute version of everyone’s deep models, you become more intellectually awesome, which helps with all sorts of things.
e.g. “What interests you?”, or ideally just getting very curious about some aspect of a thing they’ve spent a lot of time on, usually work, studies, or a hobby.
This means they’re in a receptive state, having been heard and explicitly opened the door to you sharing, so the normal memetic filters are lowered. And if they don’t ask you, they’re probably not someone who it would help to sell on alignment.
Intentionally not using the word AI here, so they create a new mental category for the thing I’m describing rather than using their existing sci-fi bucket.
Intentionally not using the word intelligence here, as that brings up associations which are generally unhelpful (elitism, inadequacy, etc).
Thank you plex, I was not aware of this wiki.
The pitch is nice, I’ll incorporate it.
“tell” with what goal? (could you ask in other words or give an example answer so I’ll understand what you’re pointing at?)
I’m concerned with the ethics.
Is it wrong to doom-speak to strangers? Is that the most effective thing here? I’d be lying if I said I was fine, but would it be best to tell them I’m “mildly concerned”?
How do I convey these grave emotions I have while maximally getting the people around me to care about mitigating AI risk?
Should I compromise on truth and downplay my concerns if that will get someone to care more? Should I expect people to be more receptive to the message of AI risk if I’m mild about it?
Why do you care if people around you, who presumably have lives to live, care about AI risk? It’s not a problem like AIDS or groundwater pollution, where individual carefulness is needed to make a difference. In those cases, telling everybody about the problem is important, because it will prevent them having unprotected sex, or dumping used motor oil in their backyard. Unaligned AGI is a problem like nuclear war or viral gain-of-function research, where a few people work on the problem pretty much full time. If you want to increase the population of such people, that’s fine, but telling your mother-in-law that the world is doomed isn’t going to help.
Why do I care if the people around me care about AI risk?
1. When AI is going to rule, we’d like the people to somehow have some power, I reckon.
I mean, creating any superintelligence is a power grab. Making one in secret is quite hostile; shouldn’t people get a say, or at least insight into what their future holds?
2. Still, nobody really knows what we’d like the superintelligence to do. I think an ML researcher is only as capable of voicing their desires for the future as an artist is. The field surely can benefit from interdisciplinary approaches.
3. As with nuclear war, I’m sure politicians will care more when the people care more. AI governance is a big piece of this. Convincing AI devs not to build the superintelligence seems easier when a big percentage of humanity is pressuring them not to do it.
4. Maybe this also extends to international relations. Seeing that the people of a democratic country care about safety makes ventures from that country seem more reliable.
5. I get bummed out when nobody knows what I’m talking about.