Show a genuine, keen interest in the things they have deep models of[1] before bringing up alignment, unless they invite you to talk first. Steer towards deep conversation with some well-chosen questions[2], but be very open to having it be about whatever they know most about rather than about AI immediately. At some point, they are likely to ask what you’re interested in[3].
Then you have an A/B-tested elevator pitch prepared and adapted for your specific audience (ideally as few people as possible, which lowers the amount of in-the-moment status you need to spend on a brief weird-sounding monologue). Mine usually goes something like:
At some point, we will build artificial systems[4] which are more generally capable[5] than humans. Once that happens, they will tend to be the main drivers of their own development, resulting in a feedback cycle of recursive self-improvement called the intelligence explosion. What happens in the future will likely be determined by the values of the systems that emerge from this, and there is a global research effort to figure out how to make sure those values include humanity’s well-being. I’m trying to contribute to that effort by x.
Then you let them steer and answer their questions as honestly and accurately as you can. If there’s a lull in the conversation, drawing their attention to the arc of civilization and the accelerating change visible within their lifetimes, as alluded to in The Most Important Century and Sapiens, is a good filler, but let them lead the conversation and politely (praising them for good questions!) give them quickfire answers. Be sure to flag any question you struggle to answer well for further research, and thank them for giving you a question you don’t have an answer to yet. Feel free to post it to Stampy if it’s missing from our canonical questions.
This approach has a very good rate of the other person walking away seeming to take the concerns seriously, and a fairly good rate of people later joining the effort. It does depend on you actually having good answers on hand for their first few “but why don’t we just” questions, which means being fairly well-read (or having watched all of Rob Miles).
[1] Almost everyone has deep models of something; for example, a supermarket worker recently taught me about logistics and the changes to training needs and autonomy brought on by automation. And by learning the 5-10 minute version of everyone’s deep models you become more intellectually awesome, which helps for all sorts of things.
[2] e.g. “What interests you?”, or ideally just getting very curious about some aspect of a thing they’ve spent a lot of time on, usually work, studies, or a hobby.
[3] This means they’re in a receptive state, having been heard and having explicitly opened the door to you sharing, so the normal memetic filters are lowered. And if they don’t ask you, they’re probably not someone it would help to sell on alignment.
[4] Intentionally not using the word AI here, so they create a new mental category for the thing I’m describing rather than using their existing sci-fi bucket.
[5] Intentionally not using the word intelligence here, as that brings up associations which are generally unhelpful (elitism, inadequacy, etc.).
Thank you plex, I was not aware of this wiki.
The pitch is nice, I’ll incorporate it.