I’m with you on this. I think Yudkowsky did better here with his more serious tone, but even so, we need to look for something better.
Popular scientific educators would be a place to start and I’ve thought about sending out a million emails to scientifically minded educators on YouTube, but even that doesn’t feel like the best solution to me.
The sort of people who actually get listened to are the more political types, so I think they are the people to reach out to. You might say they need to understand the science to talk about it, but I’d still put more weight on charisma than on scientific authority.
Anyone have any ideas on how to get people like this on board?
I just read your post. I agree with it. We need more people on board. We are getting that, but finding more people with PR skills seems like a good idea.
I think the starting point is finding people who are already part of this community who are interested in brainstorming about PR strategy. To that end, I’m writing a post on this topic.
Getting charismatic “political types” to weigh in is unlikely to help with “polarization.” That’s what happened with “global warming” (now “climate change”).
A more effective strategy might be to lean into the polarization: make “AI safety” an issue of tribal identity, which members will support reflexively against enemies. That might delay technological advancement for long enough.
It seems like polarization will prevent public policy changes. If half of the experts think that regulation is a terrible idea, how would governments decide to regulate? Worse yet, if some AI and corporate types are on the other side of polarization, they will charge full speed ahead as a fuck-you to the irritating doomers.
I think there’s a lot we could learn from climate change activists. Having a tangible ‘bad guy’ would really help, so maybe we should be framing it more that way.
“The greedy corporations are gambling with our lives to line their pockets.”
“The governments are racing towards AI to win world domination, and Russia might win.”
“AI will put 99% of the population out of work forever and we’ll all starve.”
And a better way to frame the issue might be “Bad people using AI” as opposed to “AI will kill us”.
If anyone knows of any groups working towards a major public awareness campaign, please let the rest of us know about it. Or maybe we should start our own.
There’s a catch-22 here: if the wording is too extreme it will put people off, because they’ll lump all doomsayers together, whether the fears are over AI, UFOs, or Cthulhu, and dismiss them all equally. (It’s like there’s a tradeoff between level of alarm and credibility.)
On the other hand, claims will also be dismissed if the perceived danger is toned down in the wording. The best way to get the message across, in my opinion, is either to have more influential people spread it (as previously recommended) or to organize focus testing on which parts of the message people don’t understand and workshop how to get them across. If I had to take a crack at structuring a clear, persuasive message, my intuition is to explain the current environment, current AI capabilities, and a specific timeline, and then let the reader work out the implications.
Examples:
‘Nearly 80% of the labor force works in service jobs and current AI technology can do most of those jobs. In ~5 years AI workers could be more proficient and economical than humans.’
‘It’s impossible to know what a machine is thinking. When running large-language-model-based AI, researchers don’t know exactly what they’re looking at until they analyze the metrics. Within 10-30 years an AI could reach a superintelligent level, and it wouldn’t be immediately apparent.’