Getting charismatic “political types” to weigh in is unlikely to help with “polarization.” That’s what happened with global warming (now “climate change”).
A more effective strategy might be to lean into the polarization: make “AI safety” an issue of tribal identity, which members will support reflexively against enemies. That might delay technological advancement for long enough.
It seems like polarization will prevent public policy changes. If half of the experts think that regulation is a terrible idea, how would governments decide to regulate? Worse yet, if some AI and corporate types end up on the other side of that divide, they will charge full speed ahead as a fuck-you to the irritating doomers.
I think there’s a lot we could learn from climate change activists. Having a tangible ‘bad guy’ would really help, so maybe we should be framing it more that way.
“The greedy corporations are gambling with our lives to line their pockets.”
“The governments are racing towards AI to win world domination, and Russia might win.”
“AI will put 99% of the population out of work forever and we’ll all starve.”
And a better way to frame the issue might be “Bad people using AI” as opposed to “AI will kill us”.
If anyone knows of any groups working towards a major public awareness campaign, please let the rest of us know about it. Or maybe we should start our own.
There’s a catch-22 here: if the wording is too extreme, it will put people off, because they’ll lump all doomsayers together, whether the fears are over AI, UFOs, or Cthulhu, and dismiss them equally. (It’s like there’s a tradeoff between level of alarm and credibility.)
On the other hand, the claims will also be dismissed if the wording tones down the perceived danger. The best way to get the message across, in my opinion, is either to have more influential people spread it (as previously recommended) or to run focus testing on which parts of the message people don’t understand and workshop how to get them across. If I had to take a crack at structuring a clear, persuasive message, my intuition is to explain the current environment, current AI capabilities, and a specific timeline, and then let the reader work out the implications.
Examples:
‘Nearly 80% of the labor force works in service jobs, and current AI technology can do most of those jobs. In ~5 years AI workers could be more proficient and more economical than humans.’
‘It’s impossible to know what a machine is thinking. When running a large-language-model-based AI, researchers don’t know exactly what they’re looking at until they analyze the metrics. Within 10–30 years an AI could reach a superintelligent level, and it wouldn’t be immediately apparent.’