I am deeply worried about the prospect of a botched fire alarm response. In my opinion, the most likely result of a successful fire alarm would not be that society suddenly gets its act together and finds the best way to develop AI safely. Rather, the most likely result is that governments and other institutions implement very hasty and poorly thought-out policy, aimed at signaling that they are doing “everything they can” to prevent AI catastrophe. In practice, this means poorly targeted bans, stigmatization, and a redistribution of power from current researchers to bureaucratic agencies that EAs have no control over.
I do concede there is a real risk that this plays out badly, and would very strongly encourage those considering independent efforts to coordinate centrally so we maximize the odds of any distributed actions going “right”.
Reflecting on this and other comments, I decided to edit the original post to retract the call for a “fire alarm”.