My guess is that this would be quite harmful in expectation, by making it significantly more likely that AI safety becomes red-tribe-coded and shortening timelines-until-the-topic-polarizes-and-everyone-gets-mindkilled.
If Eliezer goes on Glenn Beck a bunch and Paul Christiano goes on Rachel Maddow a bunch then maybe we can set things up such that the left-wing orthodoxy is that P(AI-related extinction)=20% and the right-wing orthodoxy is that P(AI-related extinction)=99% 😂😂
Hmmmm… can we get the “P(AI-related extinction) < 5%” position branded as libertarian? Cement it as the position of a tiny minority.
Not very familiar with US culture here: is AI safety not extremely blue-tribe coded right now?
AI safety in the sense of preventing algorithms from racial discrimination is blue-tribe coded. AI safety in the sense of preventing human extinction is not coded that way.
You have blue-coded editorials like https://www.nature.com/articles/d41586-023-02094-7