I’m concerned that the AI safety debate is becoming more and more polarized, much like US politics in general. Many Americans argue online in a way that is very authentic and undiplomatic, but that doesn’t effectively advance their policy objectives. Given how easily other issues fall into this trap, it seems reasonable on priors to expect the same for AI. The result would be a “memetic trench warfare” situation, with a lot of AI-acceleration partisans entrenched in their position. If they can convince just one country’s government to avoid cooperating with “shut it all down”, your advocacy could end up doing more harm than good. So, if I were you, I’d focus a bit more on increasing the minimum level of AI fear in the population, as opposed to optimizing for the mean or median level of AI fear.
With regard to polarization, I’m much more worried about Eliezer, and perhaps Nate, than I am about Rob. If I were you, I’d make Rob spokesperson #1, and try to hire more people like him.
With regard to the “Message and Tone” section, I mostly agree with the specific claims. But I think there is danger in taking it too far. I strongly recommend this post: https://www.lesswrong.com/posts/D2GrrrrfipHWPJSHh/book-review-how-minds-change