Overall, a headline that seems counterproductive and needlessly divisive.
Probably the understatement of the decade: this article is effectively an “order” from Official Authority to stop talking about what I believe is the most important thing in the world. I guess this is not quite the headline that would maximally make me lose respect for Nature… but it’s pretty close.
This article is a pure appeal to authority. It contains no arguments at all; it exists only as a social signal that Respectable Scientists should steer away from talk of AI existential risk.
The AI risk debate is no longer about actual arguments; it’s about slinging around political capital and scientific prestige. It has become political in nature.
Yep, that’s the biggest issue I have with my own side of the AI risk debate: quite often they don’t even try to state why it isn’t a risk, and instead appeal to social authority. Social authority is evidence, but it’s too easily filtered to be of much use.
To be frank, I don’t blame a lot of the AI risk people for not being convinced that we aren’t doomed. Even though reality doesn’t grade on a curve, the unsoundness of the current arguments against doom doesn’t help, and it is in fact bad that my side keeps doing this.