Well, I was asking because I found Yudkowsky’s model of AI doom far more complete than any other model of the long-term consequences of AI. So the point of my original question was “how frequently is a model that is far more complete than its competitors wrong?”.
But yeah, even something as low as a 1% chance of doom demands a very large amount of attention from the human race (similar to the amount of attention we assigned to the possibility of nuclear war).
(That said, I do think the specific value of p(doom) is very important when deciding which actions to take, because it affects the strategic considerations in the “play to your outs” post.)