The odds of any particular AGI destroying the world are much lower than the odds of AGI as a whole destroying the world, so the questions in “Prediction based on approach for creating AGI:” may be miscalibrated.
I interpreted that question as a conditional probability, so it isn't required to be strictly less. In particular: P(X causes catastrophe | X was of the given type, and was the first AGI developed, on the given date).
Nothing requires this conditional probability to be less than the probability that some AGI destroys the world, conditional on any AGI being developed.
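To make that concrete, here is a toy example (the numbers are invented purely for illustration): suppose the first AGI is equally likely to be of type A or type B, with P(doom | type A) = 0.8 and P(doom | type B) = 0.2. By the law of total probability,

P(doom | some AGI is developed) = 0.5 × 0.8 + 0.5 × 0.2 = 0.5,

so conditioning on type A gives a strictly higher probability than the headline number, and conditioning on type B a strictly lower one. Per-approach answers can legitimately sit on either side of the overall estimate.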
Excellent point.
I do think that the first AGI developed will have a big effect on the probability of doom, so hopefully that effect is something that can be derived from the question. But it would be interesting to control for what other AIs do, in order to get better-calibrated statistics.