That sample is drawn from those who think risks are important enough to go to a conference about the subject.
That seems like a self-selected sample of those with high estimates of p(DOOM).
The fact that this is probably a biased sample from the far end of a long tail should inform interpretations of the results.
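To make the long-tail point concrete, here is a minimal sketch of how tail-selection can inflate a sample median. Everything in it is an illustrative assumption, not survey data: expert estimates are modeled as a lognormal distribution with arbitrary parameters, capped at 1.0, and "attendees" are modeled as the top 5% of estimators.

```python
import random
import statistics

random.seed(0)

# Hypothetical population of expert risk estimates: a long-tailed
# lognormal distribution, capped at 1.0 so values stay valid probabilities.
# The parameters (mu=-4.0, sigma=1.5) are illustrative assumptions only.
population = [min(random.lognormvariate(mu=-4.0, sigma=1.5), 1.0)
              for _ in range(100_000)]

# Model self-selection: "attendees" are the top 5% of the population's
# estimates, i.e. a sample drawn from the far end of the long tail.
cutoff = sorted(population)[int(0.95 * len(population))]
attendees = [p for p in population if p >= cutoff]

print(f"population median estimate: {statistics.median(population):.4f}")
print(f"attendee median estimate:   {statistics.median(attendees):.4f}")
```

Under these assumptions the attendee median lands more than an order of magnitude above the population median, which is the sense in which a tail-selected sample should inform how the survey's headline numbers are read.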
There is also the unpacking bias mentioned in the survey PDF. Going in the other direction are some knowledge effects. Note too that most of the attendees were not AI specialists but experts on asteroids, nukes, bioweapons, cost-benefit analysis, astrophysics, and other non-AI risks. In light of that, it's still interesting that the median AI risk estimate was more than a quarter of the median total risk.
There’s also the possibility that people dismiss the risk out of hand, without even thinking, and that the more you look into the facts, the higher your estimate rises. If so, the people at the conference simply have the most facts.