partly as a result of other projects like the Existential Risk Persuasion Tournament (conducted by the Forecasting Research Institute), I now think of it as a data-point that “superforecasters as a whole generally come to lower numbers than I do on AI risk, even after engaging in some depth with the arguments.”
I participated in the Existential Risk Persuasion Tournament and I disagree that most superforecasters in that tournament engaged in any depth with the arguments. I also disagree with the phrase “even after arguing about it”—barely any arguing happened, at least in my subgroup. I think much less effort went into these estimates than it would be natural to assume based on how the tournament has been written about by EAs, journalists, and so on.
Seconded for whatever group I participated in.