This experiment has been done before.
If you have a framing of the AI Doom argument that can cause a group of super-forecasters (or AI risk skeptics, or literally any group with an average pDoom < 20%) to change their consensus, I would be exceptionally interested in seeing that demonstrated.
Such an argument would be neither bad nor weak; it is precisely the type of argument I have been hoping to find by writing this post.
> Please notice that your position is extremely non-intuitive to basically everyone.
Please notice that Manifold both expects AGI soon and puts pDoom low.
Dying is a symmetric problem; it's not as though we can't go extinct without AGI. If you want to calculate p(human extinction | AGI), you have to consider the ways AGI can both increase and decrease p(extinction). And the best methods currently available to humans for aggregating low-probability estimates are expert surveys, groups of super-forecasters, and prediction markets, all of which agree on pDoom < 20%.
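To make the symmetric accounting concrete, here is a minimal sketch of the comparison; the numbers are hypothetical placeholders for illustration, not estimates anyone has endorsed, and interaction effects are ignored:

```latex
% Symmetric accounting of extinction risk (illustrative only):
%   without AGI:  p(\text{ext} \mid \neg\text{AGI}) = p_{\text{other}}
%   with AGI:     p(\text{ext} \mid \text{AGI}) = p_{\text{AGI-caused}} + p_{\text{other}}'
% where p_{\text{other}}' is the non-AGI risk remaining after AGI helps mitigate it.
% AGI raises net extinction risk only when
%   p_{\text{AGI-caused}} > p_{\text{other}} - p_{\text{other}}'.
% Hypothetical example: if p_{\text{other}} = 0.10 and AGI halves it
% (p_{\text{other}}' = 0.05), the AGI-caused risk must exceed 0.05
% before AGI is net-negative for survival.
```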