However, it’s harder to find uncontroversial questions which would be diagnostic of these errors.
Perhaps an expert’s beliefs about the costs of better information and the costs of delay might be assessed with a willingness-to-pay question. For example, one could ask what hypothetical benefit to everyone now living on Earth the expert would sacrifice in order to gain perfect understanding of some technical unknowns related to AI risks, or what hypothetical benefit they would accept at the cost of perfect future helplessness against AI risks. However, even this sort of question might seem to frame things hyperbolically.