Say I assign a 0.2% probability to a given intervention averting human extinction. If I assign it a 0.1% probability of bringing about extinction (which otherwise would not have occurred), then I’ve lost half the value of an intervention with a 0.2% probability of success and no risk of backfire. A 0.198% probability of extinction would leave a hundredth of the value.
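Spelling that arithmetic out explicitly (my own restatement; value is measured in probability of extinction averted):

\[
\text{net value} \;=\; P(\text{avert}) \;-\; P(\text{backfire}),
\]
\[
0.2\% - 0.1\% = 0.1\% = \tfrac{1}{2}\times 0.2\%,
\qquad
0.2\% - 0.198\% = 0.002\% = \tfrac{1}{100}\times 0.2\%.
\]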
Even at that point, it seems like quite a stretch to say that the best estimate of the Nuclear Threat Initiative’s existential risk impact is that it is 99% as likely to bring about existential catastrophe as to prevent it. And note that for the risk of backfire to wipe out still more orders of magnitude of the x-risk reduction opportunity, the positives and negatives would need to be very finely balanced:
1. If an x-risk reduction intervention has a substantially greater probability of averting than of producing existential catastrophe, then it’s a win.
2. If an x-risk reduction intervention has a greater probability of producing than of averting existential catastrophe, then preventing its use is itself an x-risk intervention with good expected value.

To be neutral, the probability of backfire must fall in a very narrow range.
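A minimal sketch of that trichotomy, under the assumption (mine, not the commenter’s) that informal probability estimates can only be trusted up to some coarse relative resolution:

```python
def classify(p_avert, p_backfire, resolution=0.5):
    """Classify an x-risk intervention by its net expected effect on extinction risk.

    p_avert     -- estimated probability it averts an existential catastrophe
    p_backfire  -- estimated probability it causes one that would not otherwise occur
    resolution  -- illustrative threshold for the relative difference an informal,
                   unaided estimate is assumed able to distinguish reliably
    """
    net = p_avert - p_backfire
    if abs(net) < resolution * max(p_avert, p_backfire):
        return "too close to call without systematic quantitative study"
    if net > 0:
        return "case 1: net x-risk reduction -- a win"
    return "case 2: net x-risk increase -- preventing its use is the x-risk intervention"


# The figures from the comment above:
print(classify(0.002, 0.001))    # 0.1% net reduction: clearly case 1
print(classify(0.002, 0.00198))  # 0.002% net reduction: lost in the noise of an informal estimate
```

Exact neutrality requires the two probabilities to coincide almost exactly; neutrality “as far as an informal estimate can tell” only requires the gap to be smaller than the estimate’s noise, which is the distinction the replies below turn on.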
Also, as Anna notes, uncertainty on the numbers at this scale leads to high value of information.
If you assigned a 0.2% probability to a social intervention producing a specific result, I’d mostly be highly skeptical that you have enough data, of good enough quality, to put that precise a number on it. Once probabilities get small enough, they’re too small for the human brain to estimate accurately.
To be neutral in reality, yes, the probability must fall in a very narrow range. To be neutral within the limits of what a human brain can evaluate without systematic quantitative study, the gap just needs to be small enough that you can’t really tell whether you’re in case 1 or case 2.
Do you mean that people tend to be poorly calibrated? You might mean that events or statements to which people assign 0.2% probability happen more often than that. Or you might mean that they happen less often. But either way one should then shift one’s probability estimates to take that information into account.
Or do you mean that such a number would be unstable: shifting in response to new information, further thinking, or learning how priming and biases affect estimation? (Obviously such on-the-spot estimates depend on noisy factors, and studies show gains just from averaging the estimates one makes at different times, and so forth.)
In either case, if you were compelled to offer betting odds on thousands of independent claims like that (without knowing which side of each bet you’d have to take, and with the setup otherwise structured to make giving your best estimate the winning strategy), how would you do it?
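The comment doesn’t specify a mechanism for making honest reporting the winning strategy; a proper scoring rule such as the logarithmic score is one standard construction with that property. A minimal sketch, with an arbitrary illustrative “true” frequency:

```python
import math

def log_score(reported_p, outcome):
    """Logarithmic scoring rule: the reward is the log of the probability
    assigned to whatever actually happened."""
    return math.log(reported_p if outcome else 1.0 - reported_p)

def expected_score(reported_p, true_p):
    """Expected score of reporting reported_p for a claim that is true
    with probability true_p."""
    return (true_p * log_score(reported_p, True)
            + (1.0 - true_p) * log_score(reported_p, False))

true_p = 0.002  # suppose the claim really is a 1-in-500 event
for reported in (0.0005, 0.002, 0.01, 0.05):
    print(f"report {reported:.4f}: expected score {expected_score(reported, true_p):.5f}")

# The expectation peaks at reported == true_p, so over thousands of such claims
# the winning strategy is to state your actual best estimates, however small
# and however shaky they feel.
```

The log score is used here purely because it makes truthful reporting optimal; a two-sided bet with randomly assigned sides has the same qualitative property.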
As an aside, Yvain’s post on probability estimation seems relevant here.
The second.

Specifically, I am of the opinion that it is well demonstrated that calculating the adverse consequences of social policy is both so complicated and so subject to priming and biases that it is beyond human capacity at this time to accurately estimate whether the well-intentioned efforts of the Nuclear Threat Initiative are more likely to reduce or increase the risk of global thermonuclear war.
If I were forced to take a bet on the issue, I would set the odds at perfectly even. Not because I expect that a full and complete analysis by, say, Omega would conclude that the probability is even, but because I have no ability to predict whether Omega would find that the Nuclear Threat Initiative reduces or increases the chance of global thermonuclear war.