If you assigned a 0.2% probability to a social intervention producing a specific result, I’d mostly be highly skeptical that you have enough data, of good enough quality, to put that precise a number on it. Once probabilities get small enough, they’re too small for the human brain to estimate accurately.
To be neutral in reality, yes, the probability must be in a very narrow range. To be neutral within the ability of a human brain to evaluate without systematic quantitative study, it just needs to be small enough that you can’t really tell if you’re in case 1 or case 2.
Do you mean that people tend to be poorly calibrated? That is, that events or statements to which people assign a 0.2% probability happen more often than that, or that they happen less often? Either way, one should then shift one’s probability estimates to take that information into account.
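To make that first reading concrete, here is a minimal sketch (Python, with made-up numbers, none of it from the thread): compare the frequency with which your roughly-0.2% claims actually came true against 0.2%, and shift future estimates toward what you observe.

```python
# Minimal calibration check on hypothetical past predictions:
# each record is (probability stated, whether the event happened).
def observed_rate(history, stated_p, tol=1e-3):
    """Frequency with which claims assigned roughly `stated_p` came true."""
    outcomes = [hit for p, hit in history if abs(p - stated_p) <= tol]
    return sum(outcomes) / len(outcomes) if outcomes else stated_p

history = [(0.002, 0), (0.002, 0), (0.002, 1), (0.002, 0), (0.002, 0)]

# If events you call 0.2% in fact happen about 20% of the time, your
# next "0.2%" estimate should move a long way toward that frequency.
print(observed_rate(history, 0.002))  # 0.2
```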
Or do you mean that such a number would be unstable: shifting in response to new information, more thinking about the question, or learning how priming and biases affect estimation? (Obviously such on-the-spot estimates depend on noisy factors, and studies show gains just from averaging the estimates one makes at different times, and so forth.)
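And a sketch of the second reading, again with hypothetical numbers: the same person estimating the same probability on different occasions gets noisy values, and simply averaging those values (here both directly and in log-odds space, the latter being my own choice rather than anything claimed above) gives a more stable figure.

```python
import math

# Hypothetical estimates of the same probability made on different days,
# each nudged around by priming, mood, and other noise.
estimates = [0.002, 0.008, 0.001, 0.005, 0.003]

mean_p = sum(estimates) / len(estimates)

# Averaging in log-odds space, an alternative when estimates span
# orders of magnitude.
logits = [math.log(p / (1 - p)) for p in estimates]
mean_logit = sum(logits) / len(logits)
logodds_p = 1 / (1 + math.exp(-mean_logit))

print(mean_p, logodds_p)  # roughly 0.0038 and 0.0030
```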
In either case, if you were compelled to offer betting odds on thousands of independent claims like that (without knowing which side of the bet you’d have to take, and otherwise structured so that giving your best estimate is the winning strategy), how would you do it?
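One standard way to make “giving your best estimate the winning strategy” concrete is a proper scoring rule such as the log score (my gloss, not something spelled out above): if a claim is true with probability p, your expected score is highest when the number you report is p itself.

```python
import math

def expected_log_score(p, r):
    """Expected log score for reporting r on a claim that is true with probability p."""
    return p * math.log(r) + (1 - p) * math.log(1 - r)

p = 0.002  # your actual belief
for r in (0.0005, 0.002, 0.01, 0.05):
    print(r, expected_log_score(p, r))
# Reporting r = 0.002, i.e. your true belief, gives the highest expected score.
```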
As an aside, Yvain’s post on probability estimation seems relevant here.
The second.
Specifically, I think it is well demonstrated that calculating the adverse consequences of social policy is so complicated, and so subject to priming and biases, that it is currently beyond human capacity to estimate accurately whether the well-intentioned efforts of the Nuclear Threat Initiative are more likely to reduce or increase the risk of global thermonuclear war.
If I were forced to take a bet on the issue, I would set the odds at perfectly even: not because I expect that a full and complete analysis by, say, Omega would put the probability at exactly even, but because I have no ability to predict whether Omega would find that the Nuclear Threat Initiative reduces or increases the chance of global thermonuclear war.