To me, 0.02 is a comparatively tiny difference between the likelihood of a proposition and the likelihood of its negation.
If P(A) = 0.51 and P(~A) = 0.49, then almost every decision I make based on A will give almost equal weight to whether it is true or false, and the cognitive process of working through the implications on either side is essentially identical to the case P(A) = 0.49 and P(~A) = 0.51. The outcome of the decision will also be the same very frequently, since payoffs are usually unbalanced.
It takes quite a bit of contriving to arrange a situation where there is any meaningful difference between P(A) = 0.51 and P(A) = 0.49 for some real-world proposition A.
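To make that concrete, here's a toy sketch of an expected-value decision with invented payoff numbers (nothing hinges on the specific values): the break-even probability almost never falls inside the narrow band between 0.49 and 0.51, so both credences recommend the same action.

```python
# Sketch: "act iff the expected payoff of acting is positive."
# The payoffs here are made up for illustration; the point is that
# the break-even probability is rarely inside the 0.49-0.51 band,
# so P(A) = 0.51 and P(A) = 0.49 usually yield the same choice.

def should_act(p_a, gain_if_true=10.0, loss_if_false=-3.0):
    """Act iff the expected payoff of acting is positive."""
    expected = p_a * gain_if_true + (1 - p_a) * loss_if_false
    return expected > 0

# Break-even at p = 3/13 ~ 0.23, nowhere near 0.49-0.51:
print(should_act(0.51))  # True
print(should_act(0.49))  # True -- same decision either way
```

Only when the payoffs are tuned so that the break-even point lands between 0.49 and 0.51 do the two credences come apart, which is the contrived situation described above.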
Yeah, and this may get at another reason why the proposal doesn’t seem right to me. There’s no doubt that most people would be better calibrated if they adopted it, but 52% and 48% are effectively the same to the average person, so it’s completely impractical.
If anything, the proposal should be ‘if you don’t think you’re particularly smart, your position on almost every controversial topic should be “I have no idea”’. Which still might not be good advice because there is disproportionate overlap between the set of people likely to take the advice and the set of people for whom it doesn’t apply.
If you think it’s very important to consider all the plausible adjacent interpretations of a proposition as stated before making up your mind, it can be useful to register your initial agreement with each interpretation as only a small divergence from total uncertainty, where the residual uncertainty represents your uncertainty about whether you’ll come up with better interpretations of the thing you think you’re confident about. Only once you’ve worked through enough interpretations should you consider more ambitious numbers like 90%.
If you always do this and you wind up being wrong about some belief, then it is at least possible to conclude that your error was failing to list a sufficient number of sufficiently specific adjacent possibilities before asking yourself more seriously what their true probabilities were. Making distinctions is a really important part of knowing the truth; don’t pin all the hopes of every A-adjacent possibility on just one proposition in that set. Two A-adjacent propositions can differ in likelihood greatly, or moderately but critically; thinking only about A can mislead you about the things you treat as synonymous with A.
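One way to picture this: if A bundles several more specific, mutually exclusive readings, your credence in "some reading of A" is the sum of the mass on those readings, and that sum can look confident while hiding large differences between the readings themselves. A toy sketch with invented numbers:

```python
# Toy sketch (numbers invented): "A" bundles several more-specific,
# mutually exclusive readings. Credence in the bundle can hide large
# differences in likelihood between the readings.

readings = {
    "A1: strong reading": 0.10,
    "A2: moderate reading": 0.35,
    "A3: weak reading": 0.15,
}

p_bundle = sum(readings.values())
print(f"P(some reading of A): {p_bundle:.2f}")  # 0.60 -- looks confident

for name, p in readings.items():
    print(f"{name}: {p:.2f}")
# A2 is 3.5x as likely as A1, so agreeing with "A" at 60% tells you
# little about which specific reading you should actually stand behind.
```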