The nature of 0 & 1 as limit cases seems to fascinate the theorists. However, in terms of ‘Overcoming Bias’, shouldn’t we be looking at more mundane conceptions of probability? EY’s posts have drawn attention to the idea that the amount of information needed to add further certainty to a proposition grows exponentially, while the probability itself only creeps up linearly. In utilitarian terms, that suggests not many situations warrant chasing the additional information above 99.9% certainty (outside technical implementations in nuclear physics, rocket science or whatever). The 99.9% figure is pulled out of a hat.
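To make that exponential cost concrete (a rough sketch of my own, not anything taken from EY’s posts): if we measure evidence in bits of log-odds, each extra ‘nine’ of certainty costs roughly the same 3.3 additional bits, even though the probability gained per step keeps shrinking.

    import math

    def bits_of_evidence(p):
        # Evidence measured as log-odds in bits: log2(p / (1 - p)).
        return math.log2(p / (1 - p))

    for p in (0.9, 0.99, 0.999, 0.9999):
        print(f"p = {p:<7} needs about {bits_of_evidence(p):5.2f} bits of evidence")
    # 0.9 -> ~3.2 bits, 0.99 -> ~6.6, 0.999 -> ~10.0, 0.9999 -> ~13.3:
    # each additional 'nine' costs another ~3.3 bits, while the
    # probability gained per step shrinks by a factor of ten.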
In human terms, when we say ‘I’m 99.9% sure that 2+2 always = 4’, we’re not talking about 1000 equivalent statements. We’re talking about one statement, with a spatial representation of what ‘100% sure’ means with respect to that statement, and 0.1% of that space allowed for ‘niggling doubts’ of the sort: what have I forgotten? What don’t I know? What is inconceivable to me?
The interesting question for ‘overcoming bias’ is: how do we make the tradeoff between seeking additional information on the one hand and accepting a limited degree of certainty on the other? As an example (cf. the Evil Lords of the Matrix), considering whether our minds are being controlled by magic mushrooms from Alpha Pictoris may someday widen the ‘niggling doubt’ range from 0.1% to 5%, but the evidence would have to be shoved in our faces pretty hard first.