You propose to ignore the “odd” errors humans sometimes make while calculating a probability for some event. However, errors do occur, even when judging the very first case. And they (at least some of them) occur randomly. When you believe you have correctly calculated the probability, you just might have made an error anywhere in the calculation.
If you stick to the “socially accepted” levels of confidence, those errors average out pretty fast, but if you make as few as one error per 10^5 calculations, you should not assign probabilities smaller than 1 / 10^5. Otherwise a bet of 10^5 to 1 between you and me (a fair game from your perspective) will give me an expected value larger than 0, because of the errors you might have made somewhere in your reasoning.
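To make the betting argument concrete, here is a minimal sketch (Python; the numbers, and the assumption that a botched calculation leaves the event with true probability 0.5, are mine purely for illustration):

```python
# Sketch of the betting argument. Illustrative assumption: when a calculation
# error occurs, the event's true probability is p_given_error.

def effective_p(p_calc, error_rate, p_given_error=0.5):
    """Event probability once the chance of a miscalculation is folded in."""
    return (1 - error_rate) * p_calc + error_rate * p_given_error

def bettor_ev(p_true, odds):
    """Expected value for someone staking `odds` units against 1
    that the event will not happen."""
    return (1 - p_true) * 1 - p_true * odds

p_calc, error_rate = 1e-8, 1e-5           # your calculated probability, your error rate
p_true = effective_p(p_calc, error_rate)  # ~5e-6: dominated by the error term

for odds in (1e4, 1e5, 1e6):
    print(f"odds {odds:9.0f}:1  naive EV {bettor_ev(p_calc, odds):+8.3f}"
          f"  error-aware EV {bettor_ev(p_true, odds):+8.3f}")

# Once the odds exceed roughly 1 / (error_rate * p_given_error), a bet that
# looks fair from the naive perspective has negative expected value for you.
```

The exact threshold depends on what you assume about P(event | error), but however you set it, the floor on usable probabilities scales with your error rate.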
This is another advantage an AI might have over humans: if the hardware is good enough, probability assignments below 10^-5 might actually be reasonable.
“You propose to ignore the ‘odd’ errors humans sometimes make while calculating a probability for some event.”
I don’t think I said any such thing.
There is always some uncertainty; but a belief that the uncertainty is above some particular lower bound is a belief like any other, and no more exempt from the requirements of justification.
But Drahflow did just justify it. He said you’re running on error-prone hardware. Now, there’s still the question of how often the hardware makes errors, and there’s the problem of privileging the hypothesis (thinking wrongly about the lottery can’t make the probability of a ticket winning more than 10^-8, no matter how wrong you are), and there’s the horrible LHC inconsistency, but the opposing position is not unjustified. It has justification that goes beyond just social modesty. There’s a consistent trend in which people form confidence bounds that are too narrow on hard problems (and, to a lesser extent, too wide on easy problems). If you went by the raw experiments, then “99% probability” would translate into 40% surprises, because (a) people are that stupid and (b) people have no grasp of what the phrase “99% probability” means.
I agree, and I don’t think this contradicts or undermines the argument of the post.
These experiments should definitely shift physicists’ probabilities by some nonzero amount; the question is how much. When they calculate that the probability of a marble statue waving is 10 to the minus gazillion, would you really want to argue that, based on surveys like this, they should adjust that to some mundane quantity like 0.01? That seems absurd to me. But if you grant this, then you have to concede that “epistemic bootstrapping” beyond ordinary human levels of confidence is possible. Then the question becomes: what’s the limit, given our knowledge of physics (present and future)?
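One way to make “shift by some nonzero amount” precise (my framing, not anything from the thread): treat the calibration results as an estimate of the chance ε that the whole calculation is wrong, so that

P(wave) = (1 − ε) · p_calc + ε · P(wave | calculation wrong) ≈ ε · P(wave | calculation wrong).

The surveys then impose a floor of roughly ε · P(wave | wrong) rather than replacing the answer with 0.01 outright; driving ε down, e.g. by independent re-derivations, is what bootstrapping beyond ordinary human confidence would amount to.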
If you did see a marble statue wave, after making this calculation, you would resurrect a hypothesis at the one-in-a-million level maybe (someone played a hugely elaborate prank on you involving sawing off a duplicate statue’s arm and switching that with the recently examined statue while you were briefly distracted by a phone ringing, say), not a hypothesis at the 10 to the minus whatever (e.g. you are being simulated by Omega for laughs).
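A quick sketch of why the observation promotes the prank hypothesis rather than the simulation one (Python; the priors are hypothetical stand-ins for the levels named above, and I assume each hypothesis explains the sighting equally well):

```python
# Posterior comparison of hypotheses that all "explain" seeing the statue wave.
# Priors are illustrative stand-ins for the levels mentioned in the comment.

priors = {
    "elaborate prank":        1e-6,   # the one-in-a-million level
    "simulated by Omega":     1e-20,  # a "10 to the minus whatever" hypothesis
    "statue genuinely waved": 1e-40,  # the physics calculation's answer
}
likelihood = {h: 1.0 for h in priors}  # each hypothesis predicts the sighting

unnormalised = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalised.values())

for h, v in unnormalised.items():
    print(f"{h:<24} posterior {v / total:.3e}")

# With equal likelihoods the posterior odds equal the prior odds, so the
# prank hypothesis absorbs essentially all of the posterior mass: the
# observation resurrects the least improbable explanation, not the wildest.
```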
Perhaps I’m getting this wrong, but this seems similar in spirit to the “queer uses of probability” discussion in Jaynes, where he asks what kind of evidence you’d have to see to believe in ESP, and you can take the probability of that as an indication of your prior probability for ESP.
Perhaps you’re making too much of absolute probabilities, when in general what we’re interested in is choosing between two or more competing hypotheses.
This comment reads as if you’re disagreeing with me about something (“you’re making too much...”), but I can’t detect any actual disagreement.