Ask yourself how much evidence someone would have to give you before you’re back to 50%.
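To make that concrete, here is a rough sketch (the function name and the example prior probabilities are mine, and it assumes the competing hypothesis predicts heads with certainty, so each heads result doubles the odds):

```python
import math

def flips_to_even_odds(prior_probability):
    """How many consecutive heads bring a hypothesis back to 50-50?

    Assumes the hypothesis predicts heads with probability 1 and the
    alternative is a fair coin, so each heads result doubles the odds.
    """
    prior_odds = prior_probability / (1 - prior_probability)
    # We need 2**n * prior_odds >= 1, i.e. n >= -log2(prior_odds).
    return math.ceil(-math.log2(prior_odds))

print(flips_to_even_odds(1e-3))  # -> 10
print(flips_to_even_odds(1e-6))  # -> 20
```

So a belief you hold at 1 in 1,000 should take roughly ten heads results to reach even odds, and a 1-in-a-million belief roughly twenty.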
Unfortunately, this depends on accurate intuitions for the evidential value of different numbers of successful trials. I don’t think people have these, though maybe their intuitions are good enough to avoid the more grievous errors.
Another problem arises if there are more possible explanations than just chance and the thing you’re looking for. I would not believe space aliens were manipulating coinflips after any number of heads results in a row.
This is the part for which you consciously use math, as in the given example.
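For instance, something like this (a toy sketch; the priors are invented, and both non-chance hypotheses are assumed to predict heads with certainty):

```python
# Three hypotheses: fair coin, mundane rigged coin, and aliens.
# Both non-chance hypotheses predict heads every time, so a streak of
# heads never distinguishes between them; only their priors do.
priors = {"fair coin": 0.99, "rigged coin": 0.00999999, "aliens": 1e-8}
heads_prob = {"fair coin": 0.5, "rigged coin": 1.0, "aliens": 1.0}

def posterior_after_heads(n):
    """Posterior over the three hypotheses after n heads in a row."""
    unnorm = {h: priors[h] * heads_prob[h] ** n for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

for n in (10, 100):
    print(n, posterior_after_heads(n))
```

Even after 100 heads, the aliens hypothesis stays roughly a million times less probable than the rigged-coin one, because the streak favors both equally and the ratio between them never moves from its prior.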
It seems to me it’s logically possible for me to have horribly off intuitions about the evidential value of ten heads results, so that something to which I rationally and non-consciously assign a 1 in a million probability seems only ten heads results away from 50-50. Maybe you’d say that I wasn’t really assigning anything as extreme as a 1 in a million probability, but it seems like there are other ways to make the concept meaningful that don’t refer to coinflips.
I think it’d be useful to calibrate the semantics of purely intuitive valuation of statements, all else equal. Once you start systematically processing the evidence by means other than gut feeling, you construct other sources of evidence, but knowing what the intuition really tells you is important in its own right.
If you can’t tell 1 in 1,000 from 1 in 1,000,000, there should be a single concept for that level of belief, with every belief in that bucket assigned odds somewhere in between, depending on the empirical distribution of the statements you ask about. If you then apply other techniques to update on that estimate, fine, but that’s different data. Maybe concepts defined as ranges of probability aren’t very good, and it’s more useful to explicitly compensate for affect, the complexity of the statement, its imagery, and so on, but those corrections aren’t about the evidence per se, only about deciphering the gut feeling.
For this purpose, the already existing concepts of “a little”, “probably”, and “unlikely” may be very important, if only their probabilistic semantics can be extracted, perhaps by estimating which of these concepts fits each statement in a set constructed specifically around a given statement, in order to extract that statement’s odds.
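To make that extraction concrete, a toy sketch (the labels and outcomes are invented data; in practice they would come from statements the person has actually judged and later resolved):

```python
from collections import defaultdict

# Invented records of (verbal label the person assigned, how the statement
# actually turned out). Real data would come from resolved predictions.
judgments = [
    ("probably", True), ("probably", True), ("probably", False),
    ("unlikely", False), ("unlikely", False), ("unlikely", True),
    ("a little", True), ("a little", False),
]

# The empirical frequency of truth under each label is a rough estimate
# of the probability that label encodes for this person.
counts = defaultdict(lambda: [0, 0])  # label -> [times true, total]
for label, outcome in judgments:
    counts[label][0] += int(outcome)
    counts[label][1] += 1

for label, (times_true, total) in counts.items():
    print(f"{label!r}: {times_true / total:.2f} (n={total})")
```

With enough resolved statements per label, those frequencies become the probabilistic semantics of the words themselves.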
I think I see what you’re saying—that our intuitions for extreme likelihood functions might be as bad as those for extreme prior probabilities. IIRC, research shows that humans have a good sense for probabilities in the neighborhood of 0.5, so I think you’re safe as long as your trials have sampling probabilities around 0.5 and you explicitly and sequentially imagine each counterfactual trial and your resulting feelings of credence.
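For comparison with those imagined feelings of credence, here is what the idealized sequential update looks like (a toy sketch assuming a made-up prior of 1 in 1,000 and a competing hypothesis that predicts heads with certainty):

```python
# Track the odds flip by flip, the way you would imagine each trial in turn.
odds = 1 / 999              # prior odds corresponding to p = 0.001
for flip in range(1, 11):
    odds *= 1.0 / 0.5       # each heads result doubles the odds
    probability = odds / (1 + odds)
    print(f"after heads #{flip}: p = {probability:.3f}")
# The printed credences are what the imagined feelings would ideally track,
# ending near 0.5 after ten heads.
```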
Right, that’s what I’m saying.
The research result is interesting, but I can still imagine people being maybe an order of magnitude off every 10-20 coinflips. (Maybe by that time an order of magnitude no longer matters much.)
I think I’d imagine them to be very wrong when assessing the evidence in 100 heads results vs. 10 if they didn’t explicitly imagine every trial, just because of scope insensitivity. (Maybe these are more extreme cases than are likely to come up in applications.)
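For scale (my own back-of-the-envelope numbers, under the same fair-coin-versus-always-heads comparison):

```python
import math

# The likelihood ratio after n heads is 2**n; express it in powers of ten.
for n in (10, 100):
    decades = n * math.log10(2)
    print(f"{n} heads: about 10^{decades:.0f} to 1")
# 10 heads: about 10^3 to 1
# 100 heads: about 10^30 to 1
```

Treating 100 heads as merely “a lot more evidence” than 10 heads misses roughly 27 orders of magnitude.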