Now when you actually talk to someone, you’ll often convey priors about many things, but less often how stable you deem those priors to be. “This die is probably loaded” … the ‘probably’ refers to your prior, but it says nothing about how fast that prior could change. Maybe it’s a die a friend who collects loaded dice is showing you, so if you check it and it rolls normally, you’ll quickly be convinced it’s fair. Maybe it’s your trusted loaded die from childhood which you’ve thrown thousands of times, and if it doesn’t appear loaded on the next few throws, you’ll still consider it loaded.
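To make that concrete, here’s a minimal sketch in Python (all numbers made up) of one way to model the difference: two reasoners who currently believe the same thing about how often the die comes up six, but whose beliefs rest on very different amounts of past evidence, so the same handful of fair-looking throws moves one of them a lot and the other barely at all.

```python
# One way to model "same belief, different stability": track a Beta distribution
# over the die's chance of rolling a six, i.e. a pair of pseudo-counts.  Both
# reasoners currently think the die favors six (mean chance ~0.5 instead of the
# fair 1/6), but the second belief rests on far more past evidence.
# (All the numbers below are made up for illustration.)

reasoners = {
    "friend's collection (little past evidence)": (3.0, 3.0),        # Beta(3, 3)
    "childhood die (thousands of past throws)":   (1000.0, 1000.0),  # Beta(1000, 1000)
}

# New evidence: 12 throws, only 2 of them sixes, roughly what a fair die would give.
new_sixes, new_throws = 2, 12

for name, (a, b) in reasoners.items():
    prior_mean = a / (a + b)
    # Beta-Binomial conjugate update: just add the new counts.
    a_post, b_post = a + new_sixes, b + (new_throws - new_sixes)
    post_mean = a_post / (a_post + b_post)
    print(f"{name}: believed chance of a six {prior_mean:.3f} -> {post_mean:.3f}")
```

The point estimate (‘probably loaded’) is the same in both cases; what differs is the effective weight of evidence behind it, and that weight is what governs how fast the estimate can change.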
I believe this is a model space problem. We’re looking at a toy Bayesian reasoner simple enough to be modeled in a human mind, and predicting how it will update its hypotheses about dice in response to evidence like the same number coming up too often. Our toy Bayesian, of course, assigns probability 0 to encountering evidence like “my trusted expert friend says it’s loaded,” so that wouldn’t change its probabilities at all. But that’s not a flaw in Bayesian reasoning; it’s a flaw in the kind of Bayesian reasoner that can be easily modeled in a human mind.
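As a rough sketch of that restriction (the hypotheses and numbers here are invented), suppose the toy reasoner’s entire model space is “fair die vs. die biased toward six,” with die rolls as the only kind of evidence it knows how to score. It updates cleanly on rolls, but an observation it never represented has no likelihood under either hypothesis, so it leaves the posterior untouched.

```python
# A toy Bayesian whose model space contains nothing but die rolls.  It updates
# cleanly on rolls, but evidence outside its model ("my trusted expert friend
# says it's loaded") has no likelihood defined, so it can't move the posterior.
# (The hypotheses, priors, and the 0.5 bias toward six are all made up.)

HYPOTHESES = {
    "fair":   {face: 1 / 6 for face in range(1, 7)},
    "loaded": {**{face: 0.1 for face in range(1, 6)}, 6: 0.5},  # biased toward six
}

def update(posterior, observation):
    """Bayes' rule over the two die hypotheses; anything else is ignored."""
    likelihoods = {}
    for hypothesis, faces in HYPOTHESES.items():
        if observation not in faces:        # evidence outside the model space...
            return posterior                # ...changes nothing
        likelihoods[hypothesis] = faces[observation]
    total = sum(posterior[h] * likelihoods[h] for h in posterior)
    return {h: posterior[h] * likelihoods[h] / total for h in posterior}

belief = {"fair": 0.5, "loaded": 0.5}
for roll in [6, 6, 3, 6, 6]:                # sixes coming up suspiciously often
    belief = update(belief, roll)
print(belief)                               # posterior has shifted toward "loaded"

belief = update(belief, "my trusted expert friend says it's loaded")
print(belief)                               # unchanged: outside the model space
```

A less toy-like reasoner would give “my friend says it’s loaded” its own likelihood under each hypothesis, and testimony would then shift the posterior just like rolls do.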
This doesn’t demonstrate that human reasoning, where it works, lacks a Bayesian core. E.g., I don’t know how I would update my probabilities about a die being loaded if, say, my left arm turned into a purple tentacle and started singing “La Bamba.” But it does show that even an ideal reasoner can’t always out-predict a computationally limited one, if the computationally limited one has access to a much better prior and/or a whole lot more evidence.