Thanks, the “true probability” phrasing was misleading; I should’ve reread my comment before submitting. Probability is in the mind, etc.; what I referred to was “the probability you’d eventually end up with, having incorporated all relevant information, the limit”, which is still in your mind, but as close to “true” as you’ll get.
So you can of course say Pr(Box is empty | I saw it’s empty) = x and Pr(Box is empty | I saw it’s empty and I got to examine its inner surfaces with my hand) = y, then list all similar hypotheses about the box being empty conditioned on various experiments, and compare x, y, etc. to get a notion of the stability of your prior.
However, such a listing is quite tedious, and countably infinite as well, even if it’s the only full representation of your box-is-empty belief conditioned on all possible information.
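To make that a bit more concrete, here’s a minimal sketch of that comparison with completely made-up priors and likelihoods (none of these numbers come from the discussion above): it just applies Bayes’ rule to a couple of hypothetical box experiments and reports how far each would move the box-is-empty probability.

```python
# Minimal sketch with made-up numbers: apply Bayes' rule to a couple of
# hypothetical box experiments and see how far each observation would move
# the box-is-empty probability.

prior_empty = 0.7  # hypothetical prior that the box is empty

# Hypothetical likelihoods: (P(observation | empty), P(observation | not empty)).
experiments = {
    "glanced inside and saw nothing": (0.95, 0.20),
    "felt all the inner surfaces and found nothing": (0.99, 0.02),
}

def posterior(prior, p_obs_if_empty, p_obs_if_not_empty):
    """Bayes' rule for a binary hypothesis."""
    numerator = p_obs_if_empty * prior
    return numerator / (numerator + p_obs_if_not_empty * (1 - prior))

for name, (p_e, p_ne) in experiments.items():
    post = posterior(prior_empty, p_e, p_ne)
    print(f"{name}: {prior_empty:.2f} -> {post:.2f} (shift {post - prior_empty:+.2f})")
```

The size of those shifts under hypothetical experiments is roughly the “stability” information that the bare prior value doesn’t carry.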
The point was that “my prior about the box being empty is low / high / whatever” doesn’t give any information about whether you’ve just guesstimated it, or whether you’re very sure about your value and will likely discount (for the most part) any new information showing the contrary as a fluke, or a trick. A magician seemingly countering gravity with a levitation trick only marginally lowers your prior about how gravity works.
Now when you actually talk to someone, you’ll often convey priors about many things, but less often how stable you deem those priors to be. “This die is probably loaded” … the ‘probably’ refers to your prior, but it says nothing about how fast that prior could change. Maybe it’s a die that a friend who collects loaded dice is presenting to you, so if you check it and it doesn’t appear to be loaded, you’ll quickly be convinced it isn’t. Maybe it’s your trusted loaded die from childhood, which you’ve used thousands of times, and if it doesn’t appear to be loaded on the next few throws, you’ll still consider it loaded.
Yet in both cases you’d say “the die is probably loaded”. How do you usefully convey the extra information about the stability of your prior? “The die is probably loaded, and my belief in that isn’t likely to change”, so to speak? Not a theoretical definition of stability (only listing all your conditional beliefs can represent that), but, as in the grandparent, a simple and intuitive way of conveying that important extra information about stability, and a plea to start conveying that information.
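One way (an assumption of mine, not something stated above) to picture the difference between the two die cases: model your belief about the long-run frequency of sixes as a Beta distribution. Two such beliefs can share the same point estimate while differing hugely in how much pseudo-evidence backs them, and therefore in how far a few fair-looking throws move them. All numbers below are illustrative.

```python
# Illustrative sketch: two agents with the same point estimate that the die
# favours sixes, but very different amounts of prior "weight" behind it.
# Beliefs about the long-run frequency of sixes are Beta(a, b); the numbers
# are made up for illustration.

FAIR_RATE = 1 / 6

priors = {
    "friend's collection (weak prior)": (3, 7),       # ~10 pseudo-observations
    "childhood die (strong prior)":     (300, 700),   # ~1000 pseudo-observations
}

# Suppose the next ten throws look fair: 1 six, 9 non-sixes.
sixes, non_sixes = 1, 9

for label, (a, b) in priors.items():
    prior_mean = a / (a + b)
    post_a, post_b = a + sixes, b + non_sixes  # conjugate Beta-Binomial update
    post_mean = post_a / (post_a + post_b)
    print(f"{label}: mean {prior_mean:.3f} -> {post_mean:.3f} "
          f"(a fair die would give {FAIR_RATE:.3f})")
```

Both agents start out saying “this die probably favours sixes”, but after the same ten throws the weak-prior belief has drifted most of the way toward fair while the strong-prior one has barely moved; that difference is exactly the stability information the word ‘probably’ leaves out.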
I believe this is a model-space problem. We’re looking at a toy Bayesian reasoner that can be easily modeled in a human mind, and predicting how it will update its hypotheses about dice in response to evidence like the same number coming up too often. Our toy Bayesian, of course, assigns probability 0 to encountering evidence like “my trusted expert friend says it’s loaded,” so that wouldn’t change its probabilities at all. But that’s not a flaw in Bayesian reasoning; it’s a flaw in the kind of Bayesian reasoner that can be easily modeled in a human mind.
This doesn’t demonstrate that human reasoning that works doesn’t have a Bayesian core. E.g., I don’t know how I would update my probabilities about a die being loaded if, say, my left arm turned into a purple tentacle and started singing “La Bamba.” But it does show that even an ideal reasoner can’t always out-predict a computationally limited one, if the computationally limited one has access to a much better prior and/or a whole lot more evidence.
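A quick sketch of that model-space point (my own toy construction, not something from the thread): the reasoner below only knows about die rolls, so testimony from a trusted friend simply isn’t an event it can condition on.

```python
# Toy Bayesian whose whole world is die rolls. It updates fine on rolls,
# but "my trusted friend says it's loaded" isn't an event in its model,
# so there is nothing for it to condition on. Numbers are illustrative.

HYPOTHESES = {"fair": 1 / 6, "loaded": 1 / 2}  # P(rolling a six) under each hypothesis
prior = {"fair": 0.5, "loaded": 0.5}

def update_on_roll(belief, rolled_six):
    """Condition on a single roll (six / not six), the only evidence the model knows about."""
    unnormalized = {}
    for hypothesis, p_six in HYPOTHESES.items():
        likelihood = p_six if rolled_six else 1 - p_six
        unnormalized[hypothesis] = likelihood * belief[hypothesis]
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

belief = prior
for rolled_six in [True, True, False, True]:  # suspiciously many sixes
    belief = update_on_roll(belief, rolled_six)
print(belief)

# The friend's testimony can't be passed to update_on_roll at all; to the toy
# reasoner it is unrepresentable (effectively probability-0) evidence.
# Handling it means enlarging the model, not changing Bayes' rule.
```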
Relevant resource: Probability is subjectively objective.