There’s no term for “how surprised I was” in Bayes’ Theorem.
Not quite. The intuitive notion of “how surprised you were” maps closely to Bayesian likelihood ratios.
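A minimal numerical sketch of that mapping, using illustrative numbers (the weighted-die probabilities here are assumptions, not anything from the thread): the “surprisingness” of an observation relative to a model is captured by how much more strongly a rival model predicted it.

```python
from fractions import Fraction

# Hypothetical setup: a 12-outcome die that is either fair (M) or
# weighted to land on "one" half the time (not-M).
p_one_given_m = Fraction(1, 12)
p_one_given_not_m = Fraction(1, 2)

# "How surprised you were" by seeing a "one", cashed out as a
# likelihood ratio: how much more strongly not-M predicted the
# observation than M did.
likelihood_ratio = p_one_given_not_m / p_one_given_m
print(likelihood_ratio)  # 6 — the observation favors the weighted die 6:1
```

Multiplying your prior odds on M vs. not-M by this ratio gives your posterior odds, which is the standard Bayesian update.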
Regarding your die/beads scenarios:
In your die scenario, you have one highly favored model that assigns equal probability to each possible number. In the beads scenario you have many possible models, all with low probability; averaging their predictions gives equal probability to each possible color.
To simplify things, let’s say our only models are M, which predicts the outcomes are random and equally likely (i.e. a fair die or a jar filled with an even ratio of 12 colors of beads), and not-M (e.g. a weighted die or a jar filled with beads of all the same color). In the beads scenario we might guess that P(M)=.1; in the die scenario P(M)=.99. In both cases, our probability of red/one is 1⁄12, because neither of our models tells us which color/number to expect. But our probability of winning the bet is different—we only win if M is correct.
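The two-model setup above can be sketched numerically. The priors (.1 and .99) are the ones from the paragraph; the key assumption, also from the paragraph, is that under not-M we have no idea *which* color/number is favored, so averaging over the 12 possibilities again gives 1⁄12 for any particular one:

```python
from fractions import Fraction

def p_red(p_m):
    # Under M (fair die / even ratio of 12 bead colors):
    # each outcome has probability 1/12.
    p_red_given_m = Fraction(1, 12)
    # Under not-M (weighted die / single-color jar): we don't know
    # which of the 12 outcomes is favored, so averaging over them
    # also gives 1/12 for any particular one.
    p_red_given_not_m = Fraction(1, 12)
    return p_m * p_red_given_m + (1 - p_m) * p_red_given_not_m

beads = p_red(Fraction(1, 10))    # beads scenario: P(M) = .1
die = p_red(Fraction(99, 100))    # die scenario:   P(M) = .99
print(beads, die)  # 1/12 1/12 — identical outcome probabilities
```

The outcome probability is 1⁄12 either way, but the probability of winning a bet that pays off only if M is correct is just P(M): .1 in one scenario, .99 in the other.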
That clears things up a lot. I hadn’t really thought about the multiple-models take on it (despite having read the “prior probabilities as mathematical objects” post). Thanks.