Summarizing one’s state of knowledge about these two propositions onto the same scale of reals between 0 and 1 seems to ignore an awful lot.
We’re getting ahead of the reading, but there’s a key distinction between the plausibility of a single proposition (i.e. a probability) and the plausibilities of a whole family of related propositions (i.e. a probability distribution).
Our state of knowledge about the coin is such that if we assessed probabilities for the class of propositions “This coin has a bias X”, where X ranges from 0 (always heads) to 1 (always tails), we would find our prior distribution to be a sharp spike centered on 1⁄2. That, technically, is what we mean by “confidence”, and formally we will be using things like the variance of the distribution.
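To make the “sharp spike” idea concrete, here is a small sketch comparing two priors over the coin’s bias on a discrete grid. The grid size and the particular prior shapes are illustrative assumptions, not anything from the discussion; the point is just that the variance of the distribution quantifies the confidence being described.

```python
import numpy as np

# Discrete grid of possible biases X in [0, 1].
xs = np.linspace(0.0, 1.0, 101)

def normalize(w):
    """Turn nonnegative weights into a probability distribution."""
    return w / w.sum()

# A sharp spike centered on 1/2 (we're quite sure the coin is fair)...
sharp = normalize(np.exp(-((xs - 0.5) ** 2) / (2 * 0.02 ** 2)))
# ...versus a flat prior (no idea what the bias is).
flat = normalize(np.ones_like(xs))

def variance(p, xs):
    """Variance of a distribution p over the grid xs."""
    mean = np.sum(xs * p)
    return np.sum((xs - mean) ** 2 * p)

print(variance(sharp, xs))  # small: high confidence the bias is near 1/2
print(variance(flat, xs))   # near 1/12: maximal ignorance on [0, 1]
```

Both priors can put their peak (or mean) at 1⁄2; the variance is what separates “confident the coin is fair” from “no idea”.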
Ok, that sounds helpful. But then my question is this—if we have a whole family of mutually exclusive propositions, with varying real numbers for plausibilities, about the plausibility of one particular proposition, then the assumption that that one proposition can have one specific real number as its plausibility is cast in doubt. I don’t yet see how we can have all those plausibility assignments in a coherent whole. But I’m happy to leave my question on the table if we’ll come to that part later.
If you have a mutually exclusive and exhaustive set of propositions A_i, each of which specifies a plausibility P(B|A_i) for the one proposition B you’re interested in, then your total plausibility is P(B) = \sum_i P(B|A_i) P(A_i). (Actually this is true whether or not the A’s say anything about B. But if they do, then this can be a useful way to think about P(B).)
I haven’t said how to assign plausibilities to the A’s (quick, what’s the plausibility that an unspecified urn contains one white and three cyan balls?), but this at least should describe how it fits together once you’ve answered those subproblems.
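The way it fits together can be sketched numerically. Here the hypotheses A_i are three made-up biases for the coin (with 0 meaning always heads, matching the convention above), the priors P(A_i) are likewise illustrative, and B is “the next flip is heads”:

```python
# Law of total probability: P(B) = sum_i P(B|A_i) P(A_i) over a mutually
# exclusive, exhaustive set of hypotheses A_i about the coin's bias.
# The specific biases and prior weights are invented for illustration.
biases = [0.25, 0.5, 0.75]               # X = bias toward tails
prior = [0.25, 0.5, 0.25]                # P(A_i); must sum to 1
p_heads_given = [1 - x for x in biases]  # P(B | A_i) = 1 - X

p_heads = sum(pb * pa for pb, pa in zip(p_heads_given, prior))
print(p_heads)  # 0.5
```

Even though each A_i assigns B a different plausibility, the single number P(B) is coherent: it is just the prior-weighted average of those conditional plausibilities. Here the symmetric hypotheses average out to a fair-seeming coin.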