It seems like you’re describing a Bayesian probability distribution over a frequentist probability estimate of the “real” probability.
Right. But I was careful to refer to f as a frequency rather than a probability, because f isn’t a description of our beliefs but rather a physical property of the coin (and of the way it’s being thrown).
Agreed that this works in cases which make sense under frequentism, but in cases like “Trump gets reelected” you need some sort of distribution over a Bayesian credence, and I don’t see any natural way to generalise to that.
I agree. But it seems to me like the other replies you’ve received are mistakenly treating all propositions as though they do have an f with an unknown distribution. Unnamed suggests using the beta distribution; the thing it would be a distribution of would have to be f. Similarly, rossry’s reply, with phrases like “something in the ballpark of 50%” and “precisely 50%”, talks as though there is some unknown percentage of which 50% is an estimate.
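To make concrete what that picture amounts to, here is a minimal sketch (my own illustration, with a made-up prior and made-up data) of a Beta distribution over an unknown frequency f, updated by observed flips:

```python
# A minimal sketch (illustrative prior and data, not anyone's actual model):
# a Beta distribution over the coin's unknown frequency f, updated by flips.
from scipy.stats import beta

# Assumed prior: Beta(1, 1), i.e. uniform over f in [0, 1].
prior_a, prior_b = 1, 1

# Made-up observations: 7 heads and 3 tails.
heads, tails = 7, 3
posterior = beta(prior_a + heads, prior_b + tails)

print(posterior.mean())         # point estimate of f (about 0.67)
print(posterior.interval(0.9))  # 90% credible interval for f
```

On that picture, a quoted “50%” is just a summary of such a distribution, and the distribution itself is over f, not over anyone’s beliefs.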
A lot of people (like in the paper Pattern linked to) think that our distribution over f is a “second-order” probability describing our beliefs about our beliefs. I think this is wrong. The number f doesn’t describe our beliefs at all; it describes a physical property of the coin, just like mass and diameter.
In fact, any kind of second-order probability must be trivial. We have introspective access to our own beliefs. So given any statement about our beliefs we can say for certain whether or not it’s true. Therefore, any second-order probability will either be equal to 0 or 1.
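Spelled out a little more formally (my own rendering of the argument, writing Cr for the credence function), the introspection premise does all the work:

```latex
% Sketch of the argument above (my own formalisation); requires amsmath.
% Cr is my credence function; S is a statement about my own beliefs.
\[
S := \bigl(\operatorname{Cr}(X) = p\bigr), \qquad
\text{perfect introspection} \;\Longrightarrow\; \operatorname{Cr}(S) \in \{0, 1\}.
\]
```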
I don’t have much to add on the original question, but I do disagree about your last point:
In fact, any kind of second-order probability must be trivial. We have introspective access to our own beliefs. So given any statement about our beliefs we can say for certain whether or not it’s true. Therefore, any second-order probability will either be equal to 0 or 1.
There is a sense in which, once you say “my credence in X is Y”, I can’t contradict you. But suppose I point out that you’re actually behaving as if it is Y/2, and that some other statements you made imply that it is Y/2, and you then realise that when you made the original statement you were feeling social pressure to report a high credence even though it didn’t quite feel right. That all looks a lot like you being wrong about your actual credence in X. This may end up being a dispute over the definition of belief, but I prefer to avoid defining things in ways where people must be certain about them, because people can be wrong in so many ways.
Okay, sure. But an idealized rational reasoner wouldn’t display this kind of uncertainty about its own beliefs, yet it would still exhibit the phenomenon you were originally asking about (where statements assigned the same probability update by different amounts after the introduction of evidence). So this kind of second-order probability can’t be used to answer your original question.
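In the frequency-style cases that phenomenon is easy to exhibit directly. A sketch with made-up numbers: two coins both assigned 50%, but backed by very different distributions over f, update by very different amounts on the same evidence.

```python
# Sketch (illustrative numbers only): two coins, both assigned P(heads) = 0.5,
# but with very different distributions over f behind that assignment.
from scipy.stats import beta

# Coin A: barely examined -- uniform prior over f.
# Coin B: flipped many times before -- prior concentrated tightly around 0.5.
prior_A = (1, 1)
prior_B = (100, 100)

print(beta(*prior_A).mean(), beta(*prior_B).mean())  # both 0.5

# The same new evidence for both coins: 8 heads out of 10 flips.
heads, tails = 8, 2
post_A = beta(prior_A[0] + heads, prior_A[1] + tails)
post_B = beta(prior_B[0] + heads, prior_B[1] + tails)

print(post_A.mean())  # moves a lot, to about 0.75
print(post_B.mean())  # barely moves, about 0.51
```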
FYI there’s more about “credal resilience” here (although I haven’t read the linked papers yet).