Being a full-on Bayesian means not only having probability assignments for every proposition, but also having the conditional probabilities that will allow you to make appropriate updates to your probability assignments when new information comes in.
The difference between “The probability of X is definitely 0.5” and “The probability of X is somewhere between 0 and 1, and I have no idea at all where” lies in how you will adjust your estimates for Pr(X) as new information comes in. If your estimate is based on a lot of strong evidence, then your conditional probabilities for X given modest quantities of new evidence will still be close to 0.5. If your estimate is a mere seat-of-the-pants guess, then your conditional probabilities for X given modest quantities of new evidence will be all over the place.
Sometimes this is described in terms of your probability estimates for your probability estimates. That’s appropriate when, e.g., what you know about X is that it is governed by some sort of random process that makes X happen with a particular probability (a coin toss, say) but you are uncertain about the details of that random process (e.g., does something about the coin or how it’s tossed mean that Pr(heads) is far from 0.5?). But similar issues arise in different cases where there’s nothing going on that could reasonably be called a random process but your degree of knowledge is greater or less, and I’m not sure the “probabilities of probabilities” perspective is particularly helpful there.
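The coin-toss case above can be made concrete with a small sketch. Assuming a standard Beta-Binomial model (a Beta prior over Pr(heads), updated by conjugacy), two priors with the same mean of 0.5 but very different strengths respond very differently to the same modest evidence; the specific prior parameters here are illustrative choices, not anything from the discussion above.

```python
from fractions import Fraction

def beta_update(alpha, beta, heads, tails):
    """Conjugate update for a Beta(alpha, beta) prior on Pr(heads)
    after observing the given numbers of heads and tails."""
    return alpha + heads, beta + tails

def mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return Fraction(alpha, alpha + beta)

# Both priors have mean 0.5, but encode very different amounts of evidence.
strong = (500, 500)  # "definitely about 0.5": as if we'd seen 1000 fair tosses
weak = (1, 1)        # "no idea at all where": uniform over [0, 1]

heads, tails = 8, 2  # modest new evidence: 8 heads in 10 tosses

print(float(mean(*beta_update(*strong, heads, tails))))  # ~0.503: barely moves
print(float(mean(*beta_update(*weak, heads, tails))))    # 0.75: moves a lot
```

The prior's "strength" (alpha + beta) is what determines how far the estimate moves, which is the distinction between a well-evidenced 0.5 and a seat-of-the-pants 0.5 that the probability value alone doesn't capture.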
Thanks for the detailed explanation. It helps!