Are you saying that people can’t deal with regular probability theory, but can deal with two-level “probabilities of probabilities”? That seems unlikely. I’d guess that the people who claim to use “probabilities of probabilities” cannot use them correctly either.
How about this suggestion/interpretation? Some probabilities are based on a lot of evidence already, and so should only be changed slightly when new evidence comes in (unless there’s a lot of it, of course); others are based on next to nothing, and we should be prepared to shift those dramatically if any actual evidence comes to light. Bad things can happen when the latter are mistakenly treated as if they were the former, but people aren’t good at keeping track of the difference. Introducing two-level “probabilities of probabilities” for handling the latter may not actually make them particularly manageable, but it could at least prevent them from being confused with the former, and if it prevents their being used much at all, perhaps that’s for the best.
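The contrast can be made concrete with a small sketch (my own illustration, not from the thread): model each estimate as a Beta prior whose pseudo-counts record how much evidence backs it, and watch how far a single new observation moves each. Both priors below say “50%”, but they should not be updated alike.

```python
# Sketch (illustrative, not the commenter's example): two estimates that
# both say "50%", one backed by many prior observations, one by almost
# none. A Beta(a, b) prior has mean a / (a + b); after observing one
# success, the posterior is Beta(a + 1, b).
def beta_mean_after_success(a: float, b: float) -> float:
    """Posterior mean of a Beta(a, b) prior after one observed success."""
    return (a + 1) / (a + b + 1)

well_supported = (50, 50)  # 100 prior observations, mean 0.5
near_ignorant = (1, 1)     # uniform prior, mean 0.5

# One new success barely moves the well-supported estimate...
print(beta_mean_after_success(*well_supported))  # about 0.505
# ...but moves the near-ignorant one substantially.
print(beta_mean_after_success(*near_ignorant))   # about 0.667
```

The pseudo-counts play exactly the role the comment describes: they are the bookkeeping that keeps an evidence-rich 50% from being confused with an evidence-free 50%.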
(I’m about 90% sure that you already know what I’m going to say, but the remaining 10% leads me to say it just in case, and it might help onlookers as well.)
A one-level prior already contains the information about how strongly you update. For example, if you have a prior about the joint outcome of two coinflips, consisting of four probabilities that sum to 1, then learning the outcome of the first coinflip allows you to update your beliefs about the second one, and any Bayesian-rational method of updating in that situation (corresponding to a single coin with known bias, single coin with unknown bias, two coins with opposite biases...) can be expressed that way.
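To make the coinflip example concrete, here is a small sketch (my own illustration) of the single-coin-with-unknown-bias case: a uniform prior over the bias gives joint probabilities P(HH) = P(TT) = 1/3 and P(HT) = P(TH) = 1/6, and conditioning on the first flip already determines how strongly to update about the second, with no second level of probabilities needed.

```python
# Sketch (illustrative): a one-level joint prior over two coinflips.
# With a uniform prior over the coin's unknown bias p:
#   P(HH) = integral of p^2      dp = 1/3
#   P(HT) = P(TH) = integral of p*(1-p) dp = 1/6
#   P(TT) = integral of (1-p)^2  dp = 1/3
joint = {("H", "H"): 1 / 3, ("H", "T"): 1 / 6,
         ("T", "H"): 1 / 6, ("T", "T"): 1 / 3}

def second_given_first(first: str, joint: dict) -> dict:
    """Posterior over the second flip after observing the first,
    obtained by ordinary conditioning on the joint prior."""
    total = sum(p for (f, s), p in joint.items() if f == first)
    return {s: p / total for (f, s), p in joint.items() if f == first}

post = second_given_first("H", joint)
# Seeing heads first raises the probability of heads next to 2/3,
# even though the marginal probability of either flip alone is 1/2.
print(post)
```

A coin of known fair bias would instead have all four joint probabilities equal to 1/4, and conditioning would then leave the second flip at 1/2: the strength of the update is encoded entirely in the shape of the one-level joint prior.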
Are you saying that people can’t deal with regular probability theory, but can deal with two-level “probabilities of probabilities”?
Not in full generality. There may be instances, though. I don’t know how to articulate my intuitions here without going into examples involved enough that they’d derail the conversation. If nothing else, it’s true that a probability estimate does not suffice to capture the knowledge that one has about an event, and that one can better use probabilities as an input into one’s epistemology if one keeps this in mind.
Yes, it’s just a matter of which way of looking at things is most helpful psychologically (for humans, with human biases).