How about this suggestion/interpretation? Some probabilities are based on a lot of evidence already, and so should only be changed slightly when new evidence comes in (unless there’s a lot of it, of course); other probabilities are based on next to nothing, and we should be prepared to shift them dramatically if any actual evidence comes to light. Bad things can happen when the latter are mistakenly treated as if they were the former, but people aren’t good at keeping track of the difference. Introducing two-level “probabilities of probabilities” for handling the latter may not actually make them particularly manageable, but it could at least prevent them from being confused with the former, and if it prevents their being used much at all, perhaps that’s for the best.
(I’m about 90% sure that you already know what I’m going to say, but the remaining 10% leads me to say it just in case, and it might help onlookers as well.)
A one-level prior already contains the information about how strongly you update. For example, if you have a prior over the joint outcome of two coinflips, consisting of four probabilities that sum to 1, then learning the outcome of the first coinflip allows you to update your beliefs about the second one, and any Bayesian-rational method of updating in that situation (corresponding to a single coin with known bias, a single coin with unknown bias, two coins with opposite biases, ...) can be expressed that way.
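For onlookers, here’s a minimal sketch of the “single coin with unknown bias” case (in Python; the particular biases and variable names are illustrative assumptions, not anything from the comment above). The four-number joint prior by itself determines how strongly the first flip shifts beliefs about the second:

```python
# Minimal sketch: a one-level joint prior over two coinflips already
# encodes how strongly observing the first flip updates the second.
#
# Illustrative "unknown bias" case: the coin's bias is either 0.1 or 0.9,
# each with probability 0.5. (These numbers are assumptions for the demo.)
from itertools import product

biases = {0.1: 0.5, 0.9: 0.5}  # P(bias)

# Joint prior P(flip1, flip2): four probabilities that sum to 1,
# obtained by mixing over the two possible biases.
joint = {
    (f1, f2): sum(
        p_b * (b if f1 == "H" else 1 - b) * (b if f2 == "H" else 1 - b)
        for b, p_b in biases.items()
    )
    for f1, f2 in product("HT", repeat=2)
}
assert abs(sum(joint.values()) - 1.0) < 1e-12

# Before any evidence, P(flip2 = H) = 0.5 by symmetry.
p2 = joint[("H", "H")] + joint[("T", "H")]

# Condition on flip1 = H: ordinary Bayesian updating on the joint prior.
p2_given_h = joint[("H", "H")] / (joint[("H", "H")] + joint[("H", "T")])

print(p2)          # ~0.5
print(p2_given_h)  # ~0.82 -- heads is evidence for the 0.9-bias coin
```

In the “known bias” case the joint prior factorizes (all four entries equal 0.25 for a known-fair coin), so conditioning on the first flip leaves beliefs about the second unchanged; either way, the strength of the update is already encoded in the one-level joint prior.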
Yes, it’s just a matter of which way of looking at things is most helpful psychologically (for humans, with human biases).