(I’m about 90% sure that you already know what I’m going to say, but the remaining 10% leads me to say it just in case, and it might help onlookers as well.)
A one-level prior already contains the information about how strongly you update. For example, if you have a prior about the joint outcome of two coinflips, consisting of four probabilities that sum to 1, then learning the outcome of the first coinflip allows you to update your beliefs about the second one, and any Bayesian-rational method of updating in that situation (corresponding to a single coin with known bias, single coin with unknown bias, two coins with opposite biases...) can be expressed that way.
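To make this concrete, here is a minimal sketch (not from the original comment; the numbers are illustrative). It uses the joint prior you get for the "single coin with unknown bias" case under a uniform prior on the bias, and shows that conditioning on the first flip's outcome shifts the probability of the second flip, with no separate "strength of update" parameter needed:

```python
# Joint prior P(first, second) over two flips of a coin with unknown bias,
# under a uniform prior on the bias p:
#   P(HH) = E[p^2] = 1/3,  P(HT) = P(TH) = E[p(1-p)] = 1/6,  P(TT) = 1/3.
joint = {
    ("H", "H"): 1/3,
    ("H", "T"): 1/6,
    ("T", "H"): 1/6,
    ("T", "T"): 1/3,
}

def p_second_heads(joint, first=None):
    """Probability the second flip is heads, optionally conditioned
    on the observed outcome of the first flip."""
    if first is None:
        # Marginal: sum over both possible first outcomes.
        return sum(p for (f, s), p in joint.items() if s == "H")
    # Conditional: restrict to the observed first outcome and renormalize.
    total = sum(p for (f, s), p in joint.items() if f == first)
    return sum(p for (f, s), p in joint.items()
               if f == first and s == "H") / total

print(p_second_heads(joint))             # 0.5: marginal belief
print(p_second_heads(joint, first="H"))  # 2/3: heads observed, belief rises
print(p_second_heads(joint, first="T"))  # 1/3: tails observed, belief falls
```

Swapping in a different joint table (e.g. a product distribution for a coin of known bias, where conditioning changes nothing, or an anti-correlated table for two oppositely biased coins) yields the other update behaviors, all from the same four numbers.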
Yes, it’s just a matter of which way of looking at things is most helpful psychologically (for humans, with human biases).