Right, so there’s room here for a burden-of-proof disagreement: you find it unlikely on priors that a single distribution can accurately capture realistic states of knowledge, whereas I don’t.
If we’ve arrived at a burden-of-proof disagreement, then I’d say that’s sufficient to back up my answer at top-of-thread:
> both imprecise probabilities and maximality seem like ad-hoc, unmotivated methods which add complexity to Bayesian reasoning for no particularly compelling reason.
I said I don’t know of any compelling reason (i.e. a positive argument, beyond just “this seems unlikely to Anthony and some other people on priors”) to add this extra piece to Bayesian reasoning. And indeed, I still don’t. That does not mean I necessarily expect you to be convinced that we don’t need the extra piece; I haven’t spelled out a positive argument here either.
It’s not that I “find it unlikely on priors”: I’m literally asking what your prior on the proposition I mentioned is, and why you endorse that prior. If you answered that, I could explain why I’m skeptical that that prior really is the unique representation of your state of knowledge. (It might well be the unique representation of the intuitions about the proposition that are most salient to you, but that’s not your state of knowledge.) I don’t know what further positive argument you’re looking for.
Someone could fail to report a unique precise prior (and one that’s consistent with their other beliefs and priors across contexts) for any of the following reasons, which seem worth distinguishing:
1. There is no unique precise prior that can represent their state of knowledge.
2. There is a unique precise prior that represents their state of knowledge, but they don’t have or use it, even approximately.
3. There is a unique precise prior that represents their state of knowledge, but in practice they can only report (precise or imprecise) approximations of it; the approximations can differ not just in how many decimal places they compute, but in which considerations they fold into the prior. Hypothetically, in the limit of resources spent computing its values, the approximations would converge to this unique precise prior.
I’d be inclined to treat all three cases like imprecise probabilities, e.g. I wouldn’t permanently commit to a prior I wrote down to the exclusion of all other priors over the same events/possibilities.
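To make “treat all three cases like imprecise probabilities” concrete, here is a minimal Python sketch of the standard setup: a credal set represented as a finite set of priors, with the maximality rule leaving an act permissible unless some alternative beats it under every prior in the set. The specific priors, acts, and payoffs are all hypothetical, chosen purely for illustration.

```python
# Minimal sketch: a credal set as a finite set of priors, plus the
# maximality rule: an act is permissible iff no alternative has strictly
# higher expected utility under *every* prior in the set.
# All priors and payoffs below are hypothetical, purely for illustration.

credal_set = [(0.3, 0.7), (0.5, 0.5), (0.6, 0.4)]  # each: (P(s0), P(s1))

payoffs = {           # utility of each act in each of the two states
    "act_A": (10.0, 0.0),
    "act_B": (4.0, 4.0),
    "act_C": (0.0, 3.0),
}

def expected_utility(act, prior):
    return sum(p * u for p, u in zip(prior, payoffs[act]))

def dominates(b, a):
    """True iff act b beats act a under every prior in the credal set."""
    return all(expected_utility(b, p) > expected_utility(a, p)
               for p in credal_set)

permissible = [a for a in payoffs
               if not any(dominates(b, a) for b in payoffs if b != a)]
print(permissible)  # -> ['act_A', 'act_B']; act_C is dominated by act_B
```

Note that if the credal set contains a single precise prior, the same rule collapses to ordinary expected-utility maximization: the permissible acts are exactly the argmax set under that prior.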