My reading suggests the issue is this: You have some set of axioms, and those axioms are the logical constructs your brain runs on. (You seem to be conflating them, to some extent, with priors, because you hold the precepts of Bayesian reasoning as direct axioms.) Other people have different axioms. You're asking for a set of logical steps, using your own axiom set, by which you can "prove" their axioms, so that you can run theirs as well.
The idea of holding different axioms sounds insane to you, and you have trouble articulating why. It's because they're different from the axioms you think in. It sounds insane to you because it is; any attempt you make at it is going to be like somebody who only speaks English trying to imagine what it feels like to think in Japanese. You don't "think" in that language, and until you can, you're only aping it, using unintelligible nonsense where logic should be.
Is this a fair assessment?