My point is that there are some propositions, for instance the epistemic perfection of Bayesianism, to which you attach a probability of exactly 1.0. Yet you want to remain free to reject some of those “100% sure” beliefs at some future time, should evidence or argument convince you to do so. So I am advising you to have one Bayesian in your head who believes the ‘obvious’, and at least one who doubts it. Then, if the obvious is ever falsified, you will still have one Bayesian you can trust.
That’s definitely a good approximation of the organizational structure of an imperfect Bayesian’s mind. You have a human consciousness simulating a Bayesian probability-computer, but the human contains heuristics powerful enough to overrule the Bayesian in some situations.
I don’t think the other guy counts as a Bayesian.
This has nothing to do with arguments, though.
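The disagreement above turns on a simple property of Bayes’ rule: a prior of exactly 1.0 can never be revised, no matter what evidence arrives, which is why the ‘doubting’ Bayesian has to start out strictly below certainty. Here is a minimal Python sketch of that point, using a posterior() helper written only for this illustration (it is not from the original dialogue):

    # Bayes' rule: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) (1 - P(H))]
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Hypothetical helper for this sketch: posterior P(H|E) via Bayes' rule."""
        numerator = p_e_given_h * prior
        denominator = numerator + p_e_given_not_h * (1.0 - prior)
        return numerator / denominator

    # Evidence 999 times more likely if H is false cannot budge a prior of
    # exactly 1.0, because the (1 - prior) term vanishes:
    print(posterior(1.0, 0.001, 0.999))    # -> 1.0
    # The same evidence moves a prior just below certainty down to about 0.5:
    print(posterior(0.999, 0.001, 0.999))  # -> 0.5

Only the second Bayesian, the one who assigned the ‘obvious’ something less than 1.0, can ever change its mind when the falsifying evidence shows up.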