If there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.
What do you mean by “rectified”, and are you sure you mean “rectified” rather than, say, “flagged for attention”? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn’t try to be consistent. I believe ‘immediately update your model somehow when you notice an inconsistency’ is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follow it], and I don’t think this belief is opposed to “rationalism”, which should only require not indefinitely tolerating inconsistency.)
The next paragraph applies there: you can rectify it by saying it’s a conflict between hypotheses / heuristics, even if you can’t get solid evidence on which is more likely to be correct.
Cases where you notice an inconsistency are often juicy opportunities to become more accurate.