“Proving useful in your life” (but not necessarily “proving beneficial”) is the core of instrumental rationality, but what’s useful is not necessarily what’s true, so it’s important to refrain from using that metric in epistemic rationality.
I suppose that is a tension between epistemic and instrumental rationality.
Put in terms of a microeconomic trade-off: The marginal value of having correct beliefs diminishes beyond a certain threshold. Eventually, the marginal value of increasing one’s epistemic accuracy dips below the marginal value that comes from retaining one’s mistaken belief. At that point, an instrumentally rational agent may stop increasing accuracy.
On the other hand, it may be a problem of local-versus-global optima: The marginal value of accuracy may creep up again. Or maybe those who see it as a problem can fix it with the right augmentation.
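The trade-off above can be made concrete with a toy model. This is a minimal sketch under invented assumptions: the concave value function and the numbers are arbitrary illustrations, not anything from the thread. An agent raises its accuracy step by step and stops once the marginal value of the next step falls below the constant value it gets from retaining a mistaken belief.

```python
def marginal_value_of_accuracy(a: float) -> float:
    """Diminishing returns: each step of accuracy is worth less than the last.
    The 1/(1+a)^2 form is an arbitrary concave choice for illustration."""
    return 1.0 / (1.0 + a) ** 2

def stopping_accuracy(value_of_mistaken_belief: float, step: float = 0.01) -> float:
    """Increase accuracy while its marginal value exceeds the value the
    mistaken belief delivers; return the accuracy level where the agent stops."""
    a = 0.0
    while marginal_value_of_accuracy(a) > value_of_mistaken_belief:
        a += step
    return a

# A belief that is more valuable to retain halts the agent at lower accuracy.
assert stopping_accuracy(0.5) < stopping_accuracy(0.1)
```

In this toy model the "local-versus-global optima" worry corresponds to the marginal value curve turning back upward past the stopping point, which a purely greedy stopping rule like the one above would never discover.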
There is no tension. Epistemic rationality is merely instrumental, while instrumental rationality is not. They are different kinds of things. Means to an end don’t compete with what the end is.
Example: cognitive behavioral therapy is often useful “to solve problems concerning dysfunctional emotions”, but not useful for pursuing truth. There’s also mindfulness-based cognitive therapy for an example more relevant to Buddhism.
Upvoted for this
It is useful for pursuing truth to the extent that it corrects genuinely false beliefs, when those errors happen to tend in one direction.
This sometimes comes at the expense of other truths, just as pursuing evidence for your preferred conclusion turns up real evidence but a less accurate map.