Although instrumental rationality is an interesting category, I tend to view it as ultimately boiling down to epistemic rationality. For example, I reason that A leads to B. I want B but I don’t want A, and now my motivations start traveling up and down the A → B causal chain until they reach equilibrium. Or, for another example, I notice that if I choose C for reason R, my rational game-partner will likewise choose C for reason R, because of some symmetry in our properties as agents. Now I need to compare the outcome of choices {C, C} to other possibilities, but I can rule out {C, D}, say.
My attraction to various options will change in response to learning these facts. But the role of rationality seems to end with arriving at and facing the facts.
No? Or, beside the point? (But if beside the point, still an interesting new point, I reckon.)
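To make the game-partner example above concrete, here is a minimal sketch in Python, assuming Prisoner’s-Dilemma-style payoffs for illustration (the numbers, the choice labels, and the helper function are assumptions of the sketch, not part of the argument itself): once symmetry forces both agents to the same choice, only the diagonal outcomes remain to be compared.

```python
# Illustrative sketch: a symmetric two-player game with assumed
# Prisoner's-Dilemma-style payoffs (the numbers are placeholders).
PAYOFFS = {  # (my choice, partner's choice) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def best_symmetric_choice(payoffs):
    """If symmetry guarantees both agents choose alike, only the
    diagonal outcomes {C, C} and {D, D} need to be compared;
    mixed outcomes like {C, D} are ruled out."""
    diagonal = {c: payoffs[(c, c)] for c in ("C", "D")}
    return max(diagonal, key=diagonal.get), diagonal

choice, live_options = best_symmetric_choice(PAYOFFS)
print(live_options)  # {'C': 3, 'D': 1}
print(choice)        # 'C' -- once {C, D} and {D, C} drop out, cooperation wins
```

With the off-diagonal outcomes ruled out, the whole decision reduces to facing the fact that {C, C} beats {D, D}.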
The trouble is that there is nothing in epistemic rationality that corresponds to “motivations” or “goals” or anything like that. Epistemic rationality can tell you that pushing a button will lead to puppies not being tortured, and not pushing it will lead to puppies being tortured, but unless you have an additional system that incorporates the desire for puppies not to be tortured, as well as a system for acting on that desire, telling you those facts is all epistemic rationality can do.
That’s entirely compatible with my point.
I see it as exactly the other way around: the only good reason to care about epistemic rationality is that it helps you be instrumentally rational. Obtaining accurate beliefs and then doing nothing with them is intellectual masturbation.
This could be a goal in itself.
This, too, is entirely compatible with my point. What rationality is, and why we care about it, are distinct questions.