I don’t know for sure that I do care; I got started on this line of questioning by asking a moral nihilist (if that’s accurate) what they meant by ‘should’ in the claim that if you want to save kids, you should save them. Turns out, the consequent of that sentence is pleonastic with the antecedent.
Probably like you, I’d raise doubts as to what the difference could be between being rational on a given occasion and getting the highest expected return. On the other hand, I don’t entirely trust my preferences, and the best way to represent the gap between what I want and what I should want seems to be by using words like ‘rationality’, ‘truth’ and maybe ‘goodness’. If you asked me to choose between the morally right thing and the thing that maximises my own standard of moral value, I’d unhesitatingly go for the former.
So I agree that there’s some absurdity in distinguishing, in some particular case, between rationality and a particular choice that maximises expected (objective) value. It may be wrong to conclude from this that we can eschew mention of rationality in our actual decision-making, though: the home of that term may be as a goal or aim, rather than as something standing alongside a particular decision. Once the rational decision has been arrived at, it’s identical with ‘rationality’. Until then, rationality is the ideal that guides you there. Something like that.
I... I don’t know what to answer at this point. Do you have any idea how you came to care about being rational in the first place?
Would you rather be the one who did what you think of as rational, or the one who is currently smiling from on top of a giant heap of utility? (Too bad that post must use such a potentially controversial example...)