Hmm, that’s a good answer. But akratic cases seem to me to be at least a little bit different: in the case of akrasia, I want to keep working but I also clearly want to read TVTropes (otherwise, why would I be tempted?). And so it’s not as if I’m simply failing to do what I want; I’m just doing what I want less instead of what I want more.
Now that I put it like that...I’m starting to wonder how akrasia is even a coherent idea. What could it mean for my desire to read TVTropes to overwhelm my desire to work except that I want to read TVTropes more? And if I want to read TVTropes more than I want to work, in what sense am I making a mistake?
And in the case of giving money to save children, you want the children to be saved but you also want to keep your money to spend it on other stuff.
It can be described as different parts of you, or different time-slices of you, wanting different things: i.e., what you-yesterday wanted is different from what you-today wish you-yesterday had done. Maybe you now regret spending all afternoon reading TVTropes rather than working.
Sorry, I was thinking of a crazier kind of situation. I’m thinking of a situation where you want to save the kids, and this is your all-things-considered preference. There are other things you want, but you’ve reflected and you want this more than anything else (and let’s say you’re not self-deceived about this). It follows then that you should save the kids. But say you don’t; what do we call that? And I want to grant straight off that there may be some kind of impossibility in my description. Only, there probably should be no impossibility here, otherwise I’m at a loss as to how the word ‘should’ is being used.
Thanks for the link, I’ll think this over.
Well, that’s just making a trade-off. If you like strawberry ice cream, and you like chocolate ice cream, but you can’t afford to eat both, and you like chocolate ice cream more than strawberry ice cream, you won’t eat strawberry ice cream even though you like it.
So what would you call it if, in the above scenario, I ate some strawberry ice cream? Assume that my desires are consistent over time, and that my desiring-parts have been reconciled without contradiction, i.e. that this is not a case of akrasia. Am I describing something impossible? Or am I just behaving irrationally?
Yes, if all the assumptions you made hold (also, no declining marginal utility for any ice cream flavour, no preference for variety for variety’s sake, and similar), then I would call eating strawberry ice cream irrational.
(How likely these assumptions are to be a reasonable approximation to a scenario in real life, that’s another story; for example, people get bored when they always do the same things.)
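To pin down the sense of ‘irrational’ at work in this exchange, here is a minimal sketch of the trade-off framing; the flavours and numbers are purely illustrative assumptions, not anything from the thread. Under the stated assumptions, the rational choice is simply whichever affordable option you prefer more, and ‘irrational’ just labels picking anything else.

```python
# Illustrative sketch only: the preference scores are made-up assumptions,
# standing in for "how much you like each flavour" under the thread's
# idealised conditions (no boredom, no declining marginal utility, one scoop).
preferences = {"chocolate": 10, "strawberry": 7}

def rational_choice(prefs):
    """Return the most-preferred option, i.e. the one with the highest score."""
    return max(prefs, key=prefs.get)

chosen = "strawberry"
if chosen != rational_choice(preferences):
    print(f"Choosing {chosen} is 'irrational' here: it is not the most-preferred option.")
```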
Okay, thanks. So the ‘should’ of ‘you should save the children (if you want to)’ is a ‘should’ of rationality. Now do I have any reason at all to be rational in this way, or do I just have reason to get the thing I want (i.e. by reason of wanting it)?
I mean that if I want X, and this is a reason to get X, do I have another reason to get X, namely that to do so would be rational and to fail to do so would be irrational?
You appear to be failing the twelfth virtue. Rationality is that which leads you to systematically get what you want, not some additional thing you might want in itself.
Hmm, so this seems like a problematic thing to tell someone: if I listen to you, then I’m going to be changing my mind about an object-level question (“do we have reasons to be rational?”) because taking a certain position on that question violates a ‘virtue of rationality’. So if I do heed your warning, I fail in the very same way. If I don’t, then I’m stuck in my original failure.
But fair enough, I can’t think of a way to defend the idea of having reasons (specifically) to be rational at the moment.
I—I don’t know what to answer at this point—Do you have any idea how you came to care about being rational in the first place?
Would you rather be the one who did what you think of as rational, or the one who is currently smiling from on top of a giant heap of utility? (Too bad that post must use such a potentially controversial example...)
I don’t know for sure that I do care; I got started on this line of questioning by asking a moral nihilist (if that’s accurate) what they meant by ‘should’ in the claim that if you want to save kids, you should save them. Turns out, the consequent of that sentence is pleonastic with the antecedent.
I’d probably, like you, raise doubts as to what the difference could be between being rational on a given occasion and getting the highest expected return. On the other hand, I don’t entirely trust my preferences, and the best way to represent the gap between what I want and what I should want seems to be by using words like ‘rationality’, ‘truth’ and maybe ‘goodness’. If you asked me to choose between the morally right thing and the thing that maximises my own standard of moral value, I’d unhesitatingly go for the former.
So I agree that there’s some absurdity in distinguishing, in some particular case, between rationality and a particular choice that maximises expected (objective) value. It may be wrong to conclude from this that we can eschew mention of rationality in our actual decision-making, though: the home of that term may be as a goal or aim, rather than as something standing alongside a particular decision. Once the rational decision has been arrived at, it’s identical with ‘rationality’. Until then, rationality is the ideal that guides you there. Something like that.