I might have such hopes, if I had a way to differentiate between the people.
(And above, when I make statements about what I would do in trolley problems, I’m just phrasing normative principles in the first person. Sufficiently powerful prudential considerations could impel me to act wrongly. For instance, I might switch a trolley away from my sister and towards a stranger just because I care about my sister more.)
Find the point of balance, where the decision swings. What about your sister vs. 2 people? Your sister vs. a million people? Say the balance is found at N people, so that you value N+1 strangers more than your sister, and N strangers less. Then N+1 people can be substituted for your sister in the variant with 1 person on the other track: just as you’d reroute the train away from your sister and toward a random stranger, you’d reroute it away from N+1 strangers (whom you value even more) and toward one stranger.
Then work back from that. If you reroute away from N+1 people and toward 1 person, there is some smallest number M such that you won’t reroute away from M people, but would for every k > M. And there you have a weak trolley problem, closer to the original formulation (sketched more formally below).
(This is not the strongest problem with your argument, but an easy one, and a step towards seeing the central problem.)
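(A minimal sketch of the threshold argument above, added for clarity; the value function v, its monotonicity, and the label S for the sister are illustrative assumptions, not something the comment itself specifies.)
Let v(k) be how much you value k strangers, increasing in k, and v(S) how much you value your sister.
Take N = max { k : v(k) < v(S) }, so that v(N) < v(S) ≤ v(N+1).
Since you would reroute away from your sister and toward one stranger, and v(N+1) ≥ v(S), you should also reroute away from N+1 strangers and toward one stranger.
Take M to be the smallest number such that you would reroute away from k strangers and toward one for every k > M; “M+1 vs. 1” is then a weak trolley problem, with M possibly far smaller than N.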
Um, my prudential considerations do indeed work more or less consequentialistically. That’s not news to me. They just aren’t morality.
Wait a second—is there a difference of definitions here? That sounds a lot like what you’d get if you started with a mixed consequentialist and deontological morality, drew a boundary around the consequentialist parts and relabeled them not-morality, but didn’t actually stop following them.
I presume prudential concerns are non-moral concerns. In the way that maintaining an entertainment budget next to your charity budget while kids are starving in poorer countries is not often considered a gross moral failure, I would consider the desire for entertainment to be a prudential concern that overrides or outweighs morality.
I guess that would yield something similar. It usually looks to me like consequentialists just care about the thing I call “prudence” and not at all about the thing I call “morality”.
That seems like a reasonable summary to me. Does it seem to you that we ought to? (Care about morality, that is.)
I think you ought to do morally right things; caring per se doesn’t seem necessary.
Fair enough.
Does it usually look to you like consequentialists just do prudential things and not morally right things?
Well, the vast majority of situations have no conflict. Getting a bowl of cereal in the morning is both prudent and right if you want cereal and don’t have to do anything rights-violating or uncommonly destructive to get it. But in thought experiments it looks like consequentialists operate (or endorse operating) solely according to prudence.
Agreed that it looks like consequentialists operate (1) solely according to prudence, if I understand properly what you mean by “prudence.”
Agreed that in most cases there’s no conflict.
I infer you believe that in cases where there is a conflict, deontologists do (or at least endorse) the morally right thing, and consequentialists do (or at least endorse) the prudent thing. Is that right?
I also infer from other discussions that you consider killing one innocent person to save five innocent people an example of a case with conflict, where the morally right thing to do is to not-kill an innocent person. Is that right?
===
(1) Or, as you say, at least endorse operating. I doubt that we actually do, in practice, operate solely according to prudence. Then again, I doubt that anyone operates solely according to the moral principles they endorse.
Right and right.
OK, cool. Thanks.
If I informed you (1) that I would prefer that you choose to kill me rather than allow five other people to die so I could go on living, would that change the morally right thing to do? (Note I’m not asking you what you would do in that situation.)
===
(1) I mean convincingly informed you, not just posted a comment about it that you have no particular reason to take seriously. I’m not sure how I could do that, but just for concreteness, suppose I had Elspeth’s power.
(EDIT: Actually, it occurs to me that I could more simply ask: “If I preferred...,” given that I’m asking about your moral intuitions rather than your predicted behavior.)
Yes, if I had that information about your preferences, it would make it OK to kill you for purposes you approved. Your right to not be killed is yours; you don’t have to exercise it if you don’t care to.
Does the importance of prudence ever scale without bound, such that it dominates all moral concerns if the stakes get high enough?
I don’t know about all moral concerns. A subset of moral concerns are duplicated and folded into my prudential ones.
Can’t parse.
Easy reader version for consequentialists: I’m like a consequentialist with a cherry on top. I think this cherry on top is very, very important, and like to borrow moralistic terminology to talk about it. Its presence makes me a very bad consequentialist sometimes, but I think that’s fine.
If this cherry on top costs people lives, it’s not “fine”, it’s evil incarnate. You should cut this part of yourself out without mercy.
(Compare to your Luminosity vampires, who are sometimes good, nice people, even if they eat people.)
I don’t think cutting out deontology entirely would be a good thing. I do think that the relative weights of deontological and consequentialist rules needs to be considered, and that choosing inaction in a 5 lives:1 life trolley problem strongly suggests misweighting. But that’s just a thought experiment; and I wouldn’t consider it wrong to choose inaction in, say, a 1.2 lives:1 life trolley problem.
I agree (if not on 1.2 figure, then still on some 1+epsilon).
It’s analogous to, say, prosecuting homosexuals. If some people feel bad emotions caused by others’ homosexuality, that reason is weaker than the disutility caused by the prosecution, and so sufficiently reflective bargaining between these reasons results in not prosecuting (it’s also much easier, in the long run, to adjust one’s attitude towards homosexuality than one’s sexual orientation).
Here, we have moral intuitions that suggest adhering to moral principles and virtues, with the disutility of overcoming them (in general, or just in high-stakes situations) bargaining against the disutility of following them and thereby making suboptimal decisions. Of these two, consequences ought to win out, since they can be much more severe (while the psychological disutility is bounded) and can’t be systematically dissolved (while a culture of consequentialism could eventually make it psychologically easier to suppress non-consequentialist drives).
I think you mean “persecuting”, although depending on what exactly you’re talking about I suppose you could mean “prosecuting”.
Unclear. I wanted to refer to legal acceptance, as a reflective distillation of social attitude, as much as to the social attitude itself. Maybe still incorrect English usage?
I interpret this as saying that he currently acts consequentialist, but feels guilty after breaking a deontological principle, would behave in a more deontological fashion if he had more willpower, and would self-modify to be purely deontological if he had the chance. Is this correct?
Who are you talking about?