Can you give a simple example where your flavor of deontology conflicts with consequentialism?
I don’t push people in front of trolleys. (Cue screams of outrage!)
This leads to an idea for a story in which, in the far future, non-consequentialist views are considered horrible. The worst insult that can be given is “non-pusher”.
I’m even better: I don’t think metal should be formed into trolleys or tracks in the first place.
How would you transport ore from mines to refineries and metal from refineries to extruders, then? Some evils really are necessary. I prefer to focus on the rope, which ought not to be securing people to tracks.
Please go away.
OK.
Edit: Not OK.
How about the original form of the dilemma? Would you flip a switch to divert the trolley to a track with 1 person tied to it instead of 5?
No.
(However, if there are 5 people total, and I can arrange for the train to run over only one of those same people instead of all five, then I’ll flip the switch on the grounds that the one person is unsalvageable.)
I would predict that if the switch were initially set to send the trolley down the track with one person, you also would not flip it.
But suppose that you first see the two paths with people tied to the track, and you have not yet observed the position of the switch. As you look towards it, is there any particular position that you hope the switch is in?
I might have such hopes, if I had a way to differentiate between the people.
(And above, when I make statements about what I would do in trolley problems, I’m just phrasing normative principles in the first person. Sufficiently powerful prudential considerations could impel me to act wrongly. For instance, I might switch a trolley away from my sister and towards a stranger just because I care about my sister more.)
Find a point of balance, where the decision swings. What about sister vs. 2 people? Sister vs. a million people? Say the balance is found at N people, so you value N+1 strangers more than your sister, and N strangers less. Then N+1 people can be used in place of your sister in the variant with 1 person on the other track: just as you’d reroute the train away from your sister and towards a random stranger, you’d reroute it away from N+1 strangers (who are even more valuable) and towards one stranger.
Then, work back from that. If you’d reroute away from N+1 people and towards 1 person, there is a smallest number M of people such that you wouldn’t reroute away from M of them, but would for every k > M. And there you have a weak trolley problem, closer to the original formulation.
(This is not the strongest problem with your argument, but an easy one, and a step towards seeing the central problem.)
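(A minimal sketch of the balance-point search described above, assuming, purely for illustration, that the choice can be summarized by a numeric value function; the names and the example figure are hypothetical, not taken from the thread.)

```python
# Hypothetical illustration only: assume choices can be summarized by a numeric
# value function, with each stranger worth `value_of_stranger` and the sister
# worth `value_of_sister` (both made-up placeholders).

def balance_point(value_of_sister: float, value_of_stranger: float = 1.0) -> int:
    """Largest N such that N strangers are valued no more than the sister."""
    n = 0
    while (n + 1) * value_of_stranger <= value_of_sister:
        n += 1
    return n

# If the sister is valued like a thousand strangers, the decision swings
# between N = 1000 (don't reroute towards her) and N + 1 = 1001 (do reroute).
print(balance_point(value_of_sister=1000.0))  # -> 1000
```

(An analogous search, run on N+1 strangers vs. one stranger instead of sister vs. strangers, would locate the smaller threshold M the comment mentions.)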
Um, my prudential considerations do indeed work more or less consequentialistically. That’s not news to me. They just aren’t morality.
Wait a second: is there a difference of definitions here? That sounds a lot like what you’d get if you started with a mixed consequentialist and deontological morality, drew a boundary around the consequentialist parts and relabeled them not-morality, but didn’t actually stop following them.
I presume prudential concerns are non-moral concerns. In the way that maintaining an entertainment budget next to your charity budget while kids are starving in poorer countries is not often considered a gross moral failure, I would consider the desire for entertainment to be a prudential concern that overrides or outweighs morality.
I guess that would yield something similar. It usually looks to me like consequentialists just care about the thing I call “prudence” and not at all about the thing I call “morality”.
That seems like a reasonable summary to me. Does it seem to you that we ought to? (Care about morality, that is.)
I think you ought to do morally right things; caring per se doesn’t seem necessary.
Fair enough.
Does it usually look to you like consequentialists just do prudential things and not morally right things?
Well, the vast majority of situations have no conflict. Getting a bowl of cereal in the morning is both prudent and right if you want cereal and don’t have to do anything rights-violating or uncommonly destructive to get it. But in thought experiments it looks like consequentialists operate (or endorse operating) solely according to prudence.
Agreed that it looks like consequentialists operate (1) solely according to prudence, if I understand properly what you mean by “prudence.”
Agreed that in most cases there’s no conflict.
I infer you believe that in cases where there is a conflict, deontologists do (or at least endorse) the morally right thing, and consequentialists do (or at least endorse) the prudent thing. Is that right?
I also infer from other discussions that you consider killing one innocent person to save five innocent people an example of a case with conflict, where the morally right thing to do is to not-kill an innocent person. Is that right?
===
(1) Or, as you say, at least endorse operating. I doubt that we actually do, in practice, operate solely according to prudence. Then again, I doubt that anyone operates solely according to the moral principles they endorse.
Right and right.
OK, cool. Thanks.
If I informed you (1) that I would prefer that you choose to kill me rather than allow five other people to die so I could go on living, would that change the morally right thing to do? (Note I’m not asking you what you would do in that situation.)
===
(1) I mean convincingly informed you, not just posted a comment about it that you have no particular reason to take seriously. I’m not sure how I could do that, but just for concreteness, suppose I had Elspeth’s power.
(EDIT: Actually, it occurs to me that I could more simply ask: “If I preferred...,” given that I’m asking about your moral intuitions rather than your predicted behavior.)
Yes, if I had that information about your preferences, it would make it OK to kill you for purposes you approved. Your right to not be killed is yours; you don’t have to exercise it if you don’t care to.
Does the importance of prudence ever scale without bound, such that it dominates all moral concerns if the stakes get high enough?
I don’t know about all moral concerns. A subset of moral concerns are duplicated and folded into my prudential ones.
Can’t parse.
Easy reader version for consequentialists: I’m like a consequentialist with a cherry on top. I think this cherry on top is very, very important, and like to borrow moralistic terminology to talk about it. Its presence makes me a very bad consequentialist sometimes, but I think that’s fine.
If this cherry on top costs people lives, it’s not “fine”, it’s evil incarnate. You should cut this part of yourself out without mercy.
(Compare to your Luminosity vampires, who are sometimes good, nice people, even if they eat people.)
I don’t think cutting out deontology entirely would be a good thing. I do think that the relative weights of deontological and consequentialist rules need to be considered, and that choosing inaction in a 5 lives:1 life trolley problem strongly suggests misweighting. But that’s just a thought experiment; I wouldn’t consider it wrong to choose inaction in, say, a 1.2 lives:1 life trolley problem.
I agree (if not on 1.2 figure, then still on some 1+epsilon).
It’s analogous to, say, prosecuting homosexuals. If some people feel bad emotions caused by others’ homosexuality, that reason is weaker than the disutility caused by the prosecution, and so sufficiently reflective bargaining between these reasons results in not prosecuting (it’s also much easier, in the long run, to adjust one’s attitude towards homosexuality than one’s sexual orientation).
Here, we have moral intuitions that suggest adhering to moral principles and virtues, with the disutility of overcoming them (in general, or just in high-stakes situations) bargaining against the disutility of following them and thereby making suboptimal decisions. Of these two, consequences ought to win out, as they can be much more severe (while the psychological disutility is bounded), and can’t be systematically dissolved (while a culture of consequentialism could eventually make it psychologically easier to suppress non-consequentialist drives).
I think you mean “persecuting”, although depending on what exactly you’re talking about I suppose you could mean “prosecuting”.
Unclear. I wanted to refer to legal acceptance as reflective distillation of social attitude as much as social attitude itself. Maybe still incorrect English usage?
I interpret this as saying that he currently acts like a consequentialist, but feels guilty after breaking a deontological principle, would behave in a more deontological fashion if he had more willpower, and would self-modify to be purely deontological if he had the chance. Is this correct?
Who are you talking about?
What if it were 50 people? 500? 5*10^6? The remainder of all humanity?
My own position is that morality should incorporate both deontological and consequentialist terms, but they scale at different rates, so that deontology dominates when the stakes are very small and consequentialism dominates when the stakes are very large.
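(One toy way to make “scale at different rates” concrete, offered as a hedged sketch rather than the commenter’s actual formalism: a bounded deontological penalty set against a consequentialist term that grows with the number of lives at stake.)

```python
# Toy model, purely illustrative: a fixed penalty for actively killing someone
# (the deontological term) versus a consequentialist term that grows linearly
# with the lives at stake, so deontology dominates small trolley problems and
# consequentialism dominates enormous ones. The constant is made up.

KILLING_PENALTY = 3.0  # hypothetical weight, not a figure from the thread

def should_divert(lives_on_main_track: int) -> bool:
    """Divert onto the one person iff net lives saved outweigh the fixed penalty."""
    net_lives_saved = lives_on_main_track - 1  # the one person on the side track dies
    return net_lives_saved > KILLING_PENALTY

print(should_divert(2))          # False: tiny stakes, the deontological term wins
print(should_divert(5_000_000))  # True: huge stakes, the consequentialist term wins
```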
I am obliged to act based on my best information about the situation. If that best information tells me that:
I have no special positive obligations to anyone involved,
The one person is neither willing to be run over to save the others nor simply willing to be run over (e.g. because ey is suicidal), and
The one person is not morally responsible for the situation at hand or for any other wrong act such that they have waived their right to life,
Then I am obliged to let the trolley go. However, I have low priors on most humans being so very uninterested in helping others (or at least in having an infrastructure to live in) that they wouldn’t be willing to die to save the entire rest of the human species. So if that were really what was at stake, the lone person tied to the track would have to be loudly announcing “I am a selfish bastard and I’d rather be the last human alive than die to save everyone else in the world!”
And, again, prudential concerns would probably kick in, most likely well before there were hundreds of people on the line.
Would it be correct to say that, insofar as you would hope that the one person would be willing to sacrifice his/her life for the cause of saving the 5*10^6 others, you yourself would pull the switch and then willingly sacrifice yourself to the death penalty (or whatever penalty there is for murder) for the same cause?
I’d be willing to die (including as part of a legal sentence) to save that many people. (Not that I wouldn’t avoid dying if I could, but if that were a necessary part of the saving-people process I’d still enact said process.) I wouldn’t kill someone I believed unwilling, even for the same purpose, including via trolley.
I feel like the difference between “No matter what, this person will die” and “No matter what, one person will die” is very subtle. It seems like you could arrange thought experiments that trample this distinction. Would that pose a problem?
I don’t remember the details, but while I was at the SIAI house I was presented with some very elaborate thought experiments that attempted something like this. I derived the answer my system gives and announced it, and everyone made outraged noises, but they also made outraged noises when I answered standard trolley problems, so I’m not sure to what extent I should consider that a remarkable feature of those thought experiments. Do you have one in mind you’d like me to reply to?
Not really. I am mildly opposed to asking trolley problem questions. I mostly just observed that, in my brain, there wasn’t much difference between:
Set of 5 people where either 1 dies or 5 die.
Set of 6 people where either 1 dies or 5 die.
I wasn’t sure exactly what work the word ‘unsalvageable’ was doing: was it that this person cannot in principle be saved, so er life is ‘not counted’, and really you have
Set of 4 people where either none die or 4 die?
Yes, that’s the idea.
I see. My brain automatically does the math for me and sees 1 or 5 as equivalent to none or four. I think it assumes that human lives are fungible or something.
That’s a good brain. Pat it or something.