For much the same reason that people who accept gambles based on their intuitions become money pumps (see: the entire field of behavioral economics), people who do ethics entirely based on moral intuitions become “morality pumps”.
I think this thought is worth pursuing in more concrete detail. If I prefer certainly saving 400 people to a 0.8 chance of saving 500 people, and prefer a 0.2 chance of killing 500 people to certainly killing 100 people, what crazy things can a competing agent get me to endorse? Can you get me to something that would be obviously wrong even deontologically, in the same way that losing all my money is obviously bad even behavioral-economically?
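To spell out why those two preferences are in tension, here is a quick numbers-only restatement (mine, not anything from the original exchange): with 500 people at risk, “certainly saving 400” and “certainly killing 100” are the same outcome, and the two gambles are the same lottery.

```python
# My own numbers-only restatement of the two preference pairs above;
# nothing here comes from the original discussion.

total_at_risk = 500

# Pair 1, "saving" frame: certainly save 400  vs.  save all 500 with p = 0.8.
sure_saved = 400
p_save_all = 0.8

# Pair 2, "killing" frame: certainly kill 100  vs.  kill all 500 with p = 0.2.
sure_killed = 100
p_kill_all = 0.2

# The two sure options are the same outcome (100 of the 500 dead)...
assert total_at_risk - sure_saved == sure_killed

# ...and the two gambles are the same lottery (everyone dies with p = 0.2,
# everyone lives with p = 0.8).
assert abs((1 - p_save_all) - p_kill_all) < 1e-9

# Expected deaths also match within each pair, so preferring the sure thing
# in the first pair and the gamble in the second turns on the wording alone.
print(total_at_risk - sure_saved)   # 100 deaths for sure
print(p_kill_all * total_at_risk)   # 100.0 expected deaths under the gamble
```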
If you have those preferences, then presumably small enough changes to the competing options in each case won’t change which outcome you prefer. And then we get this:
Competing Agent: Hey, Imm. I hear you prefer certainly saving 399 people to a 0.8 chance of saving 500 people. Is that right?
Imm: Yup.
Competing Agent: Cool. It just so happens that there’s a village near here where there are 500 people in danger, and at the moment we’re planning to do something that will save them 80% of the time but otherwise let them all die. But there’s something else we could do that will save 399 of them for sure, though unfortunately the rest won’t make it. Shall we do it?
Imm: Yes.
Competing Agent: OK, done. Oh, now, I realise I have to tell you something else. There’s this village where 100 people are going to die (aside: 101, actually, but that’s even worse, right?) because of a dubious choice someone made. I hear you prefer a 20% chance of killing 499 people to the certainty of killing 100 people; is that right?
Imm: Yes, it is.
Competing Agent: Right, then I’ll get there right away and make sure they choose the 20% chance instead.
At this point, you have gone from losing 500 people with p=0.2 and saving them with p=0.8, to losing one person for sure and then losing the rest with p=0.2 and saving them with p=0.8. Oops.
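To make the bookkeeping explicit, here is a small tally (my own sketch, not part of the original comment) of the starting plan against the plan you end up with after both switches, using the 500-person village and the 0.8/0.2 split from the dialogue:

```python
# Illustrative tally of the two positions; the 500-person village and the
# 0.8/0.2 split come from the dialogue above, the rest is my own sketch.

def expected_deaths(lottery):
    """lottery: list of (probability, deaths) pairs whose probabilities sum to 1."""
    return sum(p * deaths for p, deaths in lottery)

# Starting plan: all 500 are saved with p = 0.8, all 500 die with p = 0.2.
start = [(0.8, 0), (0.2, 500)]

# After both switches: one person dies for sure, and the other 499 face the
# same 0.8/0.2 gamble they started with.
end = [(0.8, 1), (0.2, 1 + 499)]

print(expected_deaths(start))  # 100.0
print(expected_deaths(end))    # 100.8

# Branch by branch, the end position has at least as many deaths as the start:
# one more in the p = 0.8 branch, the same number in the p = 0.2 branch. So it
# is worse no matter how killing is weighed against failing to save.
```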
Well sure. But my position only makes sense at all because I’m not a consequentialist and don’t see killing n people and saving n people as netting out to zero, so I don’t see that you can just add the people up like that.
Perhaps it wasn’t clear: those two were the same village. So I’m not adding up people, and I’m not assuming that anything cancels out with anything else. I’m observing that if you have those (inconsistent) preferences, and you hold them by enough of a margin that small tweaks to the numbers don’t flip them, then you end up happily making a sequence of changes that takes you back to something plainly worse than where you started. Just like getting money-pumped.
Firstly, a deontological position distinguishes between directly killing people and not saving them: killing innocent people is generally an objective moral wrong. Your scenario is deceptive because it makes it seem to lmm that innocents will be killed rather than not saved.
More importantly, Eliezer’s metaethics is based on the premise that people want to be moral. That’s the ONLY argument he has for a metaethics that gets around the is-ought distinction.
Say, for the sake of argument, that a person has a course of action compatible with deontology vs. one compatible with consequentialism, and those are their choices. Shouldn’t they ignore the stone tablet and choose the deontological one if that’s what their moral intuitions say? Eliezer can’t justify not doing so without contradicting his original premise.
So, I wasn’t attempting to answer the question “Are deontologists necessarily subject to ‘pumping’?” but the different question “Are people who work entirely off moral intuition necessarily subject to ‘pumping’?”. Imm’s question—if I didn’t completely misunderstand it, which of course I might have—was about the famous framing effect where describing the exact same situation two different ways generates different preferences. If you work entirely off intuition, and if your intuitions are like most people’s, then you will be subject to this sort of framing effect and you will make the choices I ascribed to Imm in that little bit of dialogue, and the result is that you will make two decisions both of which look to you like improvements, and whose net result is that more people die. On account of your choices. Which really ought to be unacceptable to almost anyone, consequentialist or deontologist or anything else.
I wasn’t attempting a defence of Eliezer’s metaethics. I was answering the more specific question that (I thought) Imm was asking.
I did mean I was making a deontological distinction between saving and killing, not just a framing question (and I didn’t really mean that scenario specifically; it was just the example that came to mind. The general question is the one I’m interested in; it’s just that as phrased it’s too abstract for me). Sorry for the confusion.