I have the impression that you’re pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we’re discussing.
Nope. I express my rhetorical contempt in, um, more obvious ways. It’s not exactly that I don’t understand; it’s rather that I see multiple ways of proceeding and I don’t know which one you have in mind (you, of course, do).
By the way, as a preface I should point out that we are not discussing “right” and “wrong”, which, I feel, are anti-useful terms in this discussion. Morals are value systems and they are not coherent in humans. We’re talking mostly about implications of certain moral positions and how they might or might not conflict with other values.
you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things
Yes, I accept that.
by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.
Not quite. I don’t think you can make a causal chain there. You can make a probabilistic chain of expectations with a lot of uncertainty in it. Averages are not equal to specific actions—for a hypothetical example, choosing a lifestyle which involves enough driving so that in 10 years you drive the average number of miles per traffic fatality does not mean you kill someone every 10 years.
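To put numbers on that, here is a minimal sketch (the Poisson model and the calibration to exactly one expected fatality over the period are illustrative assumptions of mine, not real traffic statistics):

```python
import math

# Illustrative assumption: fatal accidents arrive as a Poisson process,
# with the rate calibrated so that driving the average miles-per-fatality
# for 10 years gives an *expected* fatality count of exactly 1.
lam = 1.0  # expected fatalities over the 10-year period

def p_fatalities(k: int) -> float:
    """Probability of causing exactly k fatalities in the period."""
    return math.exp(-lam) * lam**k / math.factorial(k)

print(f"P(no one killed)    = {p_fatalities(0):.3f}")  # ~0.368
print(f"P(exactly 1 killed) = {p_fatalities(1):.3f}")  # ~0.368
print(f"P(2 or more killed) = {1 - p_fatalities(0) - p_fatalities(1):.3f}")  # ~0.264
```

The expected value is one death, yet under this model more than a third of such drivers kill no one at all: the average does not translate into a specific outcome for a specific driver.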
However in this thread I didn’t focus on that issue—for the purposes of this argument I accepted the thesis and looked into its implications.
Your objection to this wasn’t to argue with the moral principles involved but to suggest that there’s a symmetry problem
Correct.
“killing a child is morally equivalent to buying an expensive luxury” is less plausible than “buying an expensive luxury is morally equivalent to killing a child”
It’s not an issue of plausibility. It’s an issue of bringing to the forefront the connotations and value conflicts.
Singer goes for shock value by putting an equals sign between what is commonly considered heinous and what’s commonly considered normal. He does this to make the normal look (more) heinous, but you can reduce the gap from both directions—making the heinous more normal works just as well.
your proposed reversal is something like “for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury”.
I am not exactly proposing it; I am pointing out that the weaker form of this reversal (for some cases) logically follows from Singer’s proposition, and if you don’t think it does, I would like to know why it doesn’t.
to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.
Well, to accept the Singer position means that you kill a child every time you spend the appropriate amount of money (and I don’t see what “luxuries” have to do with it—you kill children by failing to max out your credit cards as well).
In common language, however, “killing a child” does not mean “fail to do something which could, we think, on the average, avoid one death somewhere in Africa”. “Killing a child” means doing something which directly and causally leads to a child’s death.
Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong.
No. I think the original principle is wrong, but that’s irrelevant here—in this context I accept the Singerian principle in order to more explicitly show the problems inherent in it.
Taking that position conveniently gets one out of having to see buying a TV as equivalent to letting a child die—but I don’t see how it’s a coherent one. (Especially if, as seems to be the case, you agree with the Singerian position that you’re as responsible for the consequences of your inactions as of your actions.)
Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1⁄5, so that on average 20 children die but no particular child will definitely die. (Perhaps what it does is to increase their chances of dying in some fashion, so that even the ones that do die can’t be known to be the result of your action.) Which do you prefer?
I say the first is clearly better, even though it might be more unpleasant to contemplate. On average, and the large majority of the time, it results in fewer deaths.
In which case, taking an action (or inaction) that results in the second is surely no improvement on taking an action (or inaction) that results in the first.
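To make “the large majority of the time” concrete, here is a minimal sketch of the arithmetic (treating the second option as 100 independent 1⁄5 risks, which is just the literal reading of the hypothetical):

```python
from math import comb

n, p = 100, 0.2  # 100 children, each killed with probability 1/5

def pmf(k: int) -> float:
    """Probability that exactly k of the 100 children die."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance the probabilistic option comes out no worse than the certain
# death of 10 children, i.e. produces 10 or fewer deaths.
p_no_worse = sum(pmf(k) for k in range(11))
print(f"P(<= 10 deaths) = {p_no_worse:.4f}")      # ~0.0057
print(f"P(>  10 deaths) = {1 - p_no_worse:.4f}")  # ~0.9943
```

So the second option kills more children than the first over 99% of the time, on top of killing twice as many in expectation.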
Incidentally, I’m happy to bite the bullet on the driving example. Every mile I drive incurs some small but non-zero risk of killing someone, and what I am doing is trading off the danger to them (and to me) against the convenience of driving. As it happens, the risk is fairly small, and behind a Rawlsian veil of ignorance I’m content to choose a world in which people drive as much as I do rather than one in which there’s much less driving, much more inconvenience, and fewer deaths on the road. (I’ll add that I don’t drive very much, and drive quite carefully.)
making the heinous more normal works just as well.
I think that when you come at it from that direction, what you’re doing is making explicit how little most people care in practice about the suffering and death of strangers far away. Which is fair enough, but my impression is that most thoughtful people who encounter the Singerian argument have (precisely by being confronted with it) already seen that.
the weaker form of this reversal [...] logically follows from Singer’s proposition and if you don’t think it does, I would like to know why it doesn’t.
I agree: it does. The equivalence seems obvious enough to me that I’m not sure why it’s supposed to change anyone’s mind about anything, though :-).
I don’t see what “luxuries” have to do with it
Only the fact that trading luxuries against other people’s lives seems like a worse problem than trading “necessities” against other people’s lives.
“Killing a child” means doing something which directly and causally leads to a child’s death.
Sure. Which is why the claim people actually make (at least when they’re being careful about their words) is not “buying a $2000 camera is killing a child” but “buying a $2000 camera is morally equivalent to killing a child”.
I said upfront that human morality is not coherent.
However I think that the root issue here is whether you can do morality math.
You’re saying you can—take the suffering of one person, multiply it by a thousand and you have a moral force that’s a thousand times greater! And we can conveniently think of it as a number, abstracting away the details.
I’m saying morality math doesn’t work, at least it doesn’t work by normal math rules. “A single death is a tragedy; a million deaths is a statistic”—you may not like the sentiment, but it is a correct description of human morality. Let me illustrate.
First, a simple example of values/preferences math not working (note: it’s not a seed of a new morality math theory, it’s just an example). Imagine yourself as an interior decorator and me as a client.
You: Welcome to Optimal Interior Decorating! How can I help you?
I: I would like to redecorate my flat and would like some help in picking a colour scheme.
You: Very well. What is your name?
I: Lumifer!
You: What is your quest?
I: To find out if strange women lyin’ in ponds distributin’ swords are a proper basis for a system of government!
You: What is your favourite colour?
I: Purple!
You: Excellent. We will paint everything in your flat purple.
I: Errr...
You: Please show me your preferred shade of purple so that we can paint everything in this particular colour and thus maximize your happiness.
And now back to the serious matters of death and dismemberment. You offered me a hypothetical:
Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1⁄5
Let me also suggest one for you.
You’re in a boat, somewhere offshore. Another boat comes by and it’s skippered by Joker, relaxing from his tussles with Batman. He notices you and cries: “Hey! I’ve got an offer for you!” Joker’s offer looks as follows. Some time ago he put a bomb with a timer under a children’s orphanage. He can switch off the bomb with a radio signal, but if he doesn’t, the bomb will go off (say, in a couple of hours) and many dozens of children will be killed and maimed. Joker has also kidnapped a five-year-old girl who, at the moment, is alive and unharmed in the cabin.
Joker says that if you go down into the cabin and personally kill the five-year-old girl with your bare hands—you can strangle her or beat her to death or something else, your choice—he, Joker, will press the button and deactivate the bomb. It will not go off and you will save many, many children.
Now, in this example the morality math is very clear. You need to go down into the cabin and kill that little girl. Shut up, multiply, and kill.
And yet I have doubts about your ability to do that. I consider that (expected) lack of ability to be a very good thing.
Consider a concept such as decency. It’s a silly thing; there is no place for it in the morality math. You’ve got to maximize utility, right? And yet...
I suspect there were people who didn’t like the smell of burning flesh and were hesitant to tie women to stakes on top of firewood. But then they shut up and multiplied by the years of everlasting torment the witch’s soul would suffer, and picked up their torches and pitchforks.
I suspect there were people who didn’t particularly enjoy dragging others to the guillotine or helping arrange an artificial famine to kill off the enemies of the state. But then they shut up and multiplied by the number of poor and downtrodden people in the country, and picked up their knives and guns.
As a contemporary example, I suspect there are people who don’t think it’s a neighbourly thing to scream at pregnant women walking to a Planned Parenthood clinic and shove highly realistic bloody fetuses into their faces. But then they shut up and multiplied by the number of unborn children killed each day, and they picked up their placards and megaphones.
So, no, I don’t think “shut up and multiply” is always good advice. Sometimes it’s appropriate, but at other times it’s a really bad idea with bloody terrible failure modes. Often enough those other times are when people believe that morality math trumps all other considerations. So they shut up, multiply, and kill.
Accounting for possible failure modes and the potential effects of those failure modes is a crucial part of any correctly done “morality math”.
Granted, people can’t really be relied upon to actually do it right, and it may not be a good idea to “shut up and multiply” if you can expect to get it wrong… but then failing to shut up and multiply can also have significant consequences. The worst thing you can do with morality math is to only use it when it seems convenient to you, and ignore it otherwise.
However, none of this talk of failure modes represents a solid counterargument to Singer’s main point. I agree with you that there is no strict moral equivalence to killing a child, but I don’t think it matters. The point still holds that by buying luxury goods you bear moral responsibility for failing to save children who you could (and should) have saved.