I said upfront that human morality is not coherent.
However, I think the root issue here is whether you can do morality math.
You’re saying you can—take the suffering of one person, multiply it by a thousand and you have a moral force that’s a thousand times greater! And we can conveniently think of it as a number, abstracting away the details.
I’m saying morality math doesn’t work, or at least it doesn’t work by normal math rules. “A single death is a tragedy; a million deaths is a statistic”—you may not like the sentiment, but it is a correct description of human morality. Let me illustrate.
First, a simple example of values/preferences math not working (note: it’s not a seed of a new morality math theory, it’s just an example). Imagine yourself as an interior decorator and me as a client.
You: Welcome to Optimal Interior Decorating! How can I help you?
I: I would like to redecorate my flat and would like some help in picking a colour scheme.
You: Very well. What is your name?
I: Lumifer!
You: What is your quest?
I: To find out if strange women lyin’ in ponds distributin’ swords are a proper basis for a system of government!
You: What is your favourite colour?
I: Purple!
You: Excellent. We will paint everything in your flat purple.
I: Errr...
You: Please show me your preferred shade of purple so that we can paint everything in this particular colour and thus maximize your happiness.
And now back to the serious matters of death and dismemberment. You offered me a hypothetical:
Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1⁄5
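(For reference, here is the bare expected-value arithmetic that “shut up and multiply” would apply to that hypothetical; a minimal sketch, using only the numbers from your framing.)

```python
# Naive "shut up and multiply" arithmetic for the quoted hypothetical.
# Numbers come straight from the hypothetical: 10 certain deaths versus
# 100 children each dying with probability 1/5.
certain_deaths = 10
expected_deaths = 100 * (1 / 5)  # expected value: 20.0

print(f"Action one: {certain_deaths} deaths for certain")
print(f"Action two: {expected_deaths} deaths in expectation")
# The bare expected-value comparison favours action one (10 < 20);
# this is exactly the kind of calculation whose authority I'm questioning.
```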
Let me also suggest one for you.
You’re in a boat, somewhere offshore. Another boat comes by, and it’s skippered by Joker, relaxing from his tussles with Batman. He notices you and cries: “Hey! I’ve got an offer for you!” Joker’s offer is as follows. Some time ago he put a bomb with a timer under a children’s orphanage. He can switch off the bomb with a radio signal, but if he doesn’t, the bomb will go off (say, in a couple of hours) and many dozens of children will be killed and maimed. Joker has also kidnapped a five-year-old girl who, at the moment, is alive and unharmed in the cabin.
Joker says that if you go down into the cabin and personally kill the five-year-old girl with your bare hands—you can strangle her or beat her to death or something else, your choice—he, Joker, will press the button and deactivate the bomb. It will not go off and you will save many, many children.
Now, in this example the morality math is very clear. You need to go down into the cabin and kill that little girl. Shut up, multiply, and kill.
And yet I have doubts about your ability to do that. I consider that (expected) lack of ability to be a very good thing.
Consider a concept such as decency. It’s a silly thing; there is no place for it in the morality math. You’ve got to maximize utility, right? And yet...
I suspect there were people who didn’t like the smell of burning flesh and were hesitant to tie women to stakes on top of firewood. But then they shut up and multiplied by the years of everlasting torment the witch’s soul would suffer, and picked up their torches and pitchforks.
I suspect there were people who didn’t particularly enjoy dragging others to the guillotine or helping arrange an artificial famine to kill off the enemies of the state. But then they shut up and multiplied by the number of poor and downtrodden people in the country, and picked up their knives and guns.
For a contemporary example, I suspect there are people who don’t think it’s a neighbourly thing to scream at pregnant women walking to a Planned Parenthood clinic and shove highly realistic bloody fetuses into their faces. But then they shut up and multiplied by the number of unborn children killed each day, and they picked up their placards and megaphones.
So, no, I don’t think “shut up and multiply” is always good advice. Sometimes it’s appropriate, but at other times it’s a really bad idea with bloody terrible failure modes. Often enough, those other times are when people believe that morality math trumps all other considerations. So they shut up, multiply, and kill.
Accounting for possible failure modes and the potential effects of those failure modes is a crucial part of any correctly done “morality math”.
Granted, people can’t really be relied upon to actually do it right, and it may not be a good idea to “shut up and multiply” if you can expect to get it wrong… but then failing to shut up and multiply can also have significant consequences. The worst thing you can do with morality math is to only use it when it seems convenient to you, and ignore it otherwise.
However, none of this talk of failure modes represents a solid counterargument to Singer’s main point. I agree with you that there is no strict moral equivalence to killing a child, but I don’t think it matters. The point still holds that by buying luxury goods you bear moral responsibility for failing to save children who you could (and should) have saved.