Normally, I use the example of a man who is the sole provider for an arbitrarily large family murdering an old homeless man. Utilitarianism says he should go free. The murderer’s family, of size X, will each experience disutility from his imprisonment. Call that Y. The homeless man, literally no one will miss; there are no family members to gain utility from exacting justice. Therefore, since X*Y > 0, the murderer should go back to providing for his family. I do not believe any rational person would consider that just, moral, or even reasonable.
Err...effectively legalizing murder of large classes of the population would tend to increase the murder rate, costing far more lives in aggregate, setting aside the dire consequences on social order and cooperation. You should use an example where the repellent recommendation actually increases rather than decreases happiness/welfare.
Well, I could qualify my example, saying surveillance ensures only people who provide zero utility are allowed to be murdered, but as I said, the article makes my point much better, even if it doesn’t mean to. A single speck of dust, even an annoying and slightly painful one, in the eyes of X people NEVER adds up to 50 years of torture for an individual. It doesn’t matter how large you make X, 7 billion, a googolplex, or 13^^^^^^^^41. It’s irrelevant.
Imagine that you find yourself visiting a hypothetical culture that acknowledges two importantly distinct classes of people: masters and slaves. By cultural convention, slaves are understood to have effectively no moral weight; causing their suffering, death, injury, etc. is simply a property crime, analogous to vandalism. Slaves and masters are distinguished solely by a visible heritable trait that you don’t consider in any way relevant to their moral weight as people.
Shortly after your arrival, a thousand slaves are rounded up and killed. You, as a properly emotional moral thinker, presumably express your dismay at this, and the natives explain that you needn’t worry; it was just a market correction and the economics of the situation are such that the masters are better off now. You explain in turn that your dismay is not economic in nature; it’s because those slaves have moral weight.
They look at you, puzzled.
How might you go about explaining to them that they’re wrong, and slaves really do have moral weight?
Some time later, you return home, and find yourself entertaining a visitor from another realm who is horrified by the discovery that a million old automobiles have recently been destroyed. You explain that it’s OK, the materials are being recycled to make better products, and he explains in turn that his dismay is because automobiles have moral weight.
How might you go about explaining to him that he’s wrong, and cars really don’t have moral weight?
Yup. But would they argue as Jagan did that “rationality does not apply to moral arguments. Morality is an emotional response by its very nature”? I’m specifically interested in Jagan’s answers to my questions, given that assertion.
I could qualify my example, saying surveillance ensures only people who provide zero utility are allowed to be murdered,
If some people’s lives are worth zero utility, then by definition they are worthless. That’s what “zero utility” means. Did you mean something else? Because it seems to me that nobody is worthless in real life, and that’s why your example doesn’t work.
A single speck of dust, even an annoying and slightly painful one, in the eyes of X people NEVER adds up to 50 years of torture for an individual. It doesn’t matter how large you make X, 7 billion, a googolplex, or 13^^^^^^^^41. It’s irrelevant.
And you judge it irrelevant based on what? Scope insensitivity is a known bias in humans, so “instinct” is reliably going to go wrong in this case without mindhacking. Two murders are worse than one murder; two groups of people with dust specks in their eyes are worse than one such group. At what point does this stop being true?
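The disagreement above is really about the aggregation rule. Here is a minimal sketch, assuming (as the additive-utilitarian side does) that per-person disutilities simply sum linearly; the specific numbers are invented placeholders, not claims about real utility values:

```python
# Toy model of the specks-vs-torture aggregation argument.
# All magnitudes below are hypothetical placeholders chosen for illustration.

DUST_SPECK_DISUTILITY = 1e-9   # assumed tiny harm per affected person
TORTURE_DISUTILITY = 1e12      # assumed harm of one person tortured 50 years

def total_speck_disutility(population: int) -> float:
    """Under strict additivity, total harm scales linearly with population."""
    return DUST_SPECK_DISUTILITY * population

# For any positive per-person harm, a large enough X makes the sum dominate:
assert total_speck_disutility(10**25) > TORTURE_DISUTILITY
# ...while a merely planet-sized X does not, under these placeholder numbers:
assert total_speck_disutility(7_000_000_000) < TORTURE_DISUTILITY
```

Under strict additivity there is always some X at which the specks outweigh the torture; the actual dispute is over whether linear summation is the right aggregation rule in the first place.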
“You might have been a slave” is imaginable in a way that “you might have been an automobile” is not. See Rawls and Kant.