It’s specified that he was killed painlessly.
It is true, I wasn’t specific enough, but I wanted to emphasize the opinion part; the suffering part was meant to convey his condition in life.
He was, presumably, killed without his consent, which is why the whole affair seems so morally icky from a non-utilitarian perspective.
If your utility function does not penalize doing bad things so long as the net result comes out positive, you are likely to end up in a world full of utility monsters.
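A minimal sketch of that failure mode, with purely made-up numbers: an objective that only maximizes the sum of utilities routes every resource to whoever converts resources into utility most efficiently, and nothing in it penalizes how the total was produced.

```python
# Minimal sketch (hypothetical numbers): a utility function that only
# cares about the net sum will route every resource to the most
# efficient utility-converter, no matter what that does to everyone else.

def total_utility(allocation, efficiency):
    """Sum of each agent's utility: resources * conversion efficiency."""
    return sum(r * e for r, e in zip(allocation, efficiency))

# Four ordinary agents and one "utility monster" that enjoys
# resources 100x more than anyone else.
efficiency = [1.0, 1.0, 1.0, 1.0, 100.0]
resources = 10

# Compare an equal split against giving everything to the monster.
equal = [resources / 5] * 5
monster = [0, 0, 0, 0, resources]

print(total_utility(equal, efficiency))    # 208.0
print(total_utility(monster, efficiency))  # 1000.0
# The sum-only objective prefers the second world; nothing in it
# penalizes how that total was produced.
```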
We live in a world full of utility monsters. We call them humans.
So I am to assume that all the sad old hermits of this world are being systematically chopped up for spare parts granted to deserving and happy young people, while well-meaning utilitarians hide this sad truth from us so that I don’t become upset about the atrocities currently being committed in my name?
We are not even close to being utility monsters, and personally I know very few people whom I would consider actual utilitarians.
No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn’t exist otherwise, but the same cannot be said for battery animals.
But driving this reasoning to its logical conclusion yields a lot of strange results.
The premise is that humans differ from animals in that they know they inflict suffering, are thus able to change that, and according to some ethical systems are obliged to.
Actually, this would be a kind of disadvantage of knowledge. There was a not-so-recent game-theoretic post about situations where, if you know more, you have to choose probabilistically to win on average, whereas those who don’t know will always choose defect and thus reap a higher payoff than you, unless there are too many of them.
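The post isn’t linked here, so the exact game is an assumption on my part, but a hawk-dove (Chicken) setup reproduces the pattern: informed players mix probabilistically and back down against committed fighters, while the committed always-fight players out-earn them until they become too common.

```python
# Sketch of the "ignorance pays in Chicken" effect (my assumed model,
# since the original post isn't linked). Hawk-dove payoffs with
# prize V and fight cost C > V:
#   hawk vs hawk: (V - C) / 2    hawk vs dove: V
#   dove vs hawk: 0              dove vs dove: V / 2
# Informed players mix at the equilibrium rate against each other and
# back down (play dove) when they can tell the opponent always fights.

V, C = 2.0, 10.0
p = V / C  # equilibrium hawk probability for informed-vs-informed play

# Expected payoff of one informed player against another, both mixing at p.
informed_vs_informed = (
    p * p * (V - C) / 2 + p * (1 - p) * V + (1 - p) * (1 - p) * V / 2
)

def informed_payoff(f):
    """f = fraction of committed always-hawk players in the population."""
    return f * 0 + (1 - f) * informed_vs_informed  # yields 0 vs hawks

def committed_payoff(f):
    return f * (V - C) / 2 + (1 - f) * V  # fights everyone

for f in (0.05, 0.1, 0.2, 0.3, 0.5):
    print(f, round(committed_payoff(f), 2), round(informed_payoff(f), 2))
# Committed hawks out-earn the informed mixers at low f, but once they
# are common enough they mostly fight each other and fall behind.
```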
So either:
You need to construct a world without animals, since animals make each other suffer, humans know this, and humans can modify the world to eliminate that suffering; or
Humans could alter themselves so as not to know that they inflict harm (or to consider harm unimportant, or to restrict empathy to humans...), and thus avoid the problem.
The key point, I think, is that a concept resting on some aspect of being human is singled out and taken to its ‘logical conclusion’ out of context, without regard for the fact that this concept is itself an evolved feature.
As there is no intrinsic moral fabric to the universe, we effectively force our evolved values on our environment and make it conform to them.
Insofar as that goes, excessive empathy (which is an aggregate driver behind ethics) is not much different from excessive greed, which also reshapes our environment; only we have already learned that the latter might be a bad idea.
The conclusion is that you also have to balance extreme empathy with reality.
ADDED: Just found this relevant link: http://lesswrong.com/lw/69w/utility_maximization_and_complex_values/
Robert Nozick:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster’s maw, in order to increase total utility.
My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don’t identify with); they act as though the utility of a non-sapient animal is vastly smaller than the utility of a human, and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.
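One way to make that observation sharp (my framing, with made-up weights): under linear aggregation, any positive weight on hamster welfare yields a finite break-even count, so refusing every finite number amounts to giving the animals a weight of zero, which is exactly the sacrificed side of a utility monster.

```python
# Illustrative only: made-up weights, not anyone's measured values.
# Under linear aggregation, any positive weight on hamster welfare
# implies some finite number of hamsters outweighs one child.

child_weight = 1_000_000  # hypothetical utils assigned to a child's life
hamster_weight = 1        # hypothetical utils assigned to a hamster's life

break_even = child_weight / hamster_weight
print(break_even)  # 1000000.0 hamsters: past this count, the sum flips

# Refusing every finite count means hamster_weight is effectively 0
# (the sacrificed side of a utility monster), or the values are not
# being combined by linear summation at all.
```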
That’s the way it looks. And this is probably part of being human.
I’d like to rephrase your answer as follows, to drive home that ethics is mostly driven by empathy:

Humans mostly act as though they are utility monsters with respect to entities they feel no empathy toward; they act as though the utility of such entities is vastly smaller than the utility of those they relate to, and so caring for the latter is always the best option.
In this case, I concur that your argument may be true if you include animals in your utility calculations.
While I do have reservations about causing suffering in humans, I don’t explicitly include animals in my utility calculations. And while I don’t support causing suffering for the sake of suffering, I have no ethical qualms about products made with animal fur, animal testing, or factory farming; so with regard to pigs, cows and chickens, I am a utility monster.