But driving this reasoning to its logical conclusion yields a lot of strange results.
The premise is that humans are different from animals in that they know they inflict suffering and are thus able to change it, and according to some ethical systems are obliged to.
Actually, this would be a kind of disadvantage of knowledge. There was a game-theoretic post a while back about situations where, if you know more, you have to choose probabilistically to win on average, whereas those who don't know will always defect and thus reap a higher benefit than you, unless there are too many of them.
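I can't find the post, so the exact game is an assumption on my part, but the dynamic sounds like Chicken: players who can't (or won't) model the situation always drive straight, while informed players have to yield. A minimal sketch with invented payoffs:

```python
# Hedged sketch: expected payoffs in a Chicken-style game where "ignorant"
# players always defect (drive straight) and "informed" players play it safe
# (swerve). The game choice and all payoff values are illustrative assumptions.

SWERVE_VS_SWERVE = 0     # both yield: nobody gains
SWERVE_VS_STRAIGHT = -1  # yielding to a defector: small loss
STRAIGHT_VS_SWERVE = 1   # defecting against a yielder: gain
CRASH = -10              # two defectors meet: large loss for both

def expected_payoffs(f):
    """Expected payoff per round for each type, given a fraction f of
    always-defect players in a randomly mixing population."""
    informed = (1 - f) * SWERVE_VS_SWERVE + f * SWERVE_VS_STRAIGHT
    ignorant = (1 - f) * STRAIGHT_VS_SWERVE + f * CRASH
    return informed, ignorant

for f in (0.01, 0.05, 0.09, 0.10, 0.20, 0.50):
    informed, ignorant = expected_payoffs(f)
    winner = "ignorant" if ignorant > informed else "informed"
    print(f"f={f:.2f}: informed={informed:+.2f}, ignorant={ignorant:+.2f} -> {winner} does better")
```

With these numbers the always-defect players come out ahead as long as they stay under about 10% of the population; past that, crashing into each other dominates, which matches the "except if they are too many" caveat.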
So either:
- you need to construct a world without animals, since animals inflict suffering on each other, humans know that, and humans can modify the world to get rid of it; or
- humans could alter themselves not to know that they inflict harm (or to consider harm unimportant, or to restrict empathy to humans...) and thereby avoid the problem.
The key point, I think, is that a concept resting on some aspect of being human is singled out and taken to its ‘logical conclusion’ out of context, without regard for the fact that the concept is itself an evolved feature.
Since there is no intrinsic moral fabric to the universe, we effectively force our evolved values on our environment and make it conform to them.
In that respect, excessive empathy (which is an aggregate driver behind ethics) is not much different from excessive greed, which also affects our environment; only with the latter have we already learned that it might be a bad idea.
The conclusion is that extreme empathy, too, has to be balanced against reality.

ADDED: Just found this relevant link: http://lesswrong.com/lw/69w/utility_maximization_and_complex_values/
Robert Nozick:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster’s maw, in order to increase total utility.
My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don’t identify with); they act as though the utility of a non-sapient animal is vastly smaller than the utility of a human, and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.
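To make the asymmetry concrete, here is a minimal sketch of that implicit weighted-sum aggregation; the weights and utilities are of course invented for illustration:

```python
# Hedged sketch: aggregate utility under the implicit moral weights people
# seem to use. All numbers are invented; the point is the structure.

CHILD_UTILITY = 1.0      # utility of one child's life (normalized)
HAMSTER_UTILITY = 1.0    # utility of one hamster's life, before weighting
HAMSTER_WEIGHT = 1e-9    # tiny moral weight implicitly assigned to hamsters

def prefers_hamsters(n, hamster_weight=HAMSTER_WEIGHT):
    """True if saving n hamsters outweighs saving one child
    under a simple weighted-sum aggregation."""
    return n * hamster_weight * HAMSTER_UTILITY > CHILD_UTILITY

# Break-even count: below this, the child always wins.
break_even = CHILD_UTILITY / (HAMSTER_WEIGHT * HAMSTER_UTILITY)
print(f"hamsters needed to outweigh one child: {break_even:.0e}")
print(prefers_hamsters(10**6))   # False: a million hamsters still lose
print(prefers_hamsters(10**10))  # True, but only at an absurd count
```

As the weight goes to zero the break-even count diverges, which is exactly the utility-monster behavior: no finite number of hamsters ever outweighs the human.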
That’s the way it looks. And this is probably part of being human.
I’d like to rephrase your answer as follows, to drive home that ethics is mostly driven by empathy:
Humans mostly act as though they are utility monsters with respect to entities they have empathy with; they act as though the utility of entities they have no empathy toward is vastly smaller than the utility of those they relate to, and so caring for the latter is always the best option.