I have this idea in my mind that my value function differs significantly from Eliezer's. In particular, I cannot agree to blowing up Huygens in the Baby-Eater scenario he presented.
To summarize briefly: he gives a scenario which includes the following problem:
Some species A in the universe has as a core value the creation of unspeakable pain in its newborns. Some species B has as a core value the removal of all pain from the universe. And there is humanity.
In particular, there are (besides others) two possible actions:
(1): Enable B to kill off all of A, without touching humanity, but kill some humans in the process.
(2): Block all access between all three species, leading to a continuation of the acts of A, but kill significantly fewer humans in the process.
Eliezer claims action (1) is superior to (2), and I cannot agree.
First, a reason why my intuition tells me that Eliezer got it wrong:
Consider the situation with wild animals, say in Africa: lions killing gazelles by the thousands. And we are not talking about clean, nice killing; we are talking about bites taken out of living animals, about slow, agonizing death. And we can be pretty certain about the qualia of that experience, just by extrapolating from brain similarity and our own small painful experiences. Yet I don’t see anybody trying to stop the lions, and I think that is right.
For me, the only argument for killing off species A goes like this: “I do not like pain” → “Pain has negative utility” → “Incredible pain has incredibly negative utility” → “Incredible pain needs to be removed from the universe”.
That sounds wrong to me at the last step. Namely, I feel that our value function ought to (and actually does) include a term which discounts things happening far away from us. In particular, I think that the value of things happening somewhere in the universe which are (by the scenario) guaranteed not to have any effect on me is exactly zero.
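To make that concrete, here is a toy sketch of the kind of value function I have in mind; the discount rate and the “causal distance” measure are just illustrative assumptions of mine, not anything from the scenario:

    import math

    # Toy value function: the (dis)utility of an event is discounted by how
    # causally remote it is from me, and drops to zero once the event is
    # guaranteed to have no effect on me at all.
    def discounted_value(utility, causal_distance, rate=0.1, horizon=100.0):
        if causal_distance >= horizon:   # guaranteed no effect on me
            return 0.0
        return utility * math.exp(-rate * causal_distance)

    # Nearby suffering still counts almost fully...
    print(discounted_value(-1000, causal_distance=1))    # about -904.8
    # ...but suffering beyond my causal horizon counts for nothing.
    print(discounted_value(-1000, causal_distance=500))  # 0.0

Under such a function, suffering that is guaranteed to stay causally disconnected from me, like that of species A in the scenario, contributes exactly zero, and remote suffering like that of the gazelles contributes very little.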
But more importantly, it sounds wrong at the second-to-last step, the claim that incredible pain has incredibly negative utility. Why do we dislike our own pain? Because it is the hardware response closing the feedback loop for our brain when we do something stupid. It’s evolution’s way of telling us “don’t do that”.
Why do we dislike pain in other people? Out of sympathy, i.e. because of the reduced efficiency of said people in our world.
Do I feel more sympathy towards mammals than towards insects? Yes. Do I feel more sympathy towards apes than towards other mammals? Again, yes. So the trend seems to indicate that I feel sympathy towards complex thinking things.
Maybe that’s only because I am a complex thinking thing, but then again, maybe I just value possible computation. Computation generally leads to knowledge, and knowledge leads to more action possibilities. And more diversity in the things carrying out the computation will probably lead to more diversity in knowledge, which I consider A Good Thing. Hence I opt for saving species A, thus allowing a lot more pain, but also some more computation.
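As a toy illustration (all numbers below are invented by me purely to show the structure of the trade-off, not to claim any actual magnitudes), here is how a value function that discounts remote pain and rewards diversity of computation can come out the other way from a pain-centric one:

    # Toy comparison of the two actions under two value functions.
    # All numbers are invented purely for illustration; only the structure matters.
    actions = {
        "enable B to kill off A": {"human_deaths": 15_000_000,
                                   "remote_pain": 0.0,       # A's pain removed
                                   "computing_species": 2},  # A is gone
        "block all access":       {"human_deaths": 500,
                                   "remote_pain": 1.0,       # A continues as before
                                   "computing_species": 3},  # all three survive
    }

    def pain_centric_value(a):
        # Remote pain counts at full (huge) weight.
        return -a["human_deaths"] - 1e9 * a["remote_pain"]

    def my_value(a):
        # Remote, causally disconnected pain is discounted to zero;
        # diversity of computing species gets a large bonus.
        return -a["human_deaths"] + 1e8 * a["computing_species"]

    for name, value_fn in [("pain-centric", pain_centric_value), ("mine", my_value)]:
        best = max(actions, key=lambda k: value_fn(actions[k]))
        print(f"{name} value function prefers: {best}")

The point is not the numbers but that once remote pain is discounted and diversity of computation is valued, the ranking of the two actions flips.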
As you can probably tell, my line of reasoning is not quite clear yet, but I feel that I have a term in my value function here that some other people seem to lack, and I wonder whether that is because of a misunderstanding or because of genuinely different value functions.
(1): Enable B to kill off all of A, without touching humanity, but kill some humans in the process. (2): Block all access between all three species, leading to a continuation of the acts of A, but kill significantly fewer humans in the process.
I seem to recall that there was no genocide involved; B intended to alter A such that they would no longer inflict pain on their children.
The options were:
B modifies both A and humanity to eliminate pain; also modifies all three races to include parts of what the other races value.
Central star is destroyed, the crew dies; all three species continue as before.
Human-colonized star is destroyed; lots of humans die, but humans otherwise remain as before; B is assumed to modify A as planned above to eliminate pain.
Does Eliezer’s position depend on the fact that group A is using resources that could otherwise be used by group B, or by humans?
Group B’s “eliminate pain” morality itself has mind-bogglingly awful consequences if you think it through.