I haven’t seen any argument for total utility being proportional to the total number of people other than bald assertions. Do you have anything better than that?
I don’t have an absolute binding argument, just some intuitions. Some of these intuitions are:
It feels unfair to value humans differently based on something as arbitrary as the order in which they are presented to me, or the number of other humans they are standing next to.
It seems probable to me that the humans in the group of 1 trillion would want to be treated equally to the humans in the group of 100.
It does not seem like there is anything different about the individual members of a group of 100 humans and a group of a trillion, either physically or mentally. They all still have the same amount of subjective experience, and I have a strong intuition that subjective experience has something very important to do with the value of human life.
It does not feel to me like I become less valuable when there are more other humans around, and it doesn’t seem like there’s anything special enough about me that this cannot be generalised.
It feels elegant as a solution. Why should they become less valuable? Why not more valuable? Perhaps oscillating between two different values depending on parity? Perhaps some other even weirder function? A constant function at least has a certain symmetry to it (I sketch these options in symbols below).
These are just intuitions, they are convincing to me, but not to all possible minds. Are any of them convincing to you?
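To spell that out (a rough sketch in my own notation, nothing either of us has formally committed to): write v for the value I place on one human's life, and n for the number of humans. The candidates on the table look something like

U_constant(n) = v + v + ... + v = n * v
(every person contributes the same amount, regardless of ordering or group size)

U_discounted(n) = v + v*r + v*r^2 + ... + v*r^(n-1) = v * (1 - r^n) / (1 - r), for some 0 < r < 1
(each additional person counts less than the one before; the total never exceeds v / (1 - r))

U_parity(n) = a if n is even, b if n is odd
(the oscillating option)

The constant version is the only one of these that is indifferent to the order people are presented in and to how many others they are standing next to, which is the symmetry I am gesturing at.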
It’s your choice whether you count those simulations as human or not. Be sure to be aware of having the choice, and to take responsibility for the choice you make.
Is it also my choice whether I count black people or women as human?
In a trivial sense it is my choice, in that the laws of rationality do not forbid me from having any set of values I want. In a more realistic sense, it is not my choice: one option is obviously morally repugnant (to me, at any rate), and I do not want it, I do not want to want it, I do not want to want to want it, and so on ad infinitum (my values are in a state of reflective equilibrium on the question).
You’re human and you’re saying that humanity is not a priori valuable? What?
Humans are valuable. Humanity is valuable because it consists of humans, and has the capacity to create more. There is no explicit term in my utility function for ‘humanity’ as distinct from the humans that make it up.
Odd, my intuitions are different. Taking the first example:
It feels unfair to value humans differently based on something as arbitrary as the order in which they are presented to me, or the number of other humans they are standing next to.
If I’m doing something special nobody else is doing and it needs to be done, then I’d better damn well get it done. If I’m standing next to a bunch of other humans doing the same thing, then I’m free! I can leave and nothing especially important happens. I am much less important to the entire enterprise in that case.
The instrumental value of a human may vary from one human to the next. It doesn’t seem to me like this should always go down, though: for instance, if you have roughly one doctor for every 200 people in your group, then each doctor is roughly as instrumentally valuable whether the total number of people is 1 million or 1 billion.
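To put rough numbers on that (made-up round figures, purely to illustrate the ratio point):

1,000,000 people / 200 per doctor ≈ 5,000 doctors, each covering about 200 people
1,000,000,000 people / 200 per doctor ≈ 5,000,000 doctors, each still covering about 200 people

The per-doctor instrumental value tracks the ratio of doctors to people, not the total head-count.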
But this is all beside the point, since I personally assign terminal value to humans, independent of any practical use they have (you can’t value everything only instrumentally; trying to do so leads to an infinite regress). I am also inclined to say that, except in edge cases, this terminal value is significantly more important than any instrumental value a human may offer.
Coming back to the original discussion, we see the following:
The simulations are doing no harm or good to anyone, so their only value is terminal.
The humans on earth are causing untold pain to huge numbers of sentient beings simply by breathing, and may also be doing other things. They have a terminal value, plus a huge negative instrumental value, plus a variety of other positive and negative instrumental values, which average out to not very much.
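Schematically (my own bookkeeping, not anything rigorous):

value(simulated human) ≈ V_terminal + 0
value(human on earth) ≈ V_terminal + (large negative term from the suffering caused) + (assorted smaller positive and negative terms that roughly cancel)

which is why, on my accounting, the comparison mostly comes down to the terminal values.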
Yup, you really are on the pro-mass-suicide side of the issue. Whatever. Be sure to pay attention to the proof about bounded utility and figure out which of the premises you disagree with.
Be sure to watch the ongoing conversation at http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/ because there is a plausible axiomatic definition of probability and utility there, from which one can apparently prove that utilities are bounded.
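If I'm reading that thread right, the claim has roughly this shape (my paraphrase of the conclusion, not a statement of the axioms themselves): any utility function u consistent with those axioms satisfies

there is some finite bound M with |u(x)| ≤ M for every outcome x,

which in particular rules out ‘total utility = n * v for all n’ for any fixed per-person value v ≠ 0, since n * v eventually exceeds any such bound.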
For the record, allow me to say that under the vast majority of possible circumstances I am strongly anti-mass-suicide.
To counter your comment, I accuse you of being pro-torture ;)
Well, it’s good to hear that neither of us is against anything, and that we are both fundamentally positive, upbeat people. :-)
Sounds like a set-up for a debate: “Would you like to take the pro-mass-suicide point of view, or the pro-torture point of view?”