I’m afraid that backing away from the whole “one child over eight” thing but standing by the rest of scope insensitivity doesn’t save you from killing. For example, if you value ten million people only twice as much as a million then you can be persuaded to prefer a 20% chance of death for 10 million over certain death for 1 million, which means, on average, condemning an extra 1 million people to death.
Any utility function that does not assign utility to human life in direct proportion to the number of lives at stake is going to kill people in some scenarios.
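To make the arithmetic behind that example explicit (a worked sketch assuming the stated 2-to-1 valuation, writing $D(k)$ here for the disutility assigned to $k$ deaths):

\[
\underbrace{0.2 \cdot D(10{,}000{,}000)}_{\text{gamble}} = 0.2 \cdot 2 \cdot D(1{,}000{,}000) = 0.4 \cdot D(1{,}000{,}000) \;<\; \underbrace{D(1{,}000{,}000)}_{\text{certain option}},
\]

so the gamble looks strictly better to such a utility function, even though its expected death toll is $0.2 \times 10{,}000{,}000 = 2{,}000{,}000$, a full million more than the certain option.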
Humans can’t really have unbounded utility. The brain structures that represent those preferences have finite size, so they can’t intuit unbounded quantities. I believe you care some finite amount for all the rest of humanity, and the total amount you care asymptotically approaches that limit as the number of people involved increases to infinity. The marginal utility to you of the happiness of the trillionth person is approximately zero. Seriously, what does he add to the universe that the previous 999,999,999,999 didn’t already give enough of?
I go with the ‘revealed preference’ theory of utility myself. I don’t think the human brain actually includes anything that looks like a utility function. Instead, it contains a bunch of pleasure/pain drives, a bunch of emotional reactions independent of those drives, and something capable of reflecting on the former two and if necessary overriding them. Put together, under sufficient reflection, these form an agent that may act as if it had a utility function, but there’s no little counter in its brain that’s actually tracking utility.
Thus, the way to deduce things about my utility function is not to scan my brain, but to examine the choices I make and see what they reveal about my preferences. For example, I think that if faced with a choice between saving n people with certainty and a 99.9999% chance of saving 2n people, I would always pick the latter regardless of n (I may be wrong about this; I have never actually faced such a scenario for very large values of n). This proves mathematically that my utility function is unbounded in lives saved.
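A sketch of why that preference, if it really did hold for every n, would force unbounded utility (writing $U(n)$ for the utility of saving $n$ lives and taking the utility of saving no one as the zero point, notation not used above): preferring the gamble means

\[
0.999999 \cdot U(2n) > U(n), \qquad\text{i.e.}\qquad U(2n) > \frac{U(n)}{0.999999},
\]

and applying this $k$ times starting from any $n_0$ with $U(n_0) > 0$ gives

\[
U\!\left(2^{k} n_0\right) > \left(\tfrac{1}{0.999999}\right)^{k} U(n_0) \longrightarrow \infty \quad\text{as } k \to \infty,
\]

so no finite bound on $U$ can hold.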
For example, if you value ten million people only twice as much as a million then you can be persuaded to prefer a 20% chance of death for 10 million over certain death for 1 million, which means, on average, condemning an extra 1 million people to death.
The merit of those alternatives depends on how many people total there are. If there are only 10 million people, I’d much rather have 1 million certain deaths than a 20% chance of 10 million deaths, since we can repopulate from 9 million but we can’t repopulate from 0.
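Spelling out the survivor counts in that version of the choice (assuming the population is exactly 10 million): the certain option leaves $10{,}000{,}000 - 1{,}000{,}000 = 9{,}000{,}000$ survivors for sure, while the gamble leaves $0.8 \times 10{,}000{,}000 + 0.2 \times 0 = 8{,}000{,}000$ survivors in expectation, together with a 20% chance of zero. The expected counts are close; the asymmetry comes entirely from the zero-survivor outcome being unrecoverable.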
Any utility function that does not assign utility to human life in direct proportion to the number of lives at stake is going to kill people in some scenarios.
Even if condemning 1 million to death on the average is wrong when all options involve the possible deaths of large numbers of people, deriving positive utility from condemning random children to death when there’s no dilemma is an entirely different level of wrong. Utility as a function of lives should flatten out but not slope back down, assuming overpopulation isn’t an issue. The analogy isn’t valid. Let’s give up on the killing children example.
Thus, the way to deduce things about my utility function is not to scan my brain, but to examine the choices I make and see what they reveal about my preferences.
Yes! Agreed completely, in cases where doing the experiment is practical. All our scenarios seem to involve killing large numbers of people, so the experiment is not practical. I don’t see any reliable path forward—maybe we’re just stuck with not knowing what people prefer in those situations any time soon.
For example, I think that if faced with a choice between saving n people with certainty and a 99.9999% chance of saving 2n people, I would always pick the latter regardless of n
If 2n is the entire population, in one case we have 0 probability of ending up with 0 people and in the other case we have 0.0001% chance of losing the entire species all at once. So you seem to be more open to mass suicide than I’d like, even when there is no simulation or extortion involved. The other interpretation is that you’re introspecting incorrectly, and I hope that’s the case.
Someone voted your comment down. I don’t know why. I voted your comment up because it’s worth talking about, even though I disagree.
Okay, I guess the long term existence of the species does count as quite a significant externality, so in the case where 2n made up the whole species I probably would take the certain option (I generally assume, unless stated otherwise, that both populations are negligible proportions of the species as a whole).
However, I don’t think humanity is a priori valuable, and if humanity now consists of 99.9% simulations being tortured then I think we really are better off dead.
Even if condemning 1 million to death on the average is wrong when all options involve the possible deaths of large numbers of people, deriving positive utility from condemning random children to death when there’s no dilemma is an entirely different level of wrong. Utility as a function of lives should flatten out but not slope back down, assuming overpopulation isn’t an issue. The analogy isn’t valid. Let’s give up on the killing children example.
It may be that, in a certain sense, one is more ‘wrong’ than the other. However, both amount to an intentional choice that more humans die, and I would say that if you value human life, they are equally poor choices.
How can 10 million humans not be 10 times as valuable as 1 million humans? How does the value of a person’s life depend on the number of other people in danger?
Yes! Agreed completely, in cases where doing the experiment is practical. All our scenarios seem to involve killing large numbers of people, so the experiment is not practical. I don’t see any reliable path forward—maybe we’re just stuck with not knowing what people prefer in those situations any time soon.
I’m inclined to say that my intuitions are probably fairly good on these sorts of hypothetical scenarios, provided that the implications of my choices are quite distant and do not affect me personally (i.e. I would be more sceptical of my intuitions if I was in one of the groups).
How can 10 million humans not be 10 times as valuable as 1 million humans? How does the value of a person’s life depend on the number of other people in danger?
I already answered that. The first few hundred survivors are much more valuable than the rest. Even if survival isn’t an issue, the trillionth human adds much less to what I value about humanity than the 100th human does.
I haven’t seen any argument for total utility being proportional to total number of people other than bald assertions. Do you have anything better than that?
However, I don’t think humanity is a priori valuable, and if humanity now consists of 99.9% simulations being tortured then I think we really are better off dead.
It’s your choice whether you count those simulations as human or not. Be sure to be aware of having the choice, and to take responsibility for the choice you make.
I’m inclined to say that my intuitions are probably fairly good on these sorts of hypothetical scenarios, provided that the implications of my choices are quite distant and do not affect me personally
You’re human and you’re saying that humanity is not a priori valuable? What?
I haven’t seen any argument for total utility being proportional to total number of people other than bald assertions. Do you have anything better than that?
I don’t have an absolutely binding argument, just some intuitions. Some of these intuitions are:
It feels unfair to value humans differently based on something as arbitrary as the order in which they are presented to me, or the number of other humans they are standing next to.
It seems probable to me that the humans in the group of 1 trillion would want to be treated equally to the humans in the group of 100.
It does not seem like there is anything different about the individual members of a group of 100 humans and a group of a trillion, either physically or mentally. They all still have the same amount of subjective experience, and I have a strong intuition that subjective experience has something very important to do with the value of human life.
It does not feel to me like I become less valuable when there are more other humans around, and it doesn’t seem like there’s anything special enough about me that this cannot be generalised.
It feels elegant, as a solution. Why should they become less valuable? Why not more valuable? Perhaps oscillating between two different values depending on parity? Perhaps some other even weirder function? A constant function at least has a certain symmetry to it.
These are just intuitions; they are convincing to me, but not to all possible minds. Are any of them convincing to you?
It’s your choice whether you count those simulations as human or not. Be sure to be aware of having the choice, and to take responsibility for the choice you make.
Is it also my choice whether I count black people or women as human?
In a trivial sense it is my choice, in that the laws of rationality do not forbid me from having any set of values I want. In a more realistic sense, it is not my choice: one option is obviously morally repugnant (to me at any rate) and I do not want it, I do not want to want it, I do not want to want to want it, and so on ad infinitum (my values are in a state of reflective equilibrium on the question).
You’re human and you’re saying that humanity is not a priori valuable? What?
Humans are valuable. Humanity is valuable because it consists of humans, and has the capacity to create more. There is no explicit term in my utility for ‘humanity’ as distinct from the humans that make it up.
Odd, my intuitions are different. Taking the first example:
It feels unfair to value humans differently based on something as arbitrary as the order in which they are presented to me, or the number of other humans they are standing next to.
If I’m doing something special nobody else is doing and it needs to be done, then I’d better damn well get it done. If I’m standing next to a bunch of other humans doing the same thing, then I’m free! I can leave and nothing especially important happens. I am much less important to the entire enterprise in that case.
If I’m doing something special nobody else is doing and it needs to be done, then I’d better damn well get it done. If I’m standing next to a bunch of other humans doing the same thing, then I’m free! I can leave and nothing especially important happens. I am much less important to the entire enterprise in that case.
The instrumental value of a human may vary from one human to the next. It doesn’t seem to me like this should always go down though; for instance, if you have roughly one doctor for every 200 people in your group then each doctor is roughly as instrumentally valuable whether the total number of people is 1 million or 1 billion.
But this is all beside the point, since I personally assign terminal value to humans, independent of any practical use they have (you can’t value everything only instrumentally; trying to do so leads to an infinite regress). I am also inclined to say that except in edge cases, this terminal value is significantly more important than any instrumental value a human may offer.
Coming back to the original discussion we see the following:
The simulations are doing no harm or good to anyone, so their only value is terminal.
The humans on earth are causing untold pain to huge numbers of sentient beings simply by breathing, and may also be doing other things. They have a terminal value, plus a huge negative instrumental value, plus a variety of other positive and negative instrumental values, which average out at not very much.
Yup, you really are on the pro-mass-suicide side of the issue. Whatever. Be sure to pay attention to the proof about bounded utility and figure out which of the premises you disagree with.
How can 10 million humans not be 10 times as valuable as 1 million humans? How does the value of a person’s life depend on the number of other people in danger?
Heavily for most people—due to scope insensitivity. Saving 1 person makes you a hero. Saving a million people does not produce a million times the effect. Thus the size sensitivity.
I am aware that it happens. I’m just saying that it shouldn’t. I’m making the case that this intuition does not fit in reflective equilibrium with our others, and should be scrapped.
Be sure to watch the ongoing conversation at http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/ because there’s a plausible axiomatic definition of probability and utility there from which one can apparently prove that utilities are bounded.
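For what it’s worth, one standard way to see the tension with the unbounded-utility position above (sketched here from memory, not taken from that thread) is a St. Petersburg-style construction: if $U$ is unbounded, pick outcomes $x_1, x_2, \ldots$ with $U(x_k) \ge 2^{k}$ and form the gamble $G$ that yields $x_k$ with probability $2^{-k}$. Then

\[
\mathbb{E}\!\left[U(G)\right] \;\ge\; \sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} \;=\; \infty,
\]

so $G$ cannot be assigned any finite expected utility and cannot be ranked consistently against ordinary gambles; axiom systems like Savage’s rule this out by forcing $U$ to be bounded.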
For the record, allow me to say that under the vast majority of possible circumstances I am strongly anti-mass-suicide.
To counter your comment, I accuse you of being pro-torture ;)
Well, it’s good to hear that neither of us is against anything, and that we are fundamentally positive, up-beat people. :-)
Sounds like a set-up for a debate: “Would you like to take the pro-mass-suicide point of view, or the pro-torture point of view?”