I reject “3” (We ought to value both kinds of copies the same way), but I don’t think that rejection is arbitrary at all. Rather, it is based on an important aspect of our moral values called “Separability.” Separability is, in my view, an extremely important moral intuition, but it is not frequently discussed or thought about because we encounter situations where it applies very infrequently. Many Less Wrongers, however, have expressed the intuition of separability when stating that they don’t think non-causally connected parallel universes should affect their behavior.
Separability basically says that how connected someone is to certain events matters morally in certain ways. There is some debate as to whether this principle is a basic moral intuition or whether it can be derived from other intuitions; I am firmly in favor of the former.
That probably sounds rather abstract, so let me give a concrete example: Imagine that the government is considering taking an action that will destroy a unique ecosystem. There are millions of environmentalists who oppose this action, protest against it, and lobby to stop it. Should their preference for the ecosystem to not be destroyed be taken into consideration when calculating the utility of this situation? Have they, in a sense, been harmed if the ecosystem is destroyed? I’d say yes, and I think a lot of people would agree with me.
Now imagine that in a distant galaxy there exist approximately 90 quadrillion emulated alien brains living in a Matrioshka Brain. All these aliens are fervent environmentalists and have a strong preference that no unique ecosystem ever be destroyed. Assume we will never meet these aliens. Should their preference for the ecosystem to not be destroyed be taken into consideration when calculating the utility of this situation? Have they, in a sense, been harmed if the ecosystem is destroyed? I’d say no, even if Omega told me they existed.
What makes these two situations different? I would say that in the first situation the environmentalists possess strong causal connections to the ecosystem in question, while the aliens do not. For this reason the environmentalists’ preferences were morally relevant, while the aliens’ were not.
Separability is really essential for utilitarianism to avoid paralysis. After all, if everyone’s desires count equally when evaluating the morality of situations, regardless of how connected they are to them, then there is no way of knowing whether you are doing right or not. Somewhere in the universe there is doubtless a vast number of people who would prefer you not do whatever it is you are doing.
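To put that a bit more formally (this is just my own sketch of the idea, not a standard formulation): separability restricts whose preferences enter the utility calculation to those agents who are causally connected to the outcome,

$$U_{\text{sep}}(a) = \sum_{i \in C(a)} u_i(a) \qquad \text{rather than} \qquad U_{\text{total}}(a) = \sum_{i} u_i(a),$$

where $C(a)$ is the set of agents with some causal connection to the action $a$. The aliens in the Matrioshka Brain are simply not in $C(a)$, so their preferences never enter the sum.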
So how does this apply to the question of creating copies in my own universe, versus desiring a copy of me in another universe not be destroyed by a quantum grenade?
Well, on the issue of whether or not to create identical copies in my own universe, I would not spend a cent trying to do that. I believe in everything Eliezer wrote in “In Praise of Boredom” and place great value on having new, unique experiences. Creating lockstep copies of me would be counterproductive, to say the least.
However, at first this approach seems to run into trouble in MWI. If there are so many parallel universes it stands to reason that I’ll be duplicating an experience some other me has already had no matter what I do. Fortunately, the Principle of Separability allows me to rescue my values. Since all those other worlds lack any causal connection to me, they are not relevant in determining whether I am living up to the Value of Boredom.
This allows us to explain why I am upset when the grenade is thrown at me. The copy that was killed had no causal connection to me. Nothing I or anyone else did resulted in his creation, and I cannot really interact with him. So when I assess the badness of his death, I do not include my desire to have unique, nonduplicated experiences in my assessment. All that matters is that he was killed.
So rejecting (3) does not make our values arbitrary, not in the slightest. There is an extremely important moral principle behind doing so, a moral principle that is essential to our system of ethics. Namely, the Principle of Separability.
You say that “separability is really essential for utilitarianism to avoid paralysis” but also that it “is not frequently discussed or thought about because we encounter situations where it applies very infrequently.”
I have trouble understanding how both of these can be true. If situations where it applies are very infrequent, how essential can it really be?
To avoid paralysis, utilitarians need some way of resolving intersubjective differences in utility calculation for the same shared world-state. Using “separability” to discount the unknowable utility calculations of unknown Matrioshka Brains is a negligible portion of the work that needs to be done here.
For my own part, I would spend considerably more than a cent to create an identical copy of myself whom I can interact with, because the experience of interacting with an identical but non-colocalized version of myself would be novel and interesting, and also because I suspect that we would both get net value out of the alliance.
Identical copies I can’t interact with directly are less valuable, but I’d still spend a fair amount to create one, because I would expect them to differentially create things in the world I value, just as I do myself.
Identical copies I can’t interact with even indirectly—nothing they do or don’t do will affect my life—I care about much much less, more due to selfishness than any kind of abstract principle of separability. What’s in it for me?
“I have trouble understanding how both of these can be true. If situations where it applies are very infrequent, how essential can it really be?”
What I should have said is “When discussing or thinking about morality we consider situations where it applies very infrequently.” When people think about morality, and posit moral dilemmas, they typically only consider situations where everyone involved is capable of interacting. When people consider the Trolley Problem they only consider the six people on the tracks and the one person with the switch.
I suppose that technically separability applies to every decision we make. For every action we take there is a possibility that someone, somewhere does not approve of our taking it and would stop us if they could. This is especially true if the universe is as vast as we now think it is. So we need separability in order to discount the desires of those extremely causally distant people.
“To avoid paralysis, utilitarians need some way of resolving intersubjective differences in utility calculation for the same shared world-state. Using ‘separability’ to discount the unknowable utility calculations of unknown Matrioshka Brains is a negligible portion of the work that needs to be done here.”
You are certainly right that separability isn’t the only thing utilitarianism needs in order to avoid paralysis, and that there are other issues it needs to resolve before it even gets to the stage where separability is needed. I’m merely saying that, at that particular stage, separability is essential. A lack of separability certainly isn’t the only possible way utilitarianism could be paralyzed, or otherwise run into problems.
“For my own part, I would spend considerably more than a cent to create an identical copy of myself whom I can interact with”
When I refer to identical copies I mean a copy that starts out identical to me and remains identical throughout its entire lifespan, like the copies that exist in parallel universes, or the ones in the matrix scenario Wei Dai describes. You appear to be using “identical” also for copies that start out identical but diverge later and have different experiences.
Like you, I would probably pay to create copies I could interact with, but I’m not sure how enthusiastic about it I would be. This is because I find experiences much more valuable if I can remember them afterwards and compare them to other experiences. If both mes get net value out of the experience, as you expect, then this isn’t a relevant concern. But I certainly wouldn’t consider having 3650 copies of me exist for one day and then be deleted to be equivalent to living an extra 10 years, the way Robin Hanson appears to.
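For what it’s worth, the arithmetic behind that equivalence (as I understand Hanson’s suggestion, not his exact words) just treats copy-days as additive subjective time:

$$3650 \text{ copies} \times 1 \text{ day} = 3650 \text{ days} \approx 10 \text{ years},$$

and my point is precisely that I don’t think experienced time adds up across soon-to-be-deleted copies in that way.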