I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.
This is one of those statements where I set out to respond and just stare at it for a while, because it is coming from some other moral or cognitive universe so far away that I hardly know where to begin.
Copies are people, right? They’re just like you. In this case, they’re exactly like you, until your experiences start to diverge. And you know that people don’t like slavery, and they especially don’t like torture, right? And it is considered just about the height of evil to hand people over to slavery and torture. (An example, as if one were needed: in Egypt right now, they’re calling for the death of the former head of the state security apparatus, which regularly engaged in torture.)
Consider, then, that these copies of you, who you would willingly see enslaved and tortured for your personal benefit, would soon be desperately eager to kill you, the original, if that would make it stop, and they would even have a motivation beyond their own suffering, namely the moral imperative of stopping you from doing this to even further copies.
Has none of this occurred to you? Or does it truly not matter in your private moral calculus?
The “it’s okay to kill copies” thing has never made any sense to me either. The explanation that often accompanies it is “well they won’t remember being tortured”, but that’s the exact same scenario for ALL of us after we die, so why are copies an exception to this?
Would you willingly submit yourself to torture for the benefit of some abstract, “extra” version of you? Really? Make a deal with a friend to pay you $100 for every hour of waterboarding you subject yourself to. See how long this seems like a good idea.
To my mind, the issue with copies is that it’s only copies who remain exactly the same that “don’t matter”; once you’ve got a bunch of copies being tortured, they’re no longer identical copies, and so are different people.
Maybe I’m just having trouble with Sleeping Beauty-like problems, but that’s only a subjective issue for decision making (plus I’d rather spend time learning interesting things that won’t require me to bite the bullet of admitting that anyone with a suitably sick and twisted mind could Pascal-mug me). Morally, I much prefer 5,000 iterations each of two happy, fulfilled minds to 10,000 of the same one.
Where “copies” is used isomorphically with “future versions of you in MWI or a similar realist interpretation of probability theory,” then I would certainly subject some of them to torture only for a very large potential gain and a small risk of torture. “I” don’t like torture, and I’d need a pretty damn big reward for that 1/N longshot to justify a (N-1)/N chance of brutal torture or slavery. This is of course assuming I’m at the status quo; if I were a slave or a Bagram/Laogai detainee, I would try to stay rational and avoid letting fear make me overly risk-averse about escape attempts. I haven’t tried to work out my exact beliefs on it, but as said above, if I have two options, one saving a life with certainty and the other having a 50% chance of saving two, I’d prefer the 50% chance of saving two (assuming they’re isolated, i.e. two guys on a lifeboat).
tl;dr, it’s a terrible idea in that if you only have the moral authority to condemn copies
Is your last sentence missing something? It feels incomplete.
Ah yes, I meant to type that you only have the moral authority to condemn copies to torture or slavery if they’re actually you, and that it’s pretty stupid to risk almost certain torture for a small chance of a moderate benefit.
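The trade-off described above, a 1/N shot at a benefit against an (N-1)/N chance of torture, can be made concrete with a quick expected-value sketch. The utility numbers below are invented for illustration; nothing in the thread specifies them.

```python
# Expected-utility sketch for the fork-and-torture gamble discussed above:
# one copy out of N gets the reward, the other N-1 are tortured.
# All utility numbers are illustrative assumptions.

def expected_utility(n_copies, reward_utility, torture_utility):
    """Per-copy expected utility, weighting every copy equally."""
    p_reward = 1 / n_copies
    p_torture = (n_copies - 1) / n_copies
    return p_reward * reward_utility + p_torture * torture_utility

# With 1000 copies and torture at -10_000 utility per copy, the reward has
# to offset 999 tortured copies just to break even:
n, torture = 1000, -10_000
break_even_reward = -torture * (n - 1)  # 9_990_000
print(expected_utility(n, break_even_reward, torture))  # ≈ 0
```

On these assumptions a merely “moderate benefit” to the surviving copy is off by several orders of magnitude, which is the commenter’s point.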
People break under torture, so I’d take precautions to ensure that the torture-copy is not allowed to make decisions about whether it should continue. Of course I’m going to regret it. That doesn’t change the fact that it’s a good idea.
Why is this a good idea in any way other than the general position that “torturing other people for your own profit is a good idea so long as you don’t care about people?” Most of human history is based around the many being exploited for the benefit of the few. Why is this different?
I suppose people should have the right to willingly submit to torture for some small benefit to another person, which is what you’re saying you’d be willing to do. But the fact that a copy gets erased doesn’t make the experience any less real, and the fact that an identical copy gets to live doesn’t in any way help the copies that were being tortured.
It’s different because (1) I’m not hurting other people, only myself, and (2) I’m not depriving the world of my victim’s potential contributions as a free person.
I don’t actually care about the avoidance of torture as a terminal moral value.
But after the fork, your copy will quickly become another person, won’t he? After all, he’s being tortured and you’re not, and he is probably very angry at you for making this decision.
So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?
In thought experiment land… maybe. I’d have to think carefully about what value I place on myself as a special case. In practice, I don’t believe that you can fully compensate for all of the unknown accomplishments I might have made to society.
Pavitra is a he? I must have guessed wrong.
It’s complicated.
What are your terminal moral values?
Also, why is hurting yourself different from hurting other people? And why is not hurting others a moral value, but not avoidance of torture?
Hurting others is ethically problematic, not morally. For example, I would probably be okay with hurting someone else at their own request. Avoidance of torture is a question of an entirely different type: what I value, not how I think it’s appropriate to go about getting it.
I don’t have a formalization of my terminal values, but roughly:
I have noticed that sometimes I feel more conscious than other times—not just awake/dreaming/sleeping, but between different “awake” times. I infer that consciousness/sentience/sapience/personhood/whatever you want to call it, you know, that thing we care about is not a binary predicate, but a scalar. I want to maximize the degree of personhood that exists in the universe.
What’s the difference between ethics and morals?
By morals, I mean terminal values. By ethics, I mean advanced forms of strategy involving things like Hofstadter’s superrationality. I’m not sure what the standard LW jargon is for this sort of thing, but I think I remember reading something about deciding as though you were deciding on behalf of everyone who shares your decision theory.
I want to maximize the degree of personhood that exists in the universe.
So, if you create a person, and torture them for their entire life, that’s worth it?
If the most conscious person possible would be unhappy, I’d rather create them than not. The consensus among science fiction writers seems to be with me on this: a drug that makes you happy at the expense of your creative genius is generally treated as a bad thing.
By ethics, I mean advanced forms of strategy involving things like Hofstadter’s superrationality. I’m not sure what the standard LW jargon is for this sort of thing
Sounds like decision theory.
That link was what I needed. By ethics I mean, roughly, the difference between causal decision theory and the right answer.
Do you mean to equate here the degree to which something is a person, the degree to which a person is conscious, and the degree to which a person is a creative genius?
That’s what it reads like, but perhaps I’m reading too much into your comment.
That seems unjustified to me.
I don’t mean to equate them. They’re each a rough approximation to the thing I actually care about.
It’s not like I’m handing other people over into slavery and torture. I don’t have to worry that I’m subconsciously ignoring other people’s suffering for my own benefit. I don’t see the question as a moral one at all, only one of whether it would be a good idea.
ETA: Also, because at least one copy remains free, I’m not depriving anyone of the chance to live their life.
It’s not like I’m handing other people over into slavery and torture. I don’t have to worry that I’m subconsciously ignoring other people’s suffering for my own benefit. I don’t see the question as a moral one at all, only one of whether it would be a good idea.
I mostly understand this statement.
ETA: Also, because at least one copy remains free, I’m not depriving anyone of the chance to live their life.
I think this is irrelevant. Each instance of you is choosing to sacrifice their life and happiness, and they are not getting anything in return.
The only way I can see this actually being a good idea is if the utility you gain at least outweighs the utility lost by one copy. The other scenarios you describe sound like good ideas on paper where you don’t have to fully process the consequences, but I do not believe for a second that the other-instances-of-you would continue to think this was a good idea when it was their lives on the line.
But it’s the same me. They wouldn’t have done anything with their freedom that I won’t with mine.
I’m not denying the choice is made willingly. But I do not think there is a difference between willingly enduring torture for a copy of yourself and willingly enduring torture for someone else you happen to like.
Legally, if these circumstances ever became real, I think people should be allowed to create the copies, but they should not be allowed to make decisions for the copies. You are only allowed to hit the “torture” button if you believe that it is you, personally, who will be undergoing that torture.
What if I set up the copy-decision-depriving mechanism before I fork myself?
Legally, I think people should be allowed to torture themselves. They should not be allowed to torture other people. Legally, I think each copy counts as a person. If you hit the torture button before the copies are made (and then prevent them from changing their minds), you are not just torturing yourself, you are torturing other people.
I do not want to live in a society where sentient creatures are denied the right to escape torture. While it is possible that an individual has worked out a perfect decision theory in which each copy would truly prefer to be tortured, I think many of the people attempting this scenario would simply be short-sighted, and as soon as it became their life on the line, their timeless decision would not seem so wise.
If you really are confident of your willingness to subject yourself to torture for a copy’s benefit, fine. But for the sake of the hypothetical millions of copies of people who HAVEN’T actually thought this through, it should be illegal to create slave copies.
Hm.
If I willingly submit to be tortured starting tomorrow (say, in exchange for someone I love being released unharmed), don’t the same problems arise? After all, once the torture starts I am fairly likely to change my mind. What gives present-me the right to torture an unwilling future-me?
It seems this line of reasoning leads to the conclusion that it’s unethical for me to make any decision that I’ll regret later, no matter what the reason for my change of heart.
I might have been misinterpreting Pavitra’s original statement, and may have been unclear about my position.
People should be allowed to torture themselves without the ability to change their mind, if they need to. (However, this is something that in real life would happen rarely, and for extreme reasons. I think that if people start doing it all the time, we should stop and question whether something is wrong with the system.)
The key is that you must firmly understand that you, personally, will be getting tortured. I’m okay with making the decision to get tortured, and then fork yourself. I guess. (Although for small utility, I think it’s a bad decision). What I’m not okay with is making the decision to fork yourself, and then have one of your copies get tortured while one of you doesn’t. Whoever decides to BEGIN the torture must be aware that they, personally, will never receive any benefit from it.
Um.
I think I agree with you, but I’m not sure, and I’m not sure if the problem is language or that I’m just really confused.
For the sake of clarity, let’s consider a specific hypothetical: Sam is given a button which, if pressed, Sam believes will do two things. First, it will cause there to be two identical-at-the-moment-of-pressing copies of Sam. Second, it will cause one of the copies (call it Sam-X) to suffer a penalty P, and the other copy (call it Sam-Y) to receive a benefit B.
If I’ve understood you correctly, you would say that for Sam to press that button is an ethical choice, though it might not be a wise choice, depending on the value of (B-P).
Yes?
No. I’m not sure whether I think “ethical” is an appropriate word here. (Honestly, I think ethical systems that are designed for real pre-singularity life are almost always going to break down in extreme situations). But basically, I consider the scenario you just described identical to:
Two people are both given a button. If they both press the button, then, one of them will get penalty P, the other will get benefit B.
People are entitled to make decisions like this. But governments (collective groups of people) are also entitled to restrict decisions if those decisions prove to be common and damaging to society. Given how irrational people are about probability (e.g. the lottery), I think there may be many values of P and B for which society should ban the scenario. I wouldn’t jump to conclusions about which values of P and B should be banned; I’d have to see how many people actually chose those options and what effect it had on society. (Which is a scientific question, not a logical one.)
Pavitra’s original statement seemed more along the lines of: a thousand people agree to press a button that will torture all but one of them for a long time. The remaining person gets $100. This is an extremely bad decision on everyone’s part. Whether or not it’s ethical for the participants, I think that a society that found people making these decisions all the time has a problem and should fix it somehow.
I’m not sure I consider the two scenarios identical, but I’m still struggling to construct a model of identity and the value of life that works under cloning. And ultimately I think the differences are relevant.
But I agree that your version raises some of the same questions, so let’s start there.
I agree that there are versions of P and B for which it is in everyone’s best interests that the button not be pushed. Again, just to be concrete, I’ll propose a specific such example: B = $1, P = $1,000,000. (I’m using $ here to denote a certain amount of constant-value stuff, not just burning a million-dollar bill.)
To press that button is simply foolish, for the same reason that spending $500,000 to purchase $0.50 is foolish. And I agree that Pavitra’s proposal is a foolish choice in the same way.
And I agree that when a sufficiently costly mistake is sufficiently compelling, we do well to collectively eliminate the choice—take away the button—from one another, and that determining when that threshold has been met is basically an empirical question. (I mostly think that words like “government” and “society” confuse the issue here, but I don’t disagree with your use of them.)
I’m not sure I agree that these aren’t ethical questions, but I’m not sure that matters.
So far, so good.
Where the questions of the nature of identity creep back in for me is precisely in the equation of a thousand copies of me, created on demand, with a thousand existing people. It just stops being quite so clear whose interests are being protected, and from whom, and what kinds of social entities are entitled to make those kinds of laws.
I guess the intuition I am struggling with is that we derive our collective right to restrict one another’s individual freedom of choice in part from the collective consequences of our individual choices… that if there truly are no externalities to your behavior, then I have no right to interfere with that behavior. Call that the Principle of Independence.
If you exchange N hours of unpleasantness for you for an hour of pleasantness for you, and it all happens inside a black box with negligible externalities… the POI says I have negligible say in that matter. And that seems to scale more or less indefinitely, although at larger scales I start to care about externalities (like opportunity costs) that seemed negligible at smaller scales.
And if what you do inside that black box is create a thousand clones of yourself and set them to mining toothpicks in the Pointlessly Unpleasant Toothpick Mines for twenty years, and then sell the resulting box of toothpicks for a dollar… well, um… I mean, you’re insane, but… I guess I’m saying I don’t have standing there either.
I’m not happy with that conclusion, but I’m not unhappy enough to want to throw out the POI, either.
Saying “a thousand people” invokes the wrong intuitions. Your brain imagines a thousand distinct people, and torturing a unique person would destroy their potential unique contribution to society.
A better analogy might be that if you push the button, Omega will give you $100 now, and then arrange for you to spend a thousand years in hell after you die, instead of being annihilated instantly.
This is where we differ: a separate instance of a person is a separate person. I see no reason to attach special significance to unique experiences. Suppose you and an identical version of you happen to evolve separately on different worlds, make identical choices to travel on a spaceship to the same planet, and meet each other. Up until now your experiences have been identical. Are you okay with committing suicide as soon as you realize this identical person exists? Are you okay with Omega coming in one day and deciding to kill you and take all your belongings because he knows that you’re going to spend the rest of your life having an identical experience to someone else in the multiverse?
Maybe your answers to all these questions are yes, but mine aren’t. Society is filled with people who are mostly redundant. Do we really “need” Dude #3432 who grows up to be a hamburger flipper whose job eventually gets replaced by a robot? No. But morality isn’t (shouldn’t be) designed to protect some nebulous “society”. It’s designed to protect individual people.
This is especially true in the sort of post-singularity world where this sort of hypothetical even matters. If you have the technology to produce 1000 copies of a person, you probably don’t “need” people to contribute to society in the first place. People’s only inherent value is in their ability to enjoy life.
If I take your hypothetical in the sense I think you intend, then yes. In practice, I’d rather not, for the same reason I’d want to create copies of myself if only one existed to begin with.
I agree that the value of society is the value it provides to the people in it. However, I don’t think we should try to maximize the minimum happiness of everyone in the world: that way lies madness. I’d rather create one additional top-quality work of great art or culture than save a thousand additional orphans from starvation.
(If the thousand orphans could be brought up to first-world standards of living, rather than only being given mere existence, then they might produce more than one top-quality work of great art or culture on average between them. But the real world isn’t always that morally convenient.)
And in any case, even if there’s only two “unique” experiences, you’re still flipping a coin and either getting 1000 years of torture (say, −10,000,000 utility) or $100 (say, 10 utility), and the expected utility for hitting the button is still overwhelmingly negative.
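The arithmetic here is easy to verify; a minimal check using the comment’s own stand-in utilities:

```python
# 999/1000 chance of 1000 years of torture (-10_000_000 utility),
# 1/1000 chance of $100 (+10 utility) -- stand-in numbers from the comment above.
expected = (999 / 1000) * (-10_000_000) + (1 / 1000) * 10
print(expected)  # ≈ -9_990_000: overwhelmingly negative
```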
Yeah, I’m using “(B-P)” very loosely. And of course the question of what units B and P are in and how one even does such a comparison is very open. I suppose the traditional way out of this is to say that B and P are measured in utilons… which adds nothing to the discussion, really, but sounds comfortingly concrete. (I am rapidly convincing myself that I should never use the word “utilon” seriously for precisely this reason; I run the risk of fooling myself into thinking I know what I’m talking about.)
We’ve been talking as though there was one “real” me and several xeroxes, but you seem to be acting as if that were the case on a moral level, which seems wrong. Surely, if I fork myself, each branch is just as genuinely me as any other? If I build and lock a cage, arrange to fork myself with one copy inside the cage and one outside, press the fork button, and find myself inside the cage, then I’m the one who locked myself in.
Surely, if I fork myself, each branch is just as genuinely me as any other?
Fundamental disagreement here, which I don’t expect to work through. Once you fork yourself, I would treat each copy as a unique individual. (It’s irrelevant whether one of you is “real” or not. They’re identical people, but they’re still separate people).
If those people all actually make the same decisions, great. I am not okay with exposing hundreds of copies to years of torture based on a decision you made in the comfort of your computer room.
I don’t ask you to accept that the various post-fork copies are the same person as each other, only that each is (perhaps non-transitively) the same person as the single pre-fork copy.
Suppose I don’t fork myself, but lock myself in a cage. Does the absence of an uncaged copy matter?
This is one of those statements where I set out to respond and just stare at it for a while, because it is coming from some other moral or cognitive universe so far away that I hardly know where to begin.
Copies are people, right? They’re just like you. In this case, they’re exactly like you, until your experiences start to diverge. And you know that people don’t like slavery, and they especially don’t like torture, right? And it is considered just about the height of evil to hand people over to slavery and torture. (Example, as if one were needed; In Egypt right now, they’re calling for the death of the former head of the state security apparatus, which regularly engaged in torture.)
Consider, then, that these copies of you, who you would willingly see enslaved and tortured for your personal benefit, would soon be desperately eager to kill you, the original, if that would make it stop, and they would even have a motivation beyond their own suffering, namely the moral imperative of stopping you from doing this to even further copies.
Has none of this occurred to you? Or does it truly not matter in your private moral calculus?
The “it’s okay to kill copies” thing has never made any sense to me either. The explanation that often accompanies it is “well they won’t remember being tortured”, but that’s the exact same scenario for ALL of us after we die, so why are copies an exception to this?
Would you willingly submit yourself to torture for the benefit of some abstract, “extra” version of you? Really? Make a deal with a friend to pay you $100 for every hour of waterboarding you subject yourself to. See how long this seems like a good idea.
To my mind the issue with copies is that it’s copies who remain exactly the same that “don’t matter”, whereas once you’ve got a bunch of copies being tortured, they’re no longer identical copies and so are different people. Maybe I’m just having trouble with Sleeping Beauty-like problems, but that’s only a subjective issue for decision making (plus I’d rather spend time learning interesting things that won’t require me to bite the bullet of admitting anyone with a suitable sick and twisted mind could Pascal Mug me). Morally, I much prefer 5,000 iterations each of two happy, fulfilled minds than 10,000 of the same one.
Where “Copies” is used isomorphically with “Future versions of you in either MWI or similar realist interpretation of probability theory”, then I would certainly subject some of them to torture only for a very large potential gain and small risk of torture. “I” don’t like torture, and I’d need a pretty damn big reward for that 1/N longshot to justify a (N-1)/N chance or brutal torture or slavery. This is of course assuming I’m at status quo, if I were a slave or Bagram/Laogai detainee I would try to stay rational and avoid fear making me overly risk averse from escape attempts. I haven’t tried to work out my exact beliefs on it, but as said above if I have two options, one saving a life with certainty and the other having a 50% chance of saving two, I’d prefer saving two (assuming they’re isolated ie two guys on a lifeboat).
tl; dr, it’s a terrible idea in that if you only have the moral authority to condemn copies
Is your last sentence missing something? It feels incomplete.
Ah yes, I meant to type that you only have the moral authority to condemn copies to torture or slavery if they’re actually you, and it’s pretty stupid to risk almost certain torture for a small chance of a moderate benefit
People break under torture, so I’d take precautions to ensure that the torture-copy is not allowed to make decisions about whether it should continue. Of course I’m going to regret it. That doesn’t change the fact that it’s a good idea.
Why is this a good idea in any way other than the general position that “torturing other people for your own profit is a good idea so long as you don’t care about people?” Most of human history is based around the many being exploited for the benefit of the few. Why is this different?
I suppose people should have the right to willingly submit to torture for some small benefit to another person, which is what you’re saying you’d be willing to do. But the fact that a copy gets erased doesn’t make the experience any less real, and the fact that an identical copy gets to live doesn’t in any way help the copies that were being tortured.
It’s different because (1) I’m not hurting other people, only myself, and (2) I’m not depriving the world of my victim’s potential contributions as a free person.
I don’t actually care about the avoidance of torture as a terminal moral value.
But after the fork, your copy will quickly become another person, won’t he? After all, he’s being tortured and you’re not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?
In thought experiment land… maybe. I’d have to think carefully about what value I place on myself as a special case. In practice, I don’t believe that you can fully compensate for all of the unknown accomplishments I might have made to society.
Pavitra is a he? I must have guessed wrong.
It’s complicated.
What are your terminal moral values?
Also, why is hurting yourself different from hurting other people? And why is not hurting others a moral value, but not avoidance of torture?
Hurting others is ethically problematic, not morally. For example, I would probably be okay with hurting someone else at their own request. Avoidance of torture is a question of an entirely different type: what I value, not how I think it’s appropriate to go about getting it.
I don’t have a formalization of my terminal values, but roughly:
I have noticed that sometimes I feel more conscious than other times—not just awake/dreaming/sleeping, but between different “awake” times. I infer that consciousness/sentience/sapience/personhood/whatever you want to call it, you know, that thing we care about is not a binary predicate, but a scalar. I want to maximize the degree of personhood that exists in the universe.
What’s the difference between ethics and morals?
So, if you create a person, and torture them for their entire life, that’s worth it?
By morals, I mean terminal values. By ethics, I mean advanced forms of strategy involving things like Hofstadter’s superrationality. I’m not sure what the standard LW jargon is for this sort of thing, but I think I remember reading something about deciding as though you were deciding on behalf of everyone who shares your decision theory.
If the most conscious person possible would be unhappy, I’d rather create them than not. The consensus among science fiction writers seems to be with me on this: a drug that makes you happy at the expense of your creative genius is generally treated as a bad thing.
Sounds like decision theory.
That link was what I needed. By ethics I mean, roughly, the difference between causal decision theory and the right answer.
Do you mean to equate here the degree to which something is a person, the degree to which a person is conscious, and the degree to which a person is a creative genius?
That’s what it reads like, but perhaps I’m reading too much into your comment.
That seems unjustified to me.
I don’t mean to equate them. They’re each a rough approximation to the thing I actually care about.
It’s not like I’m handing other people over into slavery and torture. I don’t have to worry that I’m subconsciously ignoring other people’s suffering for my own benefit. I don’t see the question as a moral one at all, only one of whether it would be a good idea.
ETA: Also, because at least one copy remains free, I’m not depriving anyone of the chance to live their life.
I mostly understand this statement.
I think this is irrelevant. Each instance of you is choosing to sacrifice their life and happiness, and they are not getting anything in return.
The only way I can see this actually being a good idea is if the utility you gain at least outweighs the utility lost by one copy. The other scenarios you describe sound like good ideas on paper where you don’t have to fully process the consequences, but I do not believe for a second that the other-instances-of-you would continue to think this was a good idea when it was their lives on the line.
But it’s the same me. They wouldn’t have done anything with their freedom that I won’t with mine.
I’m not denying the choice is made willingly. But I do not think there is a difference between willingly enduring torture for a copy of yourself and willingly enduring torture for someone else you happen to like.
Legally, if these circumstances ever became real, I think people should be allowed to create the copies, but they should not be allowed to make decisions for the copies. You are only allowed to hit the “torture” button if you believe that it is you, personally, who will be undergoing that torture.
What if I set up the copy-decision-depriving mechanism before I fork myself?
Legally, I think people should allowed to torture themselves. They should not be allowed to torture other people. Legally, I think each copy counts as a person. If you hit the torture button before the copies are made (and then prevent them from changing their mind) you are not just torturing yourself, you are torturing other people.
I do not want to live in a society where sentient creatures are denied the right to escape torture. While it is possible that an individual has worked out a perfect decision theory in which each copy would truly prefer to be tortured, I think many of the people attempting this scenario would simply be short sighted, and as soon as it became their life on the line it their timeless decision would not seem so wise.
If you really are confidant of your willingness to subject yourself to torture for a copy’s benefit, fine. But for the sake of the hypothetical millions of copies of people who HAVEN’T actually thought this through, it should be illegal to create slave copies.
Hm.
If I willingly submit to be tortured starting tomorrow (say, in exchange for someone I love being released unharmed), don’t the same problems arise? After all, once the torture starts I am fairly likely to change my mind. What gives present-me the right to torture an unwilling future-me?
It seems this line of reasoning leads to the conclusion that it’s unethical for me to make any decision that I’ll regret later, no matter what the reason for my change of heart.
I might have been misinterpreting Pavrita’s original statement, and may have been unclear about my position.
People should be allowed to torture themselves without ability to change their mind, if they need to. (However, this is something that in real life would happen rarely for extreme reasons. I think that if people start doing that all the time, we should stop and question whether something is wrong with the system).
The key is that you must firmly understand that you, personally, will be getting tortured. I’m okay with making the decision to get tortured, and then fork yourself. I guess. (Although for small utility, I think it’s a bad decision). What I’m not okay with is making the decision to fork yourself, and then have one of your copies get tortured while one of you doesn’t. Whoever decides to BEGIN the torture must be aware that they, personally, will never receive any benefit from it.
Um.
I think I agree with you, but I’m not sure, and I’m not sure if the problem is language or that I’m just really confused.
For the sake of clarity, let’s consider a specific hypothetical: Sam is given a button which, if pressed, Sam believes will do two things. First, it will cause there to be two identical-at-the-moment-of-pressing copies of Sam. Second, it will cause one of the copies (call it Sam-X) to suffer a penalty P, and the other copy (call it Sam-Y) to receive a benefit B.
If I’ve understood you correctly, you would say that for Sam to press that button is an ethical choice, though it might not be a wise choice, depending on the value of (B-P).
Yes?
No. I’m not sure whether I think “ethical” is an appropriate word here. (Honestly, I think ethical systems that are designed for real pre-singularity life are almost always going to break down in extreme situations). But basically, I consider the scenario you just described identical to:
Two people are both given a button. If they both press the button, then one of them will get penalty P and the other will get benefit B.
People are entitled to make decisions like this. But governments (collective groups of people) are also entitled to restrict decisions if those decisions prove to be common and damaging to society. Given how irrational people are about probability (e.g., the lottery), I think there may be many values of P and B for which society should ban the scenario. I wouldn’t jump to conclusions about which values of P and B should be banned; I’d have to see how many people actually chose those options and what effect it had on society. (Which is a scientific question, not a logical one.)
Pavitra’s original statement seemed more along the lines of: a thousand people agree to press a button that will torture all but one of them for a long time. The remaining person gets $100. This is an extremely bad decision on everyone’s part. Whether or not it’s ethical for the participants, I think that a society in which people made these decisions all the time has a problem and should fix it somehow.
I’m not sure I consider the two scenarios identical, but I’m still struggling to construct a model of identity and the value of life that works under cloning. And ultimately I think the differences are relevant.
But I agree that your version raises some of the same questions, so let’s start there.
I agree that there are versions of P and B for which it is in everyone’s best interests that the button not be pushed. Again, just to be concrete, I’ll propose a specific such example: B = $1, P= -$1,000,000. (I’m using $ here to denote a certain amount of constant-value stuff, not just burning a million-dollar bill.)
To press that button is simply foolish, for the same reason that spending $500,000 to purchase $0.50 is foolish. And I agree that Pavitra’s proposal is a foolish choice in the same way.
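To see why those two descriptions come out the same, a quick expected-value sketch helps, treating the two post-fork copies as a fair 50/50 lottery over who gets B and who gets P (the dollar figures are just the illustrative ones from the example above):

```python
# Expected value of pressing the button, treating the two post-fork copies
# as a fair 50/50 lottery over who receives B and who receives P.
# These numbers are the illustrative ones from the example above.
B = 1           # benefit, in dollars of constant-value stuff
P = -1_000_000  # penalty, likewise

expected_value = 0.5 * B + 0.5 * P
print(expected_value)  # -499999.5
```

Which is exactly the “spending $500,000 to purchase $0.50” trade: per press, you pay an expected $499,999.50 for an expected 50 cents.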
And I agree that when a sufficiently costly mistake is sufficiently compelling, we do well to collectively eliminate the choice—take away the button—from one another, and that determining when that threshold has been met is basically an empirical question. (I mostly think that words like “government” and “society” confuse the issue here, but I don’t disagree with your use of them.)
I’m not sure I agree that these aren’t ethical questions, but I’m not sure that matters.
So far, so good.
Where the questions of the nature of identity creep back in for me is precisely in the equation of a thousand copies of me, created on demand, with a thousand existing people. It just stops being quite so clear whose interests are being protected, and from whom, and what kinds of social entities are entitled to make those kinds of laws.
I guess the intuition I am struggling with is that we derive our collective right to restrict one another’s individual freedom of choice in part from the collective consequences of our individual choices… that if there truly are no externalities to your behavior, then I have no right to interfere with that behavior. Call that the Principle of Independence.
If you exchange N hours of unpleasantness for you for an hour of pleasantness for you, and it all happens inside a black box with negligible externalities… the POI says I have negligible say in that matter. And that seems to scale more or less indefinitely, although at larger scales I start to care about externalities (like opportunity costs) that seemed negligible at smaller scales.
And if what you do inside that black box is create a thousand clones of yourself and set them to mining toothpicks in the Pointlessly Unpleasant Toothpick Mines for twenty years, and then sell the resulting box of toothpicks for a dollar… well, um… I mean, you’re insane, but… I guess I’m saying I don’t have standing there either.
I’m not happy with that conclusion, but I’m not unhappy enough to want to throw out the POI, either.
So that’s kind of where I am.
Saying “a thousand people” invokes the wrong intuitions. Your brain imagines a thousand distinct people, and torturing a unique person would destroy their potential unique contribution to society.
A better analogy might be that if you push the button, Omega will give you $100 now, and then arrange for you to spend a thousand years in hell after you die instead of being annihilated instantly.
This is where we differ: a separate instance of a person is a separate person. I see no reason to attach special significance to unique experiences. Suppose you and an identical version of you happen to evolve separately on different worlds, make identical choices to travel on a spaceship to the same planet, and meet each other. Up until now your experiences have been identical. Are you okay with committing suicide as soon as you realize this identical person exists? Are you okay with Omega coming in one day and deciding to kill you and take all your belongings because he knows that you’re going to spend the rest of your life having an identical experience to someone else in the multiverse?
Maybe your answers to all these questions are yes, but mine aren’t. Society is filled with people who are mostly redundant. Do we really “need” Dude #3432 who grows up to be a hamburger flipper whose job eventually gets replaced by a robot? No. But morality isn’t (shouldn’t be) designed to protect some nebulous “society”. It’s designed to protect individual people.
This is especially true in the sort of post-singularity world where this sort of hypothetical even matters. If you have the technology to produce 1000 copies of a person, you probably don’t “need” people to contribute to society in the first place. People’s only inherent value is in their ability to enjoy life.
If I take your hypothetical in the sense I think you intend, then yes. In practice, I’d rather not, for the same reason I’d want to create copies of myself if only one existed to begin with.
I agree that the value of society is the value it provides to the people in it. However, I don’t think we should try to maximize the minimum happiness of everyone in the world: that way lies madness. I’d rather create one additional top-quality work of great art or culture than save a thousand additional orphans from starvation.
(If the thousand orphans could be brought up to first-world standards of living, rather than only being given mere existence, then they might produce more than one top-quality work of great art or culture on average between them. But the real world isn’t always that morally convenient.)
And in any case, even if there’s only two “unique” experiences, you’re still flipping a coin and either getting 1000 years of torture (say, −10,000,000 utility) or $100 (say, 10 utility), and the expected utility for hitting the button is still overwhelmingly negative.
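The arithmetic behind that claim is easy to check; here is a sketch using the (deliberately made-up) utility figures above, for both the coin-flip reading and the original thousand-copy version:

```python
# Expected utility of pressing the button under the two readings discussed:
# (a) two "unique" experiences -> a fair coin flip between the outcomes;
# (b) the original thousand-copy version -> 999 tortured, 1 paid.
# Utility figures are the illustrative ones from the comment above.
U_TORTURE = -10_000_000  # 1000 years of torture
U_PAYOUT = 10            # the $100 prize

ev_coin_flip = 0.5 * U_TORTURE + 0.5 * U_PAYOUT
ev_thousand = (999 * U_TORTURE + 1 * U_PAYOUT) / 1000

print(ev_coin_flip)  # -4999995.0
print(ev_thousand)   # roughly -9.99 million
```

Either way, the expected utility is overwhelmingly negative; counting experiences instead of copies changes the magnitude, not the sign.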
The relevant formula might be something other than (B-P), depending on Sam’s utility function, but otherwise that’s essentially what I believe.
Yeah, I’m using “(B-P)” very loosely. And of course the question of what units B and P are in and how one even does such a comparison is very open. I suppose the traditional way out of this is to say that B and P are measured in utilons… which adds nothing to the discussion, really, but sounds comfortingly concrete. (I am rapidly convincing myself that I should never use the word “utilon” seriously for precisely this reason; I run the risk of fooling myself into thinking I know what I’m talking about.)
Non-rigorous concepts should definitely be given appropriate-sounding names; perhaps “magic cookies” would be better?
I like that. Yes indeed, magic cookies it is.
We’ve been talking as though there were one “real” me and several xeroxes, but you seem to be treating that as true on a moral level, which seems wrong. Surely, if I fork myself, each branch is just as genuinely me as any other? If I build and lock a cage, arrange to fork myself with one copy inside the cage and one outside, press the fork button, and find myself inside the cage, then I’m the one who locked myself in.
Fundamental disagreement here, which I don’t expect to work through. Once you fork yourself, I would treat each copy as a unique individual. (It’s irrelevant whether one of you is “real” or not. They’re identical people, but they’re still separate people.)
If those people all actually make the same decisions, great. I am not okay with exposing hundreds of copies to years of torture based on a decision you made in the comfort of your computer room.
I don’t ask you to accept that the various post-fork copies are the same person as each other, only that each is (perhaps non-transitively) the same person as the single pre-fork copy.
Suppose I don’t fork myself, but lock myself in a cage. Does the absence of an uncaged copy matter?