Legally, I think people should be allowed to torture themselves. They should not be allowed to torture other people. Legally, I think each copy counts as a person. If you hit the torture button before the copies are made (and then prevent them from changing their minds), you are not just torturing yourself, you are torturing other people.
I do not want to live in a society where sentient creatures are denied the right to escape torture. While it is possible that an individual has worked out a perfect decision theory in which each copy would truly prefer to be tortured, I think many of the people attempting this scenario would simply be short-sighted, and as soon as it became their life on the line, their timeless decision would not seem so wise.
If you really are confident of your willingness to subject yourself to torture for a copy’s benefit, fine. But for the sake of the hypothetical millions of copies of people who HAVEN’T actually thought this through, it should be illegal to create slave copies.
Hm.
If I willingly submit to be tortured starting tomorrow (say, in exchange for someone I love being released unharmed), don’t the same problems arise? After all, once the torture starts I am fairly likely to change my mind. What gives present-me the right to torture an unwilling future-me?
It seems this line of reasoning leads to the conclusion that it’s unethical for me to make any decision that I’ll regret later, no matter what the reason for my change of heart.
I might have been misinterpreting Pavitra’s original statement, and may have been unclear about my position.
People should be allowed to torture themselves without the ability to change their mind, if they need to. (However, this is something that in real life would happen rarely, and only for extreme reasons. I think that if people start doing that all the time, we should stop and question whether something is wrong with the system.)
The key is that you must firmly understand that you, personally, will be getting tortured. I’m okay with making the decision to get tortured and then forking yourself. I guess. (Although for small utility, I think it’s a bad decision.) What I’m not okay with is making the decision to fork yourself, and then having one of your copies get tortured while one of you doesn’t. Whoever decides to BEGIN the torture must be aware that they, personally, will never receive any benefit from it.
Um.
I think I agree with you, but I’m not sure, and I’m not sure if the problem is language or that I’m just really confused.
For the sake of clarity, let’s consider a specific hypothetical: Sam is given a button which, if pressed, Sam believes will do two things. First, it will cause there to be two identical-at-the-moment-of-pressing copies of Sam. Second, it will cause one of the copies (call it Sam-X) to suffer a penalty P, and the other copy (call it Sam-Y) to receive a benefit B.
If I’ve understood you correctly, you would say that for Sam to press that button is an ethical choice, though it might not be a wise choice, depending on the value of (B-P).
Yes?
No. I’m not sure whether I think “ethical” is an appropriate word here. (Honestly, I think ethical systems that are designed for real pre-singularity life are almost always going to break down in extreme situations). But basically, I consider the scenario you just described identical to:
Two people are both given a button. If they both press the button, then one of them will get penalty P and the other will get benefit B.
People are entitled to make decisions like this. But governments (collective groups of people) are also entitled to restrict decisions if those decisions prove to be common and damaging to society. Given how irrational people are about probability (e.g., the lottery), I think there may be many values of P and B for which society should ban the scenario. I wouldn’t jump to conclusions about which values of P and B should be banned; I’d have to see how many people actually chose those options and what effect it had on society. (Which is a scientific question, not a logical one.)
Pavitra’s original statement seemed more along the lines of: a thousand people agree to press a button that will torture all but one of them for a long time. The remaining person gets $100. This is an extremely bad decision on everyone’s part. Whether or not it’s ethical for the participants, I think that a society in which people made these decisions all the time would have a problem and should fix it somehow.
I’m not sure I consider the two scenarios identical, but I’m still struggling to construct a model of identity and the value of life that works under cloning. And ultimately I think the differences are relevant.
But I agree that your version raises some of the same questions, so let’s start there.
I agree that there are versions of P and B for which it is in everyone’s best interests that the button not be pushed. Again, just to be concrete, I’ll propose a specific such example: B = $1, P = -$1,000,000. (I’m using $ here to denote a certain amount of constant-value stuff, not just burning a million-dollar bill.)
To press that button is simply foolish, for the same reason that spending $500,000 to purchase $0.50 is foolish. And I agree that Pavitra’s proposal is a foolish choice in the same way.
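(A quick way to see where those numbers come from, assuming, as seems intended, that each copy is weighted equally: the expected loss and gain from pressing the button are

$$0.5 \times \$1{,}000{,}000 = \$500{,}000 \quad \text{versus} \quad 0.5 \times \$1 = \$0.50.)$$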
And I agree that when a sufficiently costly mistake is sufficiently compelling, we do well to collectively eliminate the choice—take away the button—from one another, and that determining when that threshold has been met is basically an empirical question. (I mostly think that words like “government” and “society” confuse the issue here, but I don’t disagree with your use of them.)
I’m not sure I agree that these aren’t ethical questions, but I’m not sure that matters.
So far, so good.
Where the questions of the nature of identity creep back in for me is precisely in the equation of a thousand copies of me, created on demand, with a thousand existing people. It just stops being quite so clear whose interests are being protected, and from whom, and what kinds of social entities are entitled to make those kinds of laws.
I guess the intuition I am struggling with is that we derive our collective right to restrict one another’s individual freedom of choice in part from the collective consequences of our individual choices… that if there truly are no externalities to your behavior, then I have no right to interfere with that behavior. Call that the Principle of Independence.
If you exchange N hours of your own unpleasantness for an hour of your own pleasantness, and it all happens inside a black box with negligible externalities… the POI says I have negligible say in the matter. And that seems to scale more or less indefinitely, although at larger scales I start to care about externalities (like opportunity costs) that seemed negligible at smaller scales.
And if what you do inside that black box is create a thousand clones of yourself and set them to mining toothpicks in the Pointlessly Unpleasant Toothpick Mines for twenty years, and then sell the resulting box of toothpicks for a dollar… well, um… I mean, you’re insane, but… I guess I’m saying I don’t have standing there either.
I’m not happy with that conclusion, but I’m not unhappy enough to want to throw out the POI, either.
So that’s kind of where I am.
Saying “a thousand people” invokes the wrong intuitions. Your brain imagines a thousand distinct people, and torturing a unique person would destroy their potential unique contribution to society.
A better analogy might be that if you push the button, Omega will give you $100 now, and then arrange for you to spend a thousand years in hell after you die instead of being annihilated instantly.
This is where we differ: a separate instance of a person is a separate person. I see no reason to attach special significance to unique experiences. Suppose you and an identical version of you happen to evolve separately on different worlds, make identical choices to travel on a spaceship to the same planet, and meet each other. Up until now your experiences have been identical. Are you okay with committing suicide as soon as you realize this identical person exists? Are you okay with Omega coming in one day and deciding to kill you and take all your belongings because he knows that you’re going to spend the rest of your life having an identical experience to someone else in the multiverse?
Maybe your answers to all these questions are yes, but mine aren’t. Society is filled with people who are mostly redundant. Do we really “need” Dude #3432 who grows up to be a hamburger flipper whose job eventually gets replaced by a robot? No. But morality isn’t (shouldn’t be) designed to protect some nebulous “society”. It’s designed to protect individual people.
This is especially true in the sort of post-singularity world where this sort of hypothetical even matters. If you have the technology to produce 1000 copies of a person, you probably don’t “need” people to contribute to society in the first place. People’s only inherent value is in their ability to enjoy life.
If I take your hypothetical in the sense I think you intend, then yes. In practice, I’d rather not, for the same reason I’d want to create copies of myself if only one existed to begin with.
I agree that the value of society is the value it provides to the people in it. However, I don’t think we should try to maximize the minimum happiness of everyone in the world: that way lies madness. I’d rather create one additional top-quality work of great art or culture than save a thousand additional orphans from starvation.
(If the thousand orphans could be brought up to first-world standards of living, rather than only being given mere existence, then they might produce more than one top-quality work of great art or culture on average between them. But the real world isn’t always that morally convenient.)
And in any case, even if there are only two “unique” experiences, you’re still flipping a coin and either getting 1000 years of torture (say, −10,000,000 utility) or $100 (say, 10 utility), and the expected utility for hitting the button is still overwhelmingly negative.
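To spell out that arithmetic under the coin-flip framing and those illustrative utilities:

$$0.5 \times (-10{,}000{,}000) + 0.5 \times 10 = -4{,}999{,}995.$$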
The relevant formula might be something other than (B-P), depending on Sam’s utility function, but otherwise that’s essentially what I believe.
Yeah, I’m using “(B-P)” very loosely. And of course the question of what units B and P are in and how one even does such a comparison is very open. I suppose the traditional way out of this is to say that B and P are measured in utilons… which adds nothing to the discussion, really, but sounds comfortingly concrete. (I am rapidly convincing myself that I should never use the word “utilon” seriously for precisely this reason; I run the risk of fooling myself into thinking I know what I’m talking about.)
Non-rigorous concepts should definitely be given appropriate-sounding names; perhaps “magic cookies” would be better?
I like that. Yes indeed, magic cookies it is.
We’ve been talking as though there were one “real” me and several xeroxes, but you seem to be acting as if that were the case on a moral level, which seems wrong. Surely, if I fork myself, each branch is just as genuinely me as any other? If I build and lock a cage, arrange to fork myself with one copy inside the cage and one outside, press the fork button, and find myself inside the cage, then I’m the one who locked myself in.
Fundamental disagreement here, which I don’t expect to work through. Once you fork yourself, I would treat each copy as a unique individual. (It’s irrelevant whether one of you is “real” or not. They’re identical people, but they’re still separate people).
If those people all actually make the same decisions, great. I am not okay with exposing hundreds of copies to years of torture based on a decision you made in the comfort of your computer room.
I don’t ask you to accept that the various post-fork copies are the same person as each other, only that each is (perhaps non-transitively) the same person as the single pre-fork copy.
Suppose I don’t fork myself, but lock myself in a cage. Does the absence of an uncaged copy matter?