The person that anticipates surviving just walks through and comes away $1000 richer.
No; a person walks out, who has the memories of the person who walked in, plus the memories of winning ten duels to the death against a copy of themselves. But they don’t have the memories of being killed by a copy of themselves, even though there were ten persons who experienced just that.
But if we delete our memory of what just happened when exiting the black boxes, and the boxes themselves, then the resulting universes would be indistinguishable!
If an alien civilization on the other side of the galaxy gets completely destroyed by a supernova, but humans never know about it, does that mean that nothing bad happened?
Treat it as just a black box. Person comes in, person comes out, atoms are indistinguishable, they’re $1000 richer.
I know that’s your idea, I’m saying it’s stupid. If I torture you every night and wipe your memory before morning, are you just indifferent to that? I could add this to the torture: “I asked your daylight self after the mindwipe if it would be wrong to do what I’m doing to you, and he said no, because by black-box reasoning torturing you now doesn’t matter, so long as I erase the effects by morning.”
ETA: Maybe it’s harsh to call it stupid when your original scenario wasn’t about deliberately ignoring torture inside the black box. It was just an innocent exercise in being copied and then having one of you deleted.
But you cannot presume that the person who anticipates surviving with certainty is correct, just because a copy of them certainly survives to get the bigger payoff. Your argument is: hey cryonics skeptic, here we see someone with a decision procedure which identifies the original with its copies, and it gets the bigger payoff; so judged by the criterion of results obtained (“winning”) this is the superior attitude, therefore the more rational attitude, and so your objection to cryonics is irrational.
However, this argument begs the question of whether the copy is the same person as the original. A decision procedure would normally be regarded as defective if it favors an outcome because of mistaken identity—because person X gets the big payoff, and it incorrectly identifies X with the intended beneficiary of the decision-making. And here I might instead reason as follows: that poor fool who volunteers for iterated Russian roulette has been fooled by the setup into thinking that he gets to experience the payoff, just because a copy of him does.
As I recently wrote here, there is a “local self”, the “current instance” of you, and then there may be a larger “extended self” made of multiple instances with which your current instance identifies. In effect, you are asking people to adopt a particular expansive identity theory—you want them to regard their copies as themselves—because it means bigger payoffs for them in your thought-experiment. But the argument is circular. For someone with a narrow identity theory (“I am only my current instance”), to run the gauntlet of iterated Russian roulette really is to make a mistake.
The scenario where we torture you and then mindwipe you is not an outright rebuttal of an expansive attitude to one’s own personal identity, but it does show that the black-box argument is bogus.
And your edit leaves you with an interesting conundrum.
It can put you in a situation where you see people around you adopting one of two strategies: the people who adopt one strategy consistently win, the people who adopt the other consistently lose, and yet you still refuse to adopt the winning strategy because you think the people who win are... wrong.
I’m not sure if you can call that a win.
“Win” by what standards? If I think it is ontologically and factually incorrect—an intellectual mistake—to identify with your copies, then those who do aren’t winning, any more than individual lemmings win when they dive off a cliff. If I am happy to regard a person’s attitude to their copies as a matter of choice, then I may regard their choices as correct for them and my choices as correct for me.
Robin Hanson predicts a Malthusian galactic destiny, in which the posthuman intelligences of the far future are all poorer than human individuals of the present, because selection will favor value systems which are pro-replication. His readers often freak out over Robin’s apparent approval of this scenario of crowded galactic poverty; he approves because, he says, these far-future beings will be emotionally adapted to their world: they will want things to be that way.
So this is a similar story. I am under no obligation to adopt an expansive personal identity theory, even if that is a theory whose spread is favored by the conditions of uploaded life. That is merely a statement about how a particular philosophical meme prospers under new conditions, and about the implications of that for posthuman demographics; it is not a fact which would compel me to support the new regime out of self-interest, precisely because I do not already regard my copies as me, and I therefore do not regard their winnings as mine.
“Winning” by the standard that a person who thinks gaining $1k is worth creating 1023 doomed copies of themselves will, in this situation, come out $1k ahead.
The thing is, I’m genuinely not sure if it matters. To restate what you’re doing another way, “If I make a copy of you every night and suspend it until morning, and also there’s a you that gets tortured but it never causally affects anything else”—I think if you’re vulnerable to coercion via that, you’d also have to be vulnerable to “a thousand tortured copies in a box” style arguments.
You may have missed the long addition I just made to my comment, which avoids the torture issue… however, being vulnerable to “a thousand tortured copies in a box” is not necessarily a bad thing! Just because viewing outcome A as bad renders you vulnerable to blackmail by the threat of A, doesn’t automatically mean that you should change your attitude to A. Otherwise, why not just accept death and the natural lifespan, rather than bother with expensive attempts to live, like cryonics? If you care about dying, you end up spending all this time and energy trying to stay alive, when you could just be enjoying life; so why not change your value system and save yourself the trouble of unnatural life extension… I hope you see the analogy.
I can’t say I do. Death doesn’t care what I think. Other actors may care how you perceive things. Ironically, if you want to minimize torture being used for coercion, it may be most effective to ignore it. Like not negotiating with terrorists.
On the one hand you’re saying it’s good to identify with your copies, because then you can play iterated Russian roulette and win. On the other hand, you’re saying it’s bad to identify with your copies, to the extent of caring whether someone tortures them. Presumably you don’t want to be tortured, and your copies don’t want to be tortured, and your copies are you, but you don’t care whether they are tortured… congratulations, I think you’ve invented strategic identity hypocrisy for uploads!
I think the issue of causal interpolation comes up. From where I’m standing right now, the tortured copies never become important in my future; what I’m doing with the boxes is sort of smoothing out the becoming-important-ness, so that even if I turn out to be a losing copy, I will identify with the winning copy, since they’re what dominates the future. Call it mangled-priorities. You could effectively threaten me by releasing the tortured copies into my future-coexistence, at which point it might be the most practical solution for my tortured copies to choose suicide, since they wouldn’t want their broken existence to dominate the future of the set-of-copies-that-are-me-and-causally-interacting. How the situation would evolve if the tortured copies never interacted again—I don’t know. I’d need to ask a superintelligence what ought to determine anticipation of subjective existence.
[edit] Honestly, what I’m really doing is trying to precommit to the stance that maximizes my future effectiveness.
Nah, I care about the copies that can interact with me in the future.
[edit] No that doesn’t work. Rethinking.
If a tree falls in the forest, and no one is around, does it make a sound?
But someone was around to see it happen—everyone in the destroyed civilization.