Depends on how you feel about anthropically selfish preferences, and altruistic preferences that try to satisfy other people's selfish preferences. I, for instance, do not think it's okay to kill a copy of me even if I know I will live on.
In the earth-mars teleporter thought experiment, the missing piece is the idea that people care selfishly about their causal descendants (though this phrase is obscuring a lot of unsolved questions about what kind of causation counts). If the teleporter annihilates a person as it scans them, the person who gets annihilated has a direct causal descendant on the other side. If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant—they really are getting killed off in a way that matters more to them than before.
I, for instance, do not think it’s okay to kill a copy of me even if I know I will live on
Not OK in what sense—as in morally wrong to kill sapient beings or as terrifying as getting killed? I tend to care more about people who are closer to me, so by induction I will probably care about my copy more than any other human, but I still alieve the experience of getting killed to be fundamentally different and fundamentally more terrifying than the experience of my copy getting killed.
From the linked post:
The counterargument is also simple, though: Making copies of myself has no causal effect on me. Swearing this oath does not move my body to a tropical paradise. What really happens is that I just sit there in the cold just the same, but then later I make some simulations where I lie to myself.
If I understand correctly, the argument of timeless identity is that your copy is you in every meaningful sense, and therefore prioritizing one copy (the original) over the others isn't just wrong but meaningless, and cannot even be defined very well. I'm totally not buying that on a gut level, but at the same time I don't see any strong logical arguments against it, even if I operate with 100% selfish, 0% altruistic ethics.
When there is a decision your original body can make that creates a bunch of copies, and the copies are also faced with this decision, your decision lets you control whether you are the original or a copy.
I don’t quite get this part—can you elaborate?
If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant—they really are getting killed off in a way that matters more to them than before
What about the thought experiment with erasing memories, though? It doesn't physically violate causality, but from the experience perspective it does—suddenly the person loses a chunk of their experience, and they're basically replaced with an earlier version of themselves, even though the universe has moved on. This experience may not be very pleasant, but it doesn't seem to be nearly as bad as getting cake and death in the Earth-Mars experiment. Yet it's hard to distinguish them on the logical level.
Not OK in what sense—as in morally wrong to kill sapient beings or as terrifying as getting killed?
The first one—they’re just a close relative :)
I don’t quite get this part—can you elaborate?
TDT says to treat the world as a causal diagram that has as its input your decision algorithm, and outputs (among other things) whether you’re a copy (at least, iff your decision changes how many copies of you there are). So you should literally evaluate the choices as if your action controlled whether or not you are a copy.
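To make the "your decision controls whether you are a copy" step concrete, here is a minimal toy calculation (my own illustration, not from the post), using the oath/tropical-paradise scenario quoted above and made-up utility numbers:

```python
# Toy sketch of the TDT-style evaluation described above.
# All numbers are assumptions chosen only for illustration.
#
# Setup: the original can swear an oath to later create N simulated copies of
# this moment in a tropical paradise. Every instance (original or copy) runs
# the same decision algorithm, so the algorithm's output also determines how
# many instances exist -- and therefore the odds that "I" am a copy.

N_COPIES = 100        # assumed number of simulations created if you swear
U_COLD = 0.0          # assumed utility of sitting in the cold (original)
U_PARADISE = 10.0     # assumed utility of waking up in the paradise (copy)

def expected_utility(swear_oath: bool) -> float:
    """Evaluate a choice as if the decision algorithm's output controlled
    whether the current instance is the original or one of the copies."""
    if swear_oath:
        instances = 1 + N_COPIES            # original plus copies
        p_copy = N_COPIES / instances       # chance that "I" am a copy
        return p_copy * U_PARADISE + (1 - p_copy) * U_COLD
    # If you don't swear, only the original exists, sitting in the cold.
    return U_COLD

for choice in (True, False):
    print(f"swear_oath={choice}: EU = {expected_utility(choice):.2f}")
```

Under this toy model, swearing the oath comes out ahead precisely because the calculation treats the decision algorithm, rather than any particular body, as the thing being chosen over—which is exactly what the quoted counterargument objects to.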
As to erasing memories—yeah, I'm not sure either, but I'm leaning towards it being somewhere between "almost a causal descendant" and "about as bad as being killed and a copy from earlier being saved."
OK, I’ll have to read deeper into TDT to understand why that happens, currently that seems counterintuitive as heck.