Disclaimer: the identity theory that I actually alieve is the most common intuitionist one, and it’s philosophically inconsistent: I regard teleportation, but not sleeping, as death. This comment, however, is written from a System 2 perspective, which can operate even with concepts that I don’t alieve.
The basic idea behind timeless identity is that “I” can only be meaningfully defined inductively as “an entity that has experience continuity with my current self”. Thus, we can safely replace “I value my life” with “I value the existence of an entity that feels and behaves exactly like me”. That allows us to be OK with quite useful (although hypothetical) things like teleportation, mind uploading, mind backups, etc. It also seems to provide an insight into why it’s OK to make a copy of me on Mars, and immediately destroy Earth!me, but not OK to destroy Earth!me hours later: the experiences of Earth!me and Mars!me would diverge, and each of them would value their own lives.
However, here is the thing: in this case we merely replace the requirement “to have an entity with experience continuity with me” with “to have an entity with experience continuity with me, except for that one hour”. And the two are actually pretty interchangeable. For example, I forget most of my dreams, which means I’m nearly guaranteed to forget several hours of experience every day, and I’m OK with that. One might say that the value of genuine experiences exceeds that of hallucinations, but I would still be pretty OK with taking a suppressor of RNA synthesis that would temporarily give me anterograde amnesia, and then doing something that I don’t really care about remembering—cleaning the house or something. Heck, even retroactively erasing my most cherished memories, although extremely frustrating, would still not be nearly as bad as death.
That implies that if there are multiple copies of me, the badness of killing any of them is no more than the increase in the likelihood of all of them being destroyed (which is not a lot, unless there’s an Armageddon happening around) plus the value of the memories formed since the last replication. Also, every individual copy should consider (and alieve) being killed to be no worse than forgetting what happened since the last replication, which also sounds not nearly as horrible as death. That also implies that simulating time travel by discarding time branches is a pretty OK thing to do, unless the universes diverge strongly enough to create uniquely valuable memories.
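Roughly, the decomposition I have in mind looks like this (my own informal notation, just a sketch of the claim above, with the probability weighted by how much I value my life so the units match):

$$\text{Badness}(\text{kill copy } i) \;\le\; \Delta P(\text{all copies destroyed}) \cdot V(\text{my life}) \;+\; V(\text{memories formed by copy } i \text{ since the last replication})$$

Is that correct or am I missing something?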
Depends on how you feel about anthropically selfish preferences, and about altruistic preferences that try to satisfy other people’s selfish preferences. I, for instance, do not think it’s okay to kill a copy of me even if I know I will live on.
In the Earth-Mars teleporter thought experiment, the missing piece is the idea that people care selfishly about their causal descendants (though this phrase is obscuring a lot of unsolved questions about what kind of causation counts). If the teleporter annihilates a person as it scans them, the person who gets annihilated has a direct causal descendant on the other side. If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant—they really are getting killed off in a way that matters more to them than before.
I, for instance, do not think it’s okay to kill a copy of me even if I know I will live on
Not OK in what sense—as in morally wrong to kill sapient beings or as terrifying as getting killed? I tend to care more about people who are closer to me, so by induction I will probably care about my copy more than about any other human, but I still alieve the experience of getting killed to be fundamentally different from, and fundamentally more terrifying than, the experience of my copy getting killed.
From the linked post:
The counterargument is also simple, though: Making copies of myself has no causal effect on me. Swearing this oath does not move my body to a tropical paradise. What really happens is that I just sit there in the cold just the same, but then later I make some simulations where I lie to myself.
If I understand correctly, the argument of timeless identity is that your copy is you in every meaningful sense, and therefore prioritizing one copy (the original) over the others isn’t just wrong but meaningless: it cannot even be defined very well. I’m totally not buying that on a gut level, but at the same time I don’t see any strong logical arguments against it, even if I operate with 100% selfish, 0% altruistic ethics.
When there is a decision your original body can make that creates a bunch of copies, and the copies are also faced with this decision, your decision lets you control whether you are the original or a copy.
I don’t quite get this part—can you elaborate?
If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant—they really are getting killed off in a way that matters more to them than before
What about the thought experiment with erasing memories, though? It doesn’t physically violate causality, but from the experiential perspective it does—suddenly the person loses a chunk of their experience, and they’re basically replaced with an earlier version of themselves, even though the universe has moved on. This experience may not be very pleasant, but it doesn’t seem to be nearly as bad as getting cake and death in the Earth-Mars experiment. Yet it’s hard to distinguish the two on the logical level.
Not OK in what sense—as in morally wrong to kill sapient beings or as terrifying as getting killed?
The first one—they’re just a close relative :)
I don’t quite get this part—can you elaborate?
TDT says to treat the world as a causal diagram that takes your decision algorithm as input and outputs (among other things) whether you’re a copy (at least, iff your decision changes how many copies of you there are). So you should literally evaluate the choices as if your action controlled whether or not you are a copy.
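Here is a toy numerical sketch of that framing (entirely my own construction with made-up utilities, not anything from the TDT literature): pressing a button creates some exact copies of you, every copy subjectively faces the same choice, and the anthropic odds of being the original depend on what the shared decision algorithm outputs.

```python
# Toy sketch: evaluate a choice "as if your action controlled whether you are
# a copy". The scenario and all numbers are invented purely for illustration.

def expected_utility(press_button: bool,
                     n_copies: int = 9,
                     u_original: float = 1.0,
                     u_copy: float = 0.2) -> float:
    """Expected utility from the perspective of the decision algorithm,
    which runs identically in the original and in every copy."""
    if press_button:
        # One original plus n_copies instances all run this same algorithm,
        # so "I" am the original with probability 1 / (n_copies + 1).
        p_original = 1.0 / (n_copies + 1)
    else:
        # No copies ever get made, so "I" am certainly the original.
        p_original = 1.0
    return p_original * u_original + (1.0 - p_original) * u_copy

if __name__ == "__main__":
    for choice in (False, True):
        print(f"press_button={choice}: EU = {expected_utility(choice):.2f}")
```

With these made-up numbers, refusing to press comes out ahead (1.00 vs. 0.28) precisely because pressing makes it overwhelmingly likely that the instance doing the deciding is one of the copies.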
As to erasing memories—yeah, I’m not sure either, but I’m leaning towards it being somewhere between “almost a causal descendant” and “about as bad as being killed and a copy from earlier being saved.”
OK, I’ll have to read deeper into TDT to understand why that happens; currently it seems counterintuitive as heck.