Can you elaborate on the concept of a connection through “moment-to-moment identity”? Would for example “mind uploading” break such a thing?
Heh, that was really just me trying to come up with a justification for shoehorning a theory of identity into a graph formalism so that König's Lemma applied :-)
If I were to try to make a more serious argument it would go something like this.
Defining identity, i.e. whether two entities are 'the same person', is hard. People have different intuitions. But most people would say that 'your mind now' and 'your mind a few moments later' do constitute the same person. So we can define a directed graph with vertices as mind states ('mind states' would probably have been a better term than 'observer moments') and outgoing edges leading to mind states a few moments later.
That is kind of what I meant by "moment-by-moment" identity. By itself it is a local but not global definition of identity. The transitive closure of that relation gives you a global definition of identity. I haven't thought about whether it's a good one.
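To make the formalism concrete, here's a minimal sketch in Python. The state labels ("me@t0" etc.) are made up for illustration, and `reachable` computes only the forward transitive closure of the moment-by-moment relation (a fully symmetric closure would also walk edges backwards); none of this is meant as more than a toy model of the graph described above.

```python
from collections import defaultdict, deque

# Directed identity graph: vertices are mind states, and an edge u -> v
# means v is "the same person a few moments later" (moment-by-moment identity).
succ = defaultdict(set)

def link(u, v):
    """Record that mind state v directly succeeds mind state u."""
    succ[u].add(v)

# A simple chain of mind states, plus a clone creating out-degree 2.
link("me@t0", "me@t1")
link("me@t1", "me@t1a")   # original, placed in environment A
link("me@t1", "me@t1b")   # clone, placed in environment B

def reachable(start):
    """Forward transitive closure: every mind state globally identified
    with `start` under the moment-by-moment relation, found by BFS."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen
```

Here `reachable("me@t0")` returns all four states, and the cloning step gives the vertex "me@t1" an out-degree of 2, matching the splitting case discussed below.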
In the ordinary course of events these graphs aren't very interesting; they're just chains coming to a halt upon death. But if you were to clone a mind-state and put it into two different environments, that would give you a vertex with out-degree greater than one.
So mind-uploading would not break such a thing, and in fact without being able to clone a mind-state, the whole graph-based model is not very interesting.
Also, you could have two mind states that lead to the same successor mind state—for example where two different mind states only differ on a few memories, which are then forgotten. The possibility of splitting and merging gives you a general (directed) graph structured identity.
(On a side-note, I think generally people treat splitting and merging of mind states in a way that is way too symmetrical. Splitting seems far easier—trivial once you can digitize a mind-state. Merging would be like a complex software version control problem, and you'd need to apply selective amnesia very carefully to achieve it.)
So, if we say "immortality" is having an identity graph with an infinite number of mind states, all connected through the "moment-by-moment identity" relation (stay with me here), and each mind state has only a finite number of successor states, then by König's Lemma there must be at least one infinite path, and therefore "eternal existence in linear time".
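We obviously can't build an infinite graph in code, but there's a finite stand-in (my analogy, not part of the König's Lemma argument itself): in a finitely presented graph, an infinite walk from a start vertex exists exactly when some cycle is reachable from it, which a DFS can detect via a back edge.

```python
def has_infinite_path(succ, start):
    """Return True iff an infinite walk from `start` exists in a finitely
    presented graph, i.e. iff DFS from `start` finds a back edge
    (a reachable cycle)."""
    WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on current path / finished
    color = {}

    def dfs(u):
        color[u] = GREY
        for v in succ.get(u, ()):
            if color.get(v, WHITE) == GREY:
                return True                      # back edge: reachable cycle
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return dfs(start)

chain = {"t0": ["t1"], "t1": ["t2"], "t2": []}        # halts: the mortal case
loop  = {"t0": ["t1"], "t1": ["t2"], "t2": ["t1"]}    # reachable cycle
```

A cycle here would mean literally revisiting an identical mind state, which is a much stronger (and stranger) condition than the infinite path of distinct states the lemma actually gives you; the sketch is only meant to make "at least one infinite path" tangible.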
Rather contrived, I know.
So, the graph model of identity sort of works, but I feel it doesn't quite get to the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked. Because I don't think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn't meld with intuition. For example, a person's brain is a complex machine, and it is being modified all the time as one learns new information, has new experiences, takes new substances, and so on. But imagine it were (using some extremely advanced technology) modified dramatically while the person was still conscious: so much so that, over the course of a few minutes, a person who once had the personality and memories of, say, you ended up with the rough personality and memories of Barack Obama. Could it really be said that it's still the same identity?
Why is an uploaded mind necessarily linked by an edge to the original mind? If the uploaded mind is less than perfect (and it probably will be, even if only by one neuron, one bit, one atom) and you can still link it by an edge to the original mind, what's to say you couldn't also link a very, very dodgy 'clone' mind, for example the mind of a completely different human, by an edge to the original mind/vertex?
Some other notes: firstly, an exact clone of a mind is the same mind. This pretty much makes sense, and it lets you get around issues like 'if I clone your mind but then torture the clone, do you feel it?' Well, if you've modified the state of the cloned mind by torturing it, it can no longer be said to be the same mind, and we would both presumably agree that my cloning your mind in a faraway world and then torturing the clone does not make you experience anything.