My strong suspicion is that there isn’t anything in the territory that could make either of these true or false: that it’s a mental illusion, a belief that points to nothing fundamental and is just a byproduct of a model with an extraneous parameter.
I’m not sure that’s coherent. If we perceive it, there must be an entity or process in the territory corresponding to it, which in turn makes it true (or false, if the referent is missing: it seems I could be mistaken about whether I’m a continuation of a particular person, for example if I wake up tomorrow believing that I’m you).
There are some objective means by which we can test properties associated with “X is a continuation of Y”, such as finding out whether X can answer questions consistently with having been Y. Those aspects are in the territory, but I don’t think they usefully capture anybody’s concept of personal identity.
With strong enough models of the mind we may even be able to objectively predict whether X considers themself to be a continuation of Y. This still does not mean that personal identity is in the territory, any more than the blegg/rube distinction is in the territory in https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside.
If you can imagine doing every possible test and still not knowing the answer, that seems likely to be a sign that the question is referring to a distinction that is only in a model and isn’t in the territory at all.
That seems to be the case here. Most of humanity has a model of personal identity that works well for their ordinary experience: for each person X_now and each time t > now, there is exactly one person X_t such that X_t is a continuation of X_now; and in the other direction, for each t with birth/conception < t < now, there is exactly one person X_t such that X_now is a continuation of X_t. Many even extrapolate it past bodily destruction.
This relation has nice properties such as its symmetric extension (X continues Y or Y continues X) being an equivalence relation that partitions the world of people into distinct equivalence classes.
It is a less useful model in the presence of circumstances that are probably physically possible but currently far-fetched, such as minds that can be copied, edited, overwritten, or merged. If a question based on such a restricted model is applied to the wider space of possibilities, it isn’t likely to have any answer.
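The partition claim, and how copying breaks it, can be made concrete with a small sketch (the person-stage names like `alice_t0` are hypothetical, purely for illustration): treat “X continues Y” as directed edges, take the symmetric-transitive closure with a union-find, and person-stages fall into distinct equivalence classes; adding a single mind-copy then violates the “exactly one continuation” property.

```python
# Illustrative sketch; person-stage names are made up for the example.

class UnionFind:
    """Minimal union-find, giving the symmetric-transitive closure of edges."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

# Ordinary experience: each (later, earlier) pair is unique per person,
# so every stage has exactly one continuation.
continues = [("alice_t1", "alice_t0"), ("alice_t2", "alice_t1"),
             ("bob_t1", "bob_t0")]

uf = UnionFind()
for later, earlier in continues:
    uf.union(later, earlier)

# The closure partitions stages into one equivalence class per "person".
classes = {}
for stage in {s for edge in continues for s in edge}:
    classes.setdefault(uf.find(stage), set()).add(stage)
print(sorted(map(sorted, classes.values())))
# [['alice_t0', 'alice_t1', 'alice_t2'], ['bob_t0', 'bob_t1']]

# With mind-copying, "exactly one continuation" fails: two stages now
# continue alice_t2, and the closure would merge both copies into one
# class, which no longer matches the intuitive notion of a single person.
continues_with_copy = continues + [("alice_copy_A", "alice_t2"),
                                   ("alice_copy_B", "alice_t2")]
successors = {}
for later, earlier in continues_with_copy:
    successors.setdefault(earlier, []).append(later)
print(successors["alice_t2"])  # ['alice_copy_A', 'alice_copy_B']
```

The union-find only models the symmetric extension discussed above; the point of the last few lines is that the functional (one-successor) property, not the equivalence structure, is what copying destroys first.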
I see. For me, if an ontology says that there is no fact of the matter as to whether X is a continuation of Y, that ontology has to be discarded and replaced by another one; but I can see how some people would be unbothered by their next experience being undefined, as long as it doesn’t happen too often.