Sure.
Specifying my position more precisely will take a fair number of words, but OK, here goes.
There are three entities under discussion here:
A = Dave at T1, sitting down in the copier.
B = Dave at T2, standing up from the copier.
C = Copy-of-Dave at T2, standing up from the copier.
...and the question at hand is which of these entities, if any, is me. (Yes? Or is that a different question than the one you are interested in?)
Well, OK. Let’s start with A… why do I believe A is me? Well, I don’t, really. I mean, I have never sat down at an identity-copying machine. But I’m positing that A is me in this thought experiment, and asking what follows from that.
Now, consider B… why do I believe B is me? Well, in part because I expect B and A to be very similar, even if not quite identical.
But is that a fair assumption in this thought experiment?
It might be that the experience of knowing C exists would cause profound alterations in my psyche, such that B believes (based on his memories of being A) that A was a very different person, and A would agree if he were somehow granted knowledge of what it was like to be B. I’m told having a child sometimes creates these kinds of profound changes in self-image, and it would not surprise me too much if having a duplicate sometimes did the same thing.
More mundanely, it might be that the experience of being scanned for copying causes alterations in my mind, brain, or body such that B isn’t me even if A is.
Heck, it’s possible that I’m not really the same person I was before my stroke… there are certainly differences. It’s even more possible that I’m not really the person I was at age 2… I have less in common with that entity than I do with you.
Thinking about it, it seems that there’s a complex cluster of features that I treat as evidence of identity being preserved from one moment to another, none of which is either sufficient or necessary in isolation. Sharing memories is one such feature. Being in the same location is another. Having the same macroscopic physical composition (e.g. DNA) is a third. Having the same personality is a fourth. (Many of these are themselves complex clusters of neither-necessary-nor-sufficient features.)
For convenience, I will label the comparison operation that relies on that set of features to judge similarity F(x,y). That is, what F(A,B) denotes is comparing A and B, determining how closely they match along the various referenced dimensions, weighting the results based on how important each dimension is and how closely it matches, comparing those weighted results to various thresholds, and ultimately coming out at the other end with a “family resemblance” judgment: A and B are either hashed into the same bucket, or they aren’t.
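To make the sort of aggregation I have in mind concrete, here is a minimal sketch in Python. The feature names, weights, and threshold are all invented for illustration; nothing here claims to be the actual cognitive machinery, just the shape of the computation.

```python
# Purely illustrative sketch of F(x, y). Feature names, weights, and the
# threshold are made up for the example; they are not a claim about how the
# real psychological judgment is implemented.

FEATURE_WEIGHTS = {
    "shared_memories": 0.4,
    "location_continuity": 0.1,
    "physical_composition": 0.2,
    "personality": 0.3,
}

SAME_BUCKET_THRESHOLD = 0.6  # arbitrary cutoff for the "family resemblance" judgment


def F(x, y, weights=FEATURE_WEIGHTS, threshold=SAME_BUCKET_THRESHOLD):
    """Return True if x and y get sorted into the same identity bucket.

    x and y map feature names to values in [0, 1]; the per-feature match is
    just 1 minus the absolute difference, weighted and summed.
    """
    score = sum(
        w * (1.0 - abs(x[feature] - y[feature]))
        for feature, w in weights.items()
    )
    return score >= threshold


# Example: A (Dave at T1) vs. B (Dave at T2, standing up from the copier).
A = {"shared_memories": 1.0, "location_continuity": 1.0,
     "physical_composition": 1.0, "personality": 1.0}
B = {"shared_memories": 0.95, "location_continuity": 1.0,
     "physical_composition": 0.99, "personality": 0.9}

print(F(A, B))  # True: same bucket, so "B is me"
```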
So, OK. B gets up from the machine, and I expect that while B may be quite different from A, F(B,A) will still sort them both into the same bucket. On that basis, I conclude that B is me, and I therefore expect that I will get up from the machine.
If instead I assume that F(B,A) sorts them into different buckets, then the possibility that I don’t get up from that machine starts to seem reasonable… B gets up, but B isn’t me. I just don’t expect that to happen, because I have lots of experiences of sitting down and getting up from chairs.
But of course those experiences aren’t probative. Sure, my memories of the person who sat down at my desk this morning match my sense of who I am right now, but that doesn’t preclude the possibility that those memories are different from what they were before I sat down, and I just don’t remember how I was then. Heck, I might be a Boltzmann brain.
I can’t disprove any of those ideas, but neither is there any evidence supporting them; there’s no reason for those hypotheses to be promoted for consideration in the first place. Ultimately, I believe that I’m the same person I was this morning because it’s simplest to assume so; and I believe that if I wake up tomorrow I’ll be the same person then as well for the same reason. If someone wants me to seriously consider the possibility that these assumptions are false, it’s up to them to provide evidence of it.
Now let’s consider C. Up to a point, C is basically in the same case as B: C gets up from the machine, and I expect that while C may be quite different from A, F(C,A) will still sort them both into the same bucket. As with B, on that basis I expect that I will get up from the machine (a second time).
If instead I assume that F(C,A) sorts them into different buckets, the possibility that I don’t get up from that machine a second time starts to seem reasonable… C gets up, but C isn’t me.
So, sure. If the duplication process is poor enough that evaluating the key cluster of properties for C gives radically different results than for A, then I conclude that A and C aren’t the same person. If A is me, then I sit down at the machine but I don’t get up from it.
And, yes, my expectations about the reliability of the duplication process govern things like how I split my wealth, etc.
None of this strikes me as particularly confusing or controversial, though working out exactly what F() comprises is an interesting cognitive science problem.
Oh, and just to be clear, since you brought up quantum-identity: quantum-identity is irrelevant here. If it turns out that my quantum identity has not been preserved over the last 42 years of my existence, that doesn’t noticeably alter my confidence that I’ve been me during that time.
I’m a bit embarrassed to have made you write all that out in long form, because it doesn’t really answer my question: all the complexity is hidden in the F function, which we don’t know.
You suggest F is to be empirically derived by (in the future) observing other people in the same situations. That’s a good strategy for dealing with other people, but should I update towards having the same F as everyone else? As Eliezer said, I’m not perfectly convinced, and I don’t feel perfectly safe, because I don’t understand the problem that is purportedly being solved, even though I seem to understand the solution.
Given that the cognitive mechanism for computing that two perceptions are of the same concept is a complex evolved system, I find it about as likely that your mechanism for doing so is significantly different from mine as that you digest food in a significantly different way, or that you use a different fundamental principle for extracting information about your surroundings from the light that strikes your body.
But, OK, let’s suppose for the sake of the argument that it’s true… I have F1(), and you have F2(), and as a consequence one of us might have two experiences E1 and E2 and compute the existence of two agents A1 and A2, while the other has analogous experiences but computes the existence of only one agent A1.
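Here is a toy sketch of what that divergence could look like, with made-up features and made-up judges (F1 weighs only shared memories; F2 also weighs bodily continuity). It is purely illustrative, not a claim about anyone’s actual F.

```python
# Illustrative only: two hypothetical judges, F1 and F2, run on the same pair
# of post-copier experiences and disagree about how many agents there are.

# Minimal made-up descriptions of the two "getting up" experiences.
E1 = {"memories_of_A": True, "same_body_as_A": True}   # B standing up
E2 = {"memories_of_A": True, "same_body_as_A": False}  # C standing up


def F1(x, y):
    # Judge 1: shared memories are all that matters.
    return x["memories_of_A"] == y["memories_of_A"]


def F2(x, y):
    # Judge 2: bodily continuity matters too.
    return (x["memories_of_A"] == y["memories_of_A"]
            and x["same_body_as_A"] == y["same_body_as_A"])


def count_agents(experiences, same_person):
    """Greedily group experiences into agents according to same_person."""
    agents = []
    for e in experiences:
        for group in agents:
            if same_person(group[0], e):
                group.append(e)
                break
        else:
            agents.append([e])
    return len(agents)


print(count_agents([E1, E2], F1))  # 1: one agent, A1, had both experiences
print(count_agents([E1, E2], F2))  # 2: two agents, A1 and A2
```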
So, OK, we disagree about whether A1 has had both experiences. For example, we disagree about whether I have gotten up from the copier twice, vs. I have gotten up from the copier once and someone else who remembers being me and is similar to me in some ways but isn’t actually me got up from the copier once.
So what? Why is it important that we agree?
What might underlie such a concern is the idea that there really is some fact of the matter as to whether I got up once, or twice, or not at all, over and above the specification of what entities got up and what their properties are, in which case one (or both) of us might be wrong, and we don’t want to be wrong. Is that the issue here?
I wasn’t thinking of F like that, but rather as a behavior or value that we can influence by choosing. In that sense, I spoke of ‘updating’ my F (the way I’d update a belief or change a behavior).
Your model is that F is similar across humans because it’s a mostly hardcoded, complex, shared pattern-recognition mechanism. I think that description is true, but for people who don’t grow up used to cloning, uploading, or teleporting, and who instead first encounter them as adults and have to adjust their F to handle the new situation, initial reactions will be more varied than that model suggests.
Some will take every clone, even to different substrates, to be the same as the original for all practical purposes. Others may refuse to acknowledge specific kinds of cloning as people (rejecting patternism), or attach special value to the original, or have doubts about cloning themselves.
“What might underlie such a concern is the idea that there really is some fact of the matter”
Yes. I fear that there may be, because I do not fully understand the matter of consciousness and expectations of personal experience.
The only nearly (but still not entirely) full and consistent explanation of it that I know of is the one that rejects the continuity of conscious experience over time, and says each moment is experienced separately (each by a different experiencer, or all moments in the universe by the same experiencer, it makes no difference); it’s just that every experienced moment comes with memories that create the illusion of being connected to the previous moment of that mind-pattern.
This completely discards the notion of personal identity. I know some people believe in this, but I don’t, and don’t really want to if there’s a way to escape this repugnant conclusion without going against the truth.
So as long as there’s an open question, it’s a very important one. I want to be very sure of what I’m doing before I let myself be cloned.
Sure, if we’re concerned that I have individual consciousness which arises in some way we don’t understand, such that I might conclude that C is me on the basis of various observable facts when in reality C lacks that essential me-consciousness (either because C possesses someone-else-consciousness, or because C possesses no consciousness at all and is instead a p-zombie, or for some other reason), then I can understand being very concerned about the possibility that C might get treated as though it were me when it really isn’t.
I am not in fact concerned about that, but I agree that if you are concerned about it, none of what I’m saying legitimately addresses that concern. (As far as I can tell, neither can anything else, but that’s a different question.)
Of course, similar issues arise when trying to address the concern that five minutes from now my consciousness might mysteriously be replaced by someone-else-consciousness, or might simply expire or move elsewhere, leaving me a p-zombie. Or the concern that this happened five minutes ago and I didn’t notice.
If you told me that as long as that remained an open question it was important, and you wanted to be very sure about it before you let your body (or mine!) live another five minutes, I’d be very concerned on a practical level.
As it stands, since there isn’t actually a cloning machine available for you to refuse the use of, it doesn’t really matter for practical purposes.
“This completely discards the notion of personal identity.”
This strikes me as a strange thing to say, given what you’ve said elsewhere about accepting that your personal identity—the referent for “I”—is a collection of agents that is neither coherent nor unique nor consistent. For my own part I agree with what you said there, which suggests that a notion of personal identity can be preserved even if my brain doesn’t turn out to house a single unique coherent consciousness, and I disagree with what you say here, which suggests that it can’t.
“neither can anything else, but that’s a different question”
Fully answering or dissolving the question of why there is subjective experience and qualia at all would, I think, address my concerns. It would also help if I could either construct a notion of identity through time which somehow tied into subjective experience, or if it were conclusively proven (by logical argument, presumably) that such a notion can’t exist and that the “illusion of memory” is all there is.
“For my own part I agree with what you said there, which suggests that a notion of personal identity can be preserved even if my brain doesn’t turn out to house a single unique coherent consciousness, and I disagree with what you say here, which suggests that it can’t.”
As I said, I don’t personally endorse this view (which rejects personal identity). I don’t endorse it mostly because it is to me a repugnant conclusion. But I don’t know of a good model that predicts subjective experience meaningfully and doesn’t conflict with anything else. So I mentioned that model, for completeness.
FWIW, I reject the conclusion that the “illusion of memory” is all there is to our judgment of preserved identity, as it doesn’t seem to fit my observations. We don’t suddenly perceive Sam as no longer being Sam when he loses his memory (although equally clearly memory is a factor). As I said originally, it seems clear to me that there are a lot of factors like this, and we perform some aggregating computation across all of them to make a judgment about whether two experiences are of the same thing.
What I do say is that our judgment of preserved identity, which is a computation (what I labelled F(x,y) above) that takes a number of factors into account, is all there is… there is no mysterious essence of personal identity that must be captured over and above the factors that contribute to that computation.
As for what factors those are, that’s a question for cognitive science, which is making progress in answering it. Physical similarity is clearly relevant, although we clearly accept identity being preserved across changes in appearance… indeed, we can be induced to do so in situations where very small variations would prevent that acceptance, as with color phi. Gradualness of change is clearly relevant, though again not absolute. Similarity of behavior at some level of description is relevant, although there are multiple levels available and it’s possible for judgments to conflict here. Etc.
Various things can happen that cause individual judgments to differ. My mom might get Alzheimer’s and no longer recognize me as the same person she gave birth to, while I continue to identify myself that way. I might get amnesia and no longer recognize myself as the same person my mom gave birth to, while she continues to identify herself that way. Someone else might have a psychotic break and begin to identify themselves as Dave, while neither I nor my mom do. Etc. When that happens, we sometimes allow the judgments of others to substitute for our own judgments (e.g., “Well, I don’t remember being this Dave person and I don’t really feel like I am, but you all say that I am and I’ll accept that.”) to varying degrees.
I was midway through writing a response, and I had to explain the “illusion of memory” and why it matters. And then I thought about it. And I think I dissolved the confusion I had about it. I now realize it’s true but adds up to normality and therefore doesn’t lead to a repugnant conclusion.
I think you may have misunderstood what the “illusion” is. It’s not about recognizing others. It’s about recognizing oneself: specifically, self-identifying as an entity that exists over time (although it changes gradually over time). I self-identify like that, and so do most other people.
The “illusion”—which was a poor name because there is no real illusion once properly understood—is: on the level of physics there is no tag that stays attached to my self (body or whatever) during its evolution through time. All that physically exists is a succession of time-instants in each of which there is an instance of myself. But why do I connect that set of instances together rather than some other set? The proximate reason is not that it is a set of similar instances, because I am not some mind that dwells outside time and can compare instances for similarity. The proximate reason is that each instant-self has memories of being all the previous selves. If it had different memories, it would identify differently. (“Memories” take time to be “read” in the brain, so I guess this includes the current brain “state” beyond memories. I am using a computer simile here; I am not aware of how the brain really works on this level.)
So memory, which exists in each instant of time, creates an “illusion” of a self that moves through time instead of an infinite sequence of logically-unconnected instances. And the repugnant conclusion (I thought) was that there really was no self beyond the instant, and therefore things that I valued which were not located strictly in the present were not in some sense “mine”; I could as well value having been happy yesterday as someone else having been happy yesterday, because all that was left of it today was memories. In particular, reality could have no value beyond that which false memories could provide, including e.g. false knowledge.
However, now I am able to see that this does in fact add up to normality. Not just that it must do so (like all things) but the way it actually does so. Just as I have extension in space, I have extension in time. Neither of these things makes me an ontologically fundamental entity, but that doesn’t prevent me from thinking of myself as an entity, a self, and being happy with that. Nature is not mysterious.
Unfortunately, I still feel some mystery and lack of understanding regarding the nature of conscious experience. But given that it exists, I have now updated towards “patternism”. I will take challenges like the Big Universe more seriously, and I would more readily agree to be uploaded or cloned than I would have this morning.
Thank you for having this drawn-out conversation with me so I could come to these conclusions!
You’re welcome.