Given that the cognitive mechanism for computing that two perceptions are of the same concept is a complex evolved system, I find it about as likely that your mechanism for doing so is significantly different from mine as that you digest food in a significantly different way, or that you use a different fundamental principle for extracting information about your surroundings from the light that strikes your body.
But, OK, let’s suppose for the sake of the argument that it’s true… I have F1(), and you have F2(), and as a consequence one of us might have two experiences E1 and E2 and compute the existence of two agents A1 and A2, while the other has analogous experiences but computes the existence of only one agent A1.
So, OK, we disagree about whether A1 has had both experiences. For example, we disagree about whether I have gotten up from the copier twice, vs. I have gotten up from the copier once and someone else who remembers being me and is similar to me in some ways but isn’t actually me got up from the copier once.
So what? Why is it important that we agree?
What might underlie such a concern is the idea that there really is some fact of the matter as to whether I got up once, or twice, or not at all, over and above the specification of what entities got up and what their properties are, in which case one (or both) of us might be wrong, and we don’t want to be wrong. Is that the issue here?
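The F1()/F2() disagreement above can be caricatured as a one-parameter toy model. Everything here (the similarity value, the thresholds) is invented purely for illustration, not a claim about how the actual cognitive mechanism works:

```python
# Toy sketch: two observers apply different similarity thresholds when
# deciding whether two experiences E1 and E2 belong to the same agent.
# All numbers are invented for illustration.

def count_agents(similarity, threshold):
    """Given a pairwise similarity between two experiences,
    return how many distinct agents this F computes."""
    return 1 if similarity >= threshold else 2

similarity_E1_E2 = 0.8  # how alike the two copier-exits seem

F1_threshold = 0.7      # a permissive F: one agent got up twice
F2_threshold = 0.9      # a strict F: two distinct agents got up once each

print(count_agents(similarity_E1_E2, F1_threshold))  # prints 1
print(count_agents(similarity_E1_E2, F2_threshold))  # prints 2
```

The point of the sketch is only that the same observable facts, fed through slightly different Fs, yield different agent counts, with neither output being a "measurement error".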
I wasn’t thinking of F like that, but rather like a behavior or value that we can influence by choosing. In that sense, I spoke of ‘updating’ my F (the way I’d update a belief or change a behavior).
Your model is that F is similar across humans because it’s a mostly hardcoded, complex, shared pattern recognition mechanism. I think that description is true, but for people who don’t grow up accustomed to cloning, uploading, or teleporting (who first encounter these as adults and must adjust their F to handle the new situation), initial reactions will be more varied than that model suggests.
Some will take every clone, even one copied to a different substrate, to be the same as the original for all practical purposes. Others may refuse to acknowledge specific kinds of clones as persons (rejecting patternism), or attach special value to the original, or have doubts about cloning themselves.
What might underlie such a concern is the idea that there really is some fact of the matter
Yes. I fear that there may be, because I do not fully understand the matter of consciousness and expectations of personal experience.
The only nearly (but still not entirely) complete and consistent explanation of it that I know of is the one that rejects the continuity of conscious experience over time and says each moment is experienced separately (each by a different experiencer, or all moments in the universe by the same experiencer; it makes no difference). Every experienced moment simply comes with memories that create the illusion of being connected to the previous moment of that mind-pattern.
This completely discards the notion of personal identity. I know some people believe in this, but I don’t, and don’t really want to if there’s a way to escape this repugnant conclusion without going against the truth.
So as long as there’s an open question, it’s a very important one. I want to be very sure of what I’m doing before I let myself be cloned.
Sure, if we’re concerned that I have individual consciousness which arises in some way we don’t understand, such that I might conclude that C is me on the basis of various observable facts when in reality C lacks that essential me-consciousness (either because C possesses someone-else-consciousness, or because C possesses no consciousness at all and is instead a p-zombie, or for some other reason), then I can understand being very concerned about the possibility that C might get treated as though it were me when it really isn’t.
I am not in fact concerned about that, but I agree that if you are concerned about it, none of what I’m saying legitimately addresses that concern. (As far as I can tell, neither can anything else, but that’s a different question.)
Of course, similar issues arise when trying to address the concern that five minutes from now my consciousness might mysteriously be replaced by someone-else-consciousness, or might simply expire or move elsewhere, leaving me a p-zombie. Or the concern that this happened five minutes ago and I didn’t notice.
If you told me that as long as that remained an open question it was important, and you wanted to be very sure about it before you let your body (or mine!) live another five minutes, I’d be very concerned on a practical level.
As it stands, since there isn’t actually a cloning machine available for you to refuse the use of, it doesn’t really matter for practical purposes.
This completely discards the notion of personal identity.
This strikes me as a strange thing to say, given what you’ve said elsewhere about accepting that your personal identity—the referent for “I”—is a collection of agents that is neither coherent nor unique nor consistent. For my own part I agree with what you said there, which suggests that a notion of personal identity can be preserved even if my brain doesn’t turn out to house a single unique coherent consciousness, and I disagree with what you say here, which suggests that it can’t.
neither can anything else, but that’s a different question
Fully answering or dissolving the question of why there is subjective experience and qualia at all would, I think, address my concerns. It would also help if I could either construct a notion of identity through time which somehow tied into subjective experience, or if it were conclusively proven (by logical argument, presumably) that such a notion can’t exist and that the “illusion of memory” is all there is.
For my own part I agree with what you said there, which suggests that a notion of personal identity can be preserved even if my brain doesn’t turn out to house a single unique coherent consciousness, and I disagree with what you say here, which suggests that it can’t.
As I said, I don’t personally endorse this view (which rejects personal identity). I don’t endorse it mostly because it is to me a repugnant conclusion. But I don’t know of a good model that predicts subjective experience meaningfully and doesn’t conflict with anything else. So I mentioned that model, for completeness.
FWIW, I reject the conclusion that the “illusion of memory” is all there is to our judgment of preserved identity, as it doesn’t seem to fit my observations. We don’t suddenly perceive Sam as no longer being Sam when he loses his memory (although equally clearly memory is a factor). As I said originally, it seems clear to me that there are a lot of factors like this, and we perform some aggregating computation across all of them to make a judgment about whether two experiences are of the same thing.
What I do say is that our judgment of preserved identity, which is a computation (what I labelled F(x) above) that takes a number of factors into account, is all there is… there is no mysterious essence of personal identity that must be captured over and above the factors that contribute to that computation.
As for what factors those are, that’s a question for cognitive science, which is making progress in answering it. Physical similarity is clearly relevant, although we clearly accept identity being preserved across changes in appearance… indeed, we can be induced to do so in situations where very small variations would prevent that acceptance, as with color phi. Gradualness of change is clearly relevant, though again not absolute. Similarity of behavior at some level of description is relevant, although there are multiple levels available and it’s possible for judgments to conflict here. Etc.
Various things can happen that cause individual judgments to differ. My mom might get Alzheimer’s and no longer recognize me as the same person she gave birth to, while I continue to identify myself that way. I might get amnesia and no longer recognize myself as the same person my mom gave birth to, while she continues to identify herself that way. Someone else might have a psychotic break and begin to identify themselves as Dave, while neither I nor my mom do. Etc. When that happens, we sometimes allow the judgments of others to substitute for our own judgments (e.g., “Well, I don’t remember being this Dave person and I don’t really feel like I am, but you all say that I am and I’ll accept that.”) to varying degrees.
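The kind of aggregating computation described above can be sketched as a weighted score over factors. The factor names, values, weights, and threshold here are all invented for the sketch; real cognition is presumably far messier:

```python
# Toy sketch of an identity judgment F(x) as a weighted aggregation of
# similarity factors. Factors, weights, and threshold are invented for
# illustration only.

def same_person(factors, weights, threshold=0.5):
    """Aggregate similarity factors (each in [0, 1]) into a
    yes/no identity judgment via a weighted average."""
    total = sum(weights.values())
    score = sum(weights[name] * value for name, value in factors.items()) / total
    return score >= threshold

# Two observers can agree on every observable fact but weight the
# factors differently, and so reach different identity judgments.
observed = {"physical_similarity": 0.9,
            "gradualness_of_change": 0.1,   # e.g. an abrupt copy event
            "behavioral_similarity": 0.95}

my_F = {"physical_similarity": 1.0, "gradualness_of_change": 1.0,
        "behavioral_similarity": 1.0}
your_F = {"physical_similarity": 1.0, "gradualness_of_change": 5.0,
          "behavioral_similarity": 1.0}

print(same_person(observed, my_F))    # True: the plain average is high
print(same_person(observed, your_F))  # False: abruptness dominates
```

On this picture the disagreements described above (the Alzheimer’s case, the amnesia case) are just cases where different people’s weightings, or their access to the factors, diverge.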
I was midway through writing a response, and I had to explain the “illusion of memory” and why it matters. And then I thought about it. And I think I dissolved the confusion I had about it. I now realize it’s true but adds up to normality and therefore doesn’t lead to a repugnant conclusion.
I think you may have misunderstood what the “illusion” is. It’s not about recognizing others. It’s about recognizing oneself: specifically, self-identifying as an entity that exists over time (although it changes gradually over time). I self-identify like that, and so do most other people.
The “illusion”—which was a poor name because there is no real illusion once properly understood—is: on the level of physics there is no tag that stays attached to my self (body or whatever) during its evolution through time. All that physically exists is a succession of time-instants in each of which there is an instance of myself. But why do I connect that set of instances together rather than some other set? The proximate reason is not that it is a set of similar instances, because I am not some mind that dwells outside time and can compare instances for similarity. The proximate reason is that each instant-self has memories of being all the previous selves. If it had different memories, it would identify differently. (“Memories” take time to be “read” in the brain, so I guess this includes the current brain “state” beyond memories. I am using a computer simile here; I am not aware of how the brain really works on this level.)
So memory, which exists in each instant of time, creates an “illusion” of a self that moves through time instead of an infinite sequence of logically-unconnected instances. And the repugnant conclusion (I thought) was that there really was no self beyond the instant, and therefore things that I valued which were not located strictly in the present were not in some sense “mine”; I could as well value having been happy yesterday as someone else having been happy yesterday, because all that was left of it today was memories. In particular, reality could have no value beyond that which false memories could provide, including e.g. false knowledge.
However, now I am able to see that this does in fact add up to normality. Not just that it must do so (like all things) but the way it actually does so. Just as I have extension in space, I have extension in time. Neither of these things makes me an ontologically fundamental entity, but that doesn’t prevent me from thinking of myself as an entity, a self, and being happy with that. Nature is not mysterious.
Unfortunately, I still feel some mystery and lack of understanding regarding the nature of conscious experience. But given that it exists, I have now updated towards “patternism”. I will take challenges like the Big Universe more seriously, and I would more readily agree to be uploaded or cloned than I would have this morning.
Thank you for having this drawn-out conversation with me so I could come to these conclusions!
You’re welcome.