Hold on: those are important articles to read, and they do move you toward a resolution of that problem. But I don’t think they fully dissolve or answer the exact question daedalus2u is asking.
For example, EY has written this article, grappling with, but ultimately not resolving, the question of whether you should care about “other copies” of you, why you are not indifferent between yourself and someone else jumping off a cliff, etc.
I don’t deny that the existing articles resolve some of the problems daedalus2u is posing, but they don’t cover everything he asked. Unless I’ve missed something?
SilasBarta, yes, I was thinking about purely classical entities: the kind of computers we would build now out of classical components. You can make an identical copy of a classical object. If you accept substrate independence for entities, then you can’t “dissolve” the question.
If Ebborians are classical entities, then exact copies are possible. An Ebborian can split and become two entities and accumulate two different sets of experiences. What if those two Ebborians then transfer memory files such that they now have identical experiences? (I appreciate this is not possible with biological entities because memories are not stored as discrete files).
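The memory-transfer scenario can be sketched as plain data manipulation. This is only an illustrative toy, assuming (as the parenthetical notes, contrary to biology) that an Ebborian’s experiences really are discrete records; the names `split` and `sync_memories` are hypothetical:

```python
# Toy sketch: treat an Ebborian's accumulated experience as a set of
# discrete memory records (an assumption that fails for biological brains).

def split(ebborian):
    """A classical split: both successors start with identical memories."""
    return set(ebborian), set(ebborian)

def sync_memories(a, b):
    """Exchange memory files until both hold the union of experiences."""
    merged = a | b
    return merged, merged

original = {"hatched", "learned to split"}
left, right = split(original)
left.add("explored the cliff")    # the two copies diverge...
right.add("explored the valley")

left, right = sync_memories(left, right)
assert left == right              # ...and are indistinguishable again
```

For classical data the merge is trivial, which is exactly what makes the identity question bite: after the swap there is no fact about the data distinguishing the two entities.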
Turing Machines are purely classical entities, and any universal Turing Machine is equivalent to any other, except for the data fed into it. If humans can be represented on a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?
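The claim can be illustrated with a toy “universal machine”: one fixed classical mechanism whose behaviour is wholly determined by its input tape. This is a sketch of the equivalence point only, not a model of anything brain-like; the tiny stack language is an invention for the example:

```python
def universal_machine(tape):
    """One fixed classical mechanism; everything distinctive lives in the tape."""
    stack = []
    for symbol in tape:
        if symbol.isdigit():
            stack.append(int(symbol))
        elif symbol == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif symbol == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# The same machine, two different "entities" (tapes):
assert universal_machine("23+") == 5
assert universal_machine("23*") == 6
```

The machine itself contributes nothing distinguishing; all individuality is in the tape. That is the sense in which the analogy says “all humans are identical except for the data.”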
It’s no more wrong than saying that all books are identical except for the differing number and arrangement of letters. It’s also no more useful.
Except human entities are dynamic objects, unlike a static object such as a book. Books are not considered to be “alive” or “self-aware”.
If two humans can each be represented by a TM with a different tape, then one human can be turned into another by feeding the first tape in backwards and then feeding the other tape in forwards. If one human can be turned into another by a purely mechanical process, how does the “life”, or “entity identity”, or “consciousness” change as that transformation occurs?
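The backwards-then-forwards move is essentially undo-then-redo on a shared substrate. A minimal sketch, under the (hypothetical) assumption that a “person” is just a blank substrate plus a sequence of reversible edits:

```python
# Minimal sketch: a "person" is a blank substrate plus a sequence of edits.
# Turning person A into person B is purely mechanical: undo A's edits in
# reverse order, then apply B's edits forwards. Every intermediate state is
# a perfectly well-defined classical configuration.

def apply_edit(state, edit):
    return state + [edit]

def undo_edit(state, edit):
    assert state[-1] == edit    # edits are undone in reverse order
    return state[:-1]

edits_a = ["memory-1a", "memory-2a"]
edits_b = ["memory-1b", "memory-2b", "memory-3b"]

state = []
for e in edits_a:               # build person A
    state = apply_edit(state, e)

for e in reversed(edits_a):     # feed A's tape in backwards...
    state = undo_edit(state, e)
for e in edits_b:               # ...then B's tape forwards
    state = apply_edit(state, e)

assert state == edits_b         # the substrate now instantiates "person B"
```

Each intermediate `state` is classically well defined, which is what makes the question pointed: nothing in the mechanics marks where one “identity” ends and the other begins.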
I don’t have an answer; I suspect the problem is tied up in our conceptualization of what consciousness and identity actually are.
My own feeling is that consciousness is an illusion, and that illusion is what produces the illusion of identity continuity over a person’s lifetime. Presumably there is an “identity module”, and that module is what identifies an individual as “the same” individual over time, even as the individual changes (not via a complete one-to-one correspondence between the entities, which we know does not hold). If that is correct, then change the “identity module” and you change the self-perception of identity.
I don’t see why the TM issue is essential to your confusion. If you are not a dualist then the fact that two human brains differ only in the precise arrangement of the same types of atoms present in very similar numbers and proportions raises the same questions.
I am not a dualist. I used the TM to avoid issues of quantum mechanics. TM equivalence is not compatible with a dualist view either.
Only part of what the brain does is conscious. The visual cortex isn’t conscious, and the processing of signals from the retina is not under conscious control. That is why optical illusions work: the signal processing happens in a particular way, and that way cannot be changed even when you consciously know that what you are seeing is counterfactual.
Many aspects of brain information processing are like this. Sound processing is like this: sounds are decoded and pattern-matched to communication symbols.
Since we know that the entity instantiating itself in our brain is not identical to the entity that was there a day ago, a week ago, or a year ago, and will not be identical to the entity that will be there next year, why do we perceive continuity of consciousness?
Is that illusion of continuity the same kind of thing as the way the visual cortex fills in the blind spot on the retina? Is it the same kind of thing as pareidolia?
I suspect that the question of consciousness isn’t so much why we experience consciousness as why we experience a continuity of consciousness when we know there is no continuity.
You may be interested to know that I probed a similar question, how “qualia” come into play, in this post about two (classical) beings trading experiences.