The brain changes over time. It is likely that there is not a single atom in your 2021-brain that was present in your 2011-brain.
If you agree that the natural replacements haven’t killed you (2011-you and 2021-you are the same conscious agent), then it’s possible to transfer your mind to a machine in a similar manner, because you’ve already survived a mind upload into a new brain.
Gradual mind uploading (e.g. by gradually replacing neurons with emulated replicas) circumvents the philosophical problems attributed to non-gradual methods.
Personally, although I prefer gradual uploading, I would agree to a non-gradual method too, as I don’t see the philosophical problems as important. As per Newton’s Flaming Laser Sword:
if a question, even in principle, can’t be resolved by an experiment, then it is not worth considering.
If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me.
The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another.
As Dennett put it, everyone is a philosophical zombie.
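The input→output claim above can be made concrete with a toy sketch. The two “brain” functions below are hypothetical stand-ins, not any real model: they have entirely different internals but an identical input→output mapping, so on the functionalist view argued here, no behavioral experiment can tell them apart.

```python
# Toy illustration of the functionalist claim: two "devices" with
# different internals but the same input -> output mapping are,
# on this view, interchangeable. Both functions are hypothetical.

def brain_a(stimulus: int) -> int:
    # "Biological" device: computes its output by explicit accumulation.
    return sum(range(stimulus + 1))

def brain_b(stimulus: int) -> int:
    # "Emulated" device: entirely different internals (closed formula),
    # yet the same mapping for every input.
    return stimulus * (stimulus + 1) // 2

# Black-box behavioral test: inputs and outputs alone cannot
# distinguish the two devices.
assert all(brain_a(s) == brain_b(s) for s in range(1000))
```

The dispute in the replies that follow is precisely over whether such black-box equivalence exhausts everything that matters.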
There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them.
If you agree that the natural replacements haven’t killed you (2011-you and 2021-you are the same conscious agent), then it’s possible to transfer your mind to a machine in a similar manner, because you’ve already survived a mind upload into a new brain.
Of course, I’m not disputing whether mind uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex. There’s something to be said about the substrate independence of computation and, separately, of consciousness. No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved, such as spin, charge, entanglement, or some yet-undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.
If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me.
The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another.
These statements are ringing some loud alarm bells for me. It seems that you are rejecting consciousness itself. I suppose you could do that, but I don’t think any reasonable person would agree with you. To truly gauge whether you believe you are conscious or not, ask yourself, “have I ever experienced pain?” If you believe the answer to that is “yes,” then at least you should be convinced that you are conscious.
What you are suggesting at the end there is that whole brain emulation (WBE) = mind uploading. I’m not sure many people would agree with that assertion.
No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved, such as spin, charge, entanglement, or some yet-undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.
Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?
It seems to me that this can’t be verified by any experiment, and thus must be cut off by Newton’s Flaming Laser Sword.
It seems that you are rejecting consciousness itself.
As far as I know, it is impossible to experimentally verify whether some entity possesses consciousness (partly because of how fuzzy its definitions are). This is a strong indicator that consciousness is one of those abstractions that don’t correspond to any real phenomenon.
“have I ever experienced pain?”
If certain kinds of damage are inflicted upon my body, my brain generates an output typical of a human in pain. The reaction can be experimentally verified. It also has a reasonable biological explanation and a clear mechanism of functioning. Thus, I have no doubt that pain exists, and that I’ve experienced it.
I can’t say the same about any introspection-based observations that can’t be experimentally verified. The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.
Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?
No, we cannot, just as we cannot know with certainty whether a mind upload is conscious. The fact that we presume our 2021 brain to be a conscious agent related to our 2011 brain, while being unable to verify the properties that enabled the conscious connection between the two brains, does not mean that those properties do not exist.
It seems to me that this can’t be verified by any experiment, and thus must be cut off by Newton’s Flaming Laser Sword.
Perhaps we presently have no way of testing whether some matter is conscious or not. This is not equivalent to saying that, in principle, the conscious state of some matter cannot be tested. We may one day make progress on the hard problem of consciousness and be able to perform these experiments. Imagine making this argument throughout history before microscopes, telescopes, and hadron colliders. We can now sheathe Newton’s Flaming Laser Sword.
I can’t say the same about any introspection-based observations that can’t be experimentally verified.
I believe this hinges on an epistemic question about whether we can have knowledge of anything using our observations alone. I think even a skeptic would say that she has consciousness, as the fact that one is conscious may be the only thing that one can know with certainty about oneself. You don’t need to verify any specific introspective observation; the act of introspection itself should be enough for someone to verify that they are conscious.
The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.
This claim refers to the reliability of the human brain in verifying the truth value of certain propositions or identifying specific and individuable experiences. Knowing whether oneself is conscious is not strictly a matter of verifying a proposition, nor of identifying an individuable experience. It’s only about verifying whether one has any experience whatsoever, which should be possible. Whether I believe your claim to consciousness or not is a different problem.