I would argue that for all practical purposes it doesn’t matter if computational functionalism is right or wrong.
Pursuing mind uploading is a good idea regardless of that, as it has benefits not related to perfectly recreating someone in silico (e.g. advancing neuroscience).
If the digital version of RomanS is good enough[1], it will indeed be me, even if the digital version is running on a billiard-ball computer (the internal workings of which are completely different from the workings of the brain).
The second claim (that a good-enough digital version is you) is the most controversial, but it's actually easy to prove:
1. Memorize a long sequence of numbers, and write down a hash sum of it.
2. Ensure no one except you has seen the sequence.
3. Do an honest mind upload (no attempts to extract the numbers from your brain, etc.).
4. Observe how the digital version correctly recalls the numbers, as checked against the hash sum.
According to the experiment’s conditions, only you know the numbers. Therefore, the digital version is you.
And if it’s you, then it has all the same important properties as you, including “consciousness” (if such a thing exists).
There are some scenarios where such a setup may fail (e.g. some important property of the mind is somehow generated by one special neuron that must be perfectly recreated). But I can’t think of any such scenario that is realistic.
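A minimal sketch of the commit-and-verify part of the experiment, in Python, assuming SHA-256 as the hash; the sequence shown is of course a placeholder, and in practice it would need to be long enough that the digest can't be brute-forced:

```python
import hashlib

# The sequence only you have memorized (hypothetical placeholder value).
secret = "8 3 1 9 2 7 7 0 4 6 5 1 3 8 2 9"

# Publish this digest before the upload; it reveals nothing about the
# sequence itself, but commits you to it.
digest = hashlib.sha256(secret.encode()).hexdigest()
print("commitment:", digest)

# After the upload, the digital version recalls the sequence,
# and anyone can verify the recall against the published digest.
recalled = "8 3 1 9 2 7 7 0 4 6 5 1 3 8 2 9"  # what the upload reports
assert hashlib.sha256(recalled.encode()).hexdigest() == digest
print("recall verified: the upload knows the secret sequence")
```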
My general position is that:

- The concepts of consciousness, qualia, etc. are too loosely defined to be of any use, including in any reasonable discussion. Discard them as yet another phlogiston.
- Thus, the task of “transferring consciousness to a machine” is ill-defined. Instead, mind uploading is about building a digital machine that behaves like you. It doesn’t matter what is happening inside, as long as the digital version passes a sufficiently good battery of behavioral tests.
- The distinction between you and not-you is gradual. E.g. an atoms-level sim may be 99% you, a neurons-level sim 90% you, and an LLM trained on your texts 80% you. The measure is the percentage of identical answers given to a sufficiently long and diverse questionnaire (see the sketch after this list).
- A human mind in its fullness can be recreated in silico even by an LLM (trained on sufficient amounts of the mind’s inputs and outputs). Perfectly recreating the brain (or even recreating it at all) would be nice, but it is unnecessary for mind uploading. Just build an AI that is sufficiently similar to you in behavior.
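A minimal sketch of the graded “percentage of identical answers” measure, assuming a fixed questionnaire and exact answer matching (a real battery would need fuzzier comparison, e.g. semantic similarity of free-text answers); the candidate answer sets below are hypothetical:

```python
def similarity(original_answers: list[str], candidate_answers: list[str]) -> float:
    """Fraction of questionnaire items answered identically."""
    assert len(original_answers) == len(candidate_answers)
    matches = sum(a == b for a, b in zip(original_answers, candidate_answers))
    return matches / len(original_answers)

# Hypothetical illustration: a high-fidelity sim vs. an LLM persona.
you         = ["blue", "42", "yes", "Bach", "no"]
atoms_sim   = ["blue", "42", "yes", "Bach", "no"]
llm_persona = ["blue", "42", "no",  "Bach", "no"]

print(similarity(you, atoms_sim))    # 1.0 -> "100% you" on this tiny test
print(similarity(you, llm_persona))  # 0.8 -> "80% you"
```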
This position can be called black-box CF, in addition to your practical and theoretical CF.
[1] As defined beforehand by a reasonable set of quality and similarity criteria.