Worth noting that this argument doesn’t necessarily require humans to be:
- numerous
- animated (i.e., not frozen in a cryonics process)
- acting in the real world (i.e., not confined to a "Matrix").
Thus, the AI may decide to keep only a selection of humans, confined in a virtual world, with the rest being frozen.
Moreover, even a perfectly Friendly AI may decide to do the same, to prevent further human deaths.
In general, an evil AI may choose strategies that allow her to plausibly deny her non-Friendliness:
“Thousands of humans die every day. Thus, I froze all of humanity to prevent that, until I solve their mortality. The fact that they now can’t switch me off is just a nice bonus”.
I would argue that for all practical purposes it doesn’t matter if computational functionalism is right or wrong.
Pursuing mind uploading is a good idea regardless, as it has benefits unrelated to perfectly recreating someone in silico (e.g. advancing neuroscience).
If the digital version of RomanS is good enough[1], it will indeed be me, even if the digital version is running on a billiard-ball computer (the internal workings of which are completely different from the workings of the brain).
The second part (that the digital version will indeed be me) is the most controversial, but it’s actually easy to prove:
1. Memorize a long sequence of numbers, and write down a hash sum of it.
2. Ensure no one saw the sequence of numbers except you.
3. Do an honest mind upload (no attempts to extract the numbers from your brain, etc.).
4. Observe how the digital version correctly recalls the numbers, as checked against the hash sum.
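The hash-sum check itself requires nothing exotic. Here is a minimal sketch in Python; the memorized sequence and the recalled answer are placeholders, and SHA-256 is just one reasonable choice of hash:

```python
import hashlib

def digest(secret: str) -> str:
    """Return a SHA-256 hex digest of the memorized sequence."""
    return hashlib.sha256(secret.encode("utf-8")).hexdigest()

# Before uploading: memorize the sequence, publish only its hash.
memorized = "3141592653589793238462643383279502884197"
published_hash = digest(memorized)

# After uploading: the digital version recalls the sequence from its own memory.
recalled = "3141592653589793238462643383279502884197"  # placeholder for the upload's answer

# Anyone can verify the recall without ever seeing the original sequence.
assert digest(recalled) == published_hash
```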
According to the experiment’s conditions, only you know the numbers. Therefore, the digital version is you.
And if it’s you, then it has all of your important properties, including “consciousness” (if such a thing exists).
There are some scenarios where such a setup may fail (e.g. some important property of the mind is somehow generated by one special neuron which must be perfectly recreated), but I can’t think of any such scenario that is realistic.
My general position on the topic can be called “black-box CF” (in addition to your practical and theoretical CF). I would summarize it as follows:
- The human brain is designed by biological evolution to survive and procreate. You’re a survival-procreation machine. As there is clearly no God, there is also no soul or any other magic inside your brain. The difference between you and another such machine is the training set you observed during your lifetime (plus some minor architectural differences caused by genetic differences).
- The concepts of consciousness, qualia, etc. are too loosely defined to be of any use (including use in any reasonable discussion). Just discard them as yet another phlogiston.
- Thus, the task of “transferring consciousness to a machine” is ill-defined. Instead, mind uploading is about building a digital machine that behaves like you. It doesn’t matter what happens inside, as long as the digital version passes a sufficiently good battery of behavioral tests.
- There is a gradual distinction between you and not-you. E.g. an atoms-level sim may be 99% you, a neurons-level sim 90% you, an LLM trained on your texts 80% you. The measure is the percentage of identical answers given to a sufficiently long and diverse questionnaire (see the sketch after this list).
- A human mind in its fullness can be recreated in silico even by an LLM (trained on a sufficient amount of the mind’s inputs and outputs). Perfectly recreating the brain (or even recreating it at all) would be nice, but it is unnecessary for mind uploading. Just build an AI that is sufficiently similar to you in behavior.
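As a rough illustration of that similarity measure, here is a minimal sketch; the questionnaire items, the answers, and the resulting percentage are all hypothetical:

```python
# Behavioral similarity: the fraction of questionnaire items on which the
# candidate's answers match the original person's answers.

def similarity(original_answers: list[str], candidate_answers: list[str]) -> float:
    """Percentage of identical answers over a shared questionnaire."""
    assert len(original_answers) == len(candidate_answers)
    matches = sum(a == b for a, b in zip(original_answers, candidate_answers))
    return 100.0 * matches / len(original_answers)

original = ["coffee", "yes", "blue", "Tolstoy"]             # the original's answers
candidate = ["coffee", "yes", "blue", "Dostoevsky"]          # a sim's answers

print(f"{similarity(original, candidate):.0f}% you")  # -> 75% you
```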
[1] As defined, beforehand, by a reasonable set of quality and similarity criteria.