So they won’t transcend if we do nothing but run them in copies of their ancestral environments. But how likely is that? They will instead become tools in our software toolbox (see below).
Unlike an AI, an upload can’t easily understand its own code, so self-improvement will be that much more difficult.
The argument for uploads first is not that by uploading humans, we have solved the problem of Friendliness. The uploads still have to solve that problem. The argument is that the odds are better if the first human-level faster-than-human intelligences are copies of humans rather than nonhuman AIs.
But guaranteeing fidelity in your copy is itself a problem comparable to the problem of Friendliness. It would be incredibly easy for us to miss that (e.g.) a particular neuronal chemical response is of cognitive and not just physiological significance, leave it out of the uploading protocol, and thereby create “copies” which systematically deviate from human cognition in some way, whether subtle or blatant.
If we simulate only individual parts of the brain, it becomes less likely that the upload will transcend.
The classic recipe for unsafe self-enhancing AI is that you assemble a collection of software tools, and use them to build better tools, and eventually you delegate even that tool-improving function. The significance of partial uploads is that they can give a big boost to this process.