I would expect exocortex-based approaches to uploading (as discussed in the paper) to come earlier than uploads based on destructive scanning: exocortexes could be installed in living humans, and they are a natural outgrowth of various brain prosthetic technologies already being developed for medical purposes. There's going to be stigma, but far less than attaches to the idea of cutting apart a dead person's brain and scanning it into a purely VR environment. Indeed, while destructive uploading is a sharp and discrete transition, with exocortexes the border between a baseline human and an upload might become rather fuzzy.
This might relatively quickly lead to various upload/AGI hybrids that could initially outperform both "pure" uploads and AGIs. Of course, they'd still be bottlenecked by a human cognitive architecture, so eventually an AGI would outperform them. There are also the various communication and co-operation advantages that exocortex-enabled mind coalescence gives you, which might help in trying to detect and control at least early AGIs. So the hybrid scenario still seems worth looking at.
Shulman & Salamon, Whole brain emulation as a platform for creating safe AGI.
Shulman, Whole brain emulation and the evolution of superorganisms.
The stuff Carl & Nick have been passing back and forth recently. (I suspect you’ve seen this, Stuart?)
The post-Summit2011 workshop report I wish I had more time to write.
Also (shameless plug) Sotala & Valpola, Coalescing Minds: Brain uploading-related group mind scenarios.