I’d rate the chance that early upload techniques miss some necessary components of sapience as reasonably high, but that’s a technical problem rather than a philosophical one. My confidence in uploading in principle, on the other hand, is roughly equivalent to my confidence in reductionism, which is to say pretty damn high, although not quite one or one minus epsilon. Specifically: for all possible upload techniques to generate a discontinuity in a way that, say, sleep doesn’t, it seems to me that not only would minds need to involve some kind of irreducible secret sauce, but also that it would need to be bound to its substrate in a non-transferable way, which would be rather surprising. Some kind of delicate QM nonsense might fit the bill, but that veers dangerously close to woo.
The most parsimonious explanation seems to be that, yes, uploading involves a discontinuity in consciousness, but so do all sorts of phenomena that we don’t bother to note or even notice. Which is a somewhat disquieting thought, but one I’ll have to live with.
Actually, http://lesswrong.com/lw/7ve/paper_draft_coalescing_minds_brain/ seems to discuss a way for uploading to be a non-destructive transition. We know that the brain can learn to use implanted neurons under some very special conditions now; so maybe you could first learn to use an artificial mind-holder (without a mind yet) as a minor supplement, and then learn to rely on it more and more until the death of your original brain is just a flesh wound. Maybe not—but it does seem to be a technological problem.
Yeah, I was assuming a destructive upload for simplicity’s sake. Processes similar to the one you outline don’t generate an obvious discontinuity, so I imagine they’d seem less intuitively scary; still, a strong Searlean viewpoint probably wouldn’t accept them.