Risk avoidance. I’m uncomfortable taking the position that, after creating a second copy and destroying the original, the copy is the original, simply because if it isn’t, then the original is now dead.
Yes, but how do you conclude that a risk exists? The existence of two philosophical positions doesn’t imply fifty-fifty odds that either one is correct; to the best of my knowledge, intuition is the only evidence for one of the alternatives here, and we already know that human intuitions can go badly off the rails when confronted with problems involving anthropomorphism.
Granted, we can’t yet trace human thoughts and motivations down to the neuron level, but we certainly will be able to by the time we can destructively scan people into simulations; if there’s any secret sauce involved, by then we’ll know it’s there, even if not exactly what it is. If dualism turns out to win by then, I’ll gladly admit I was wrong; but if no evidence has shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in “But There’s Still A Chance, Right?”.
Here’s why I conclude a risk exists: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I read that earlier, and it doesn’t answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the moment of division (equivalently, that personhood corresponds to something other than a unique brain state), then you’ve already assumed a conclusion to all questions along these lines; in fact, you’ve gone past any question of a risk of death and into certainty.
But you haven’t provided any reasoning for that belief: you’ve just outlined the consequences of it from several different angles.