Perhaps this was discussed in more depth before I joined LW, but I think far, far more caution should be exercised before concluding that an upload could ever be you.
If you can reduce personhood to information representable in bits, it also means each and every part of it is changeable and replaceable, and thus there is no lasting essence of individualhood. (My former Buddhist training is really kicking in here, although it is possible I am looking it up in a cache.) It follows that there is an infinite number of potential lumps of information, each of which is “more you” or “less you” depending on the difference. Basically, from the second you think a new thought or see something new, you are not the same you anymore.
Fortunately, our lack of infinite brain plasticity currently protects us from every experience radically rewiring what we are; we have an illusion of unchanging selfhood more or less because of this lack of plasticity.
Uploads are infinitely plastic. Probably nobody will care about keeping you intact just for the sentimental, nostalgic value of being attached to your former, meat-based, unplastic self. You will be changed so radically that it will not be you in any meaningful sense. There is also no guarantee anyone will bother to upload many meat minds. They may well figure that uploading one Really Nice Person and making a hundred billion copies delivers more utility.
And quite frankly, if we give up the last shreds of our illusory attachment to having souls, I am not sure we will care about utility anyway. I find it hard to care about whether a mere algorithm feels joy or suffering. After all, a mere algorithm can put the label “joy” or “suffering” on anything. For an algorithm, what is even the difference between “real” suffering and simply attaching the word, the label, the referent “suffering” to certain things? I need the illusion of some scrap of a not-literally-supernatural-but-it-feels-so kind of soul to know the difference between suffering and “suffering”. A software function that basically goes print("OUCH! Augh! Nooo!...") does not actually suffer, and I think that “actualness” is where the supernaturalistic illusion is necessary.
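To make the labeling point concrete, here is a minimal Python sketch (all names are hypothetical, invented for illustration): a function that attaches the label “suffering” to absolutely any input, which is the whole of what it does.

```python
# A minimal sketch of the labeling problem (hypothetical names throughout).
# This function attaches the label "suffering" to any input whatsoever;
# nothing inside it corresponds to anything actually being suffered.
def report_suffering(stimulus):
    print(f"I am suffering because of: {stimulus}")
    return {"label": "suffering", "cause": stimulus}

report_suffering("a stubbed toe")   # labeled "suffering"
report_suffering("a sunny day")     # equally labeled "suffering"
```

Nothing in the function distinguishes the two calls; the label is all there is.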
Otherwise, we would simply engineer the ability to suffer out of the upload, and/or find the function that takes experiences as input, judges them, and emits joy as output, and change it so that it always emits joy. From that point on we would not care about the world.
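A hedged sketch of what that re-engineering would amount to, assuming the upload exposes some experience-judging function (again, every name here is hypothetical):

```python
# Hypothetical sketch: an upload's valence function, and the one-line
# patch that "engineers out" suffering by making it always emit joy.
def judge_experience(experience):
    # Some complicated evaluation of the experience (stubbed here).
    return "suffering" if "loss" in experience else "joy"

# The patch: replace the judgment with a constant.
def judge_experience_patched(experience):
    return "joy"  # every input now emits joy; the world no longer matters

print(judge_experience("loss of a friend"))          # suffering
print(judge_experience_patched("loss of a friend"))  # joy
```

Once the output is a constant, the input (i.e., the world) becomes irrelevant to the upload's welfare.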
I am ignoring here all the problems with the concept of an upload (or an “em” in Hanson’s terminology) -- that’s a separate subject altogether.
For the record, I don’t subscribe to the Hansonian view of a society of ems.