Isn’t suicide always an option? When it comes to imagining immortality, I’m like Han Solo, but limits are conceivable and boredom might become insurmountable.
The real question is whether intelligence has a ceiling at all—if not, then even millions of years wouldn’t be a problem.
Charlie Brooker’s Black Mirror TV show played with the punishment idea: a mind uploaded into a cube experiences, subjectively, hundreds of years in a virtual kitchen with a virtual garden as punishment for murder (the murder was committed in that kitchen). In real time, the cube is just casually left on overnight by the “gaoler” for amusement. A hellish scenario.
(In another episode, or possibly the same one, the same kind of “punishment”, this time just a featureless white space for a few years, is used to “tame” a copy of a person’s mind into serving as a boring virtual assistant for that person.)
The truly worrying scenarios are the ones which disallow escape of any kind, including suicide.
In an advanced society, anyone who wanted to do that could come up with a lot of ways to make it happen.
Not if you’re an upload.
Perhaps it was discussed in more depth before I joined LW, but I think far, far more caution should be exercised before assuming an upload could ever be you.
If you can reduce personhood to information representable in bits, it also means each and every part of it is changeable and replaceable, and thus there is no lasting essence of individualhood. (My former Buddhist training is really kicking in here, although it is possible I am looking it up in a cache.) Thus there is an infinite number of potential lumps of information, each of which is “more you” or “less you” depending on how much it differs from you as you are now. Basically, from the second you think a new thought or see something new, you are not the same you anymore.
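To make the “more you or less you, depending on the difference” idea concrete, here is a deliberately crude sketch in Python (the flavour of pseudocode used further down in this thread). It takes for granted the very premise under discussion, namely that a person could be serialized to bits at all; the function name youness and the toy snapshots are invented purely for illustration.

```python
def youness(snapshot_a: bytes, snapshot_b: bytes) -> float:
    """Toy 'identity similarity': the fraction of bits two mind-snapshots share.

    Purely illustrative -- it assumes a person really can be serialized to a
    fixed-length bit pattern, which is exactly the premise being questioned.
    """
    if len(snapshot_a) != len(snapshot_b):
        raise ValueError("this toy metric needs equal-length snapshots")
    total_bits = len(snapshot_a) * 8
    differing_bits = sum(bin(a ^ b).count("1") for a, b in zip(snapshot_a, snapshot_b))
    return 1.0 - differing_bits / total_bits

# Two snapshots a single new thought apart: almost every bit agrees, but not
# all of them, so by this metric you are already not quite "the same you".
you_at_breakfast = b"memories, habits, values, thought #1001"
you_reading_this = b"memories, habits, values, thought #1002"
print(f"you-ness: {youness(you_at_breakfast, you_reading_this):.4f}")
```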
Fortunately, our lack of infinite brain plasticity currently protects us from every experience radically rewiring what we are; we have an illusion of unchanging selfhood more or less because of this lack of plasticity.
Uploads are infinitely plastic. Probably nobody will care about keeping you intact just for the sentimental and nostalgic value of being attached to your former meat-based, unplastic self. You will be changed so radically that it will not be you in any meaningful sense. Also, there is no promise they will bother to upload many meat minds. They may well figure that uploading one Really Nice Person and making a hundred billion copies delivers more utility.
And quite frankly, if we give up the last shreds of our illusory attachment to having souls, I am not sure we will care about utility anyway. I think I find it hard to care about whether a mere algorithm feels joy or suffering. After all, a mere algorithm can put the label “joy” or “suffering” on anything. For an algorithm, what is even the difference between “real” suffering and simply attaching the word, the label, the referent “suffering” to certain things? I need the illusion of some scrap of a not-literally-supernatural-but-it-feels-so kind of soul to know the difference between suffering and “suffering”. A software function that basically goes print(“OUCH! Augh! Nooo!...”) does not actually suffer, and I think that “actualness” is where the supernaturalistic illusion is necessary.
Otherwise, we would just engineer the ability to suffer out of the upload, and/or find the function that takes experiences as input, judges them, and emits joy as output, and change it so that it always emits joy. From that point on we would not care about the world.
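Both of the last two points can be made literal in a few lines of Python. This is only a cartoon of the argument, not a claim about how uploads would actually work; the names fake_sufferer, judge_experience and wireheaded_judge are invented here for illustration.

```python
def fake_sufferer() -> str:
    """Emits the *label* of suffering. Nothing here plausibly suffers."""
    print("OUCH! Augh! Nooo!...")
    return "suffering"

def judge_experience(experience: str) -> str:
    """Cartoon of the function that takes experiences and emits a felt reaction."""
    return "suffering" if "pain" in experience else "joy"

def wireheaded_judge(experience: str) -> str:
    """The same function after the edit described above: joy, unconditionally.

    Once this swap is made, no input from the world changes the output, which
    is why the upload would from that point on not care about the world.
    """
    return "joy"

fake_sufferer()  # prints the right words; whether anything "actually" hurts is the whole question
for experience in ["a walk in the virtual garden", "sharp pain in the hand"]:
    print(experience, "->", judge_experience(experience), "/", wireheaded_judge(experience))
```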
I am ignoring here all the problems with the concept of an upload (or an “em” in Hanson’s terminology) -- that’s a separate subject altogether.
For the record, I don’t subscribe to the Hansonian view of a society of ems.
Oh, true for the “uploaded prisoner” scenario; I was just thinking of someone who’d deliberately uploaded themselves and wasn’t restricted; clearly suicide would be possible for them.
But even for the “uploaded prisoner”, given sufficient time it would be possible; there’s no absolute impermeability to information anywhere, is there? And where there’s information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice, via flashing lights, to gnaw through the wires :) )
But that reminds me of the problem of trying to isolate an AI once built.
“I was just thinking of someone who’d deliberately uploaded themselves and wasn’t restricted; clearly suicide would be possible for them.”

That is not self-evident to me at all. If you don’t control the hardware (and the backups), how exactly would that work? As a parallel, imagine yourself as a mind alone, without a body. How would that mind, by itself, kill itself?
“And where there’s information flow, control is surely ultimately possible?”

Huh? Of course not. Information is information and control is control. Don’t forget that as you accumulate information, so do your jailers.