“Immortal But Damned to Hell on Earth”
With such long periods of time in play (if we succeed), the improbable hellish scenarios that might befall us become increasingly probable.
Since the probability of death never quite reaches 0, even with advanced science, death might yet be inevitable.
But the same applies to a hellish life in the meantime. And the longer the life, the more likely the survivors will envy the dead. Is there any safety in this universe? What's the best we can do?
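(A minimal sketch of the arithmetic behind this worry, my own illustration rather than anything from the article: if each year carries some fixed probability $p > 0$ of the dreaded outcome, then over $T$ years

$$P(\text{outcome by year } T) = 1 - (1 - p)^T \to 1 \quad \text{as } T \to \infty,$$

so any constant nonzero per-period risk approaches certainty given a long enough lifespan.)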
Imagine that for the last thousand years governments had the ability to send to eternal hell anyone whose body they controlled. How big would hell now be?
Edit: Imagine that it were possible to create a hell ICBM that would strike a large area, sending everyone in it to eternal hell. Given this technology, I bet the U.S. and Russia would have numerous launch-on-warning hell ICBMs aimed at each other.
Actually this sounds worse than hell. In the traditional Christian hell, you know that your life has meaning and purpose as one of God's creatures, so at least you have a transcendent reason for your suffering.
By contrast, the Atlantic article describes a nihilistic hell where your experience has no redeeming features whatsoever.
Robert Ettinger wrote a story in 1948 called "The Penultimate Trump" in which the main character, the first cryonicist, awakes seemingly triumphant only to be sent to Hell as punishment for his crimes.
Didn't some civilizations in one of Iain Banks' books run Hells, that is, computer simulations into which they uploaded the sinners (from their point of view) and made eternal hell real for them?
https://en.wikipedia.org/wiki/Surface_Detail
See also Rebecca Roache’s discussion of the topic: http://blog.practicalethics.ox.ac.uk/2013/08/enhanced-punishment-can-technology-make-life-sentences-longer/
(Kind of relevant because Ross Ulbricht was just given the maximum sentence, two concurrent life terms and so on, and he's young enough, with many favorable demographic traits (31, white, well-educated, intelligent, fit), that he could easily live another 50 years, to 2065, and who knows what will happen by then?)
Indeed, Surface Detail was an excellent book, one of the best Culture novels, imo.
Yes, technology that made immortality possible could also make torturing or punishing people forever possible, but this does not mean that death is good; rather, it means that it's important for people to have empathy, and that we need to evolve away from retributive justice.
I usually find articles like this from the deathists annoying, and this wasn’t an exception.
If we get only one thing right, it should be the right to exit.
Learn to recognize Pascal mugging (the scenario you are describing is an instance of it) and ignore it.
It’s not just a Mugging. It’s also a model that takes no account of agents trying to alter the chances of Hells. Sure, if Hell has a finite probability at any given time, then eventually it should happen, except that an agent is deliberately exerting continual optimization pressure to push that probability down over time.
P(Hell) exists, but its derivative over time is negative.
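(A rough sketch of that point, my own illustration: write $p_t$ for the probability that Hell happens in year $t$. If optimization pressure drives it down fast enough, say $p_t \le c/t^2$ for some small $c$, then by the union bound

$$P(\text{Hell ever}) \le \sum_{t=1}^{\infty} p_t \le c \sum_{t=1}^{\infty} \frac{1}{t^2} = \frac{c\pi^2}{6} < 1,$$

so a probability that is nonzero at every moment does not make the event inevitable, provided the per-period probabilities shrink fast enough for the sum to stay below 1.)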
I’m familiar with Pascal’s wager. Is mugging just its application for manipulative purposes?
For me, the convincing counter to Pascal is the recognition of an infinity of alternative possibilities: there is no more reason to subscribe to one possible God in mindspace than to another in the face of no evidence.
But this situation is different. That counter doesn’t work, and no others have come immediately to my mind. Do you know of one?
Pascal's mugging. You are doing a similar thing to yourself: focusing on one possible scary outcome to the neglect of other, equally probable ones, e.g. eternal bliss, or creating multiple clones of yourself, all enjoying an eternal happy life beyond your wildest dreams. The odds are tiny and unknown, but both risks and rewards are potentially huge. You have no frame of reference, no intuition and no brain capacity to make a rational choice in this case. So don't bother.
I don't think that we should worry about this specific scenario. Any society advanced enough to develop mind uploading technology would have an excellent understanding of the brain, consciousness and the structure of thought. In these circumstances retributive punishment would seem totally useless, since they could just change the properties of the perpetrator's brain to make him non-violent and eliminate the cause of any anti-social behaviour.
It might be a cultural thing though, as America seems to be quite obsessed with retribution. I absolutely refuse to believe any advanced society with mind uploading technology would be so petty as to use it in such a horrible way. At that point I expect they would treat bad behaviour as a software bug.
Isn’t suicide always an option? When it comes to imagining immortality, I’m like Han Solo, but limits are conceivable and boredom might become insurmountable.
The real question is whether intelligence has a ceiling at all—if not, then even millions of years wouldn’t be a problem.
Charlie Brooker's Black Mirror TV show played with the punishment idea: a mind uploaded to a cube experiencing subjectively hundreds of years in a virtual kitchen with a virtual garden, as punishment for murder (the murder was committed in the kitchen). In real time, the cube is just left on casually overnight by the "gaoler" for amusement. Hellish scenario.
(In another episode, or it might be the same one, a version of the same kind of "punishment", except just a featureless white space for a few years, is also used to "tame" a copy of a person's mind that's trained to be a boring virtual assistant for the person.)
The truly worrying scenarios are the ones which disallow escape of any kind, including suicide.
In an advanced society, anyone who wanted to do that could come up with a lot of ways to make it happen.
Not if you’re an upload.
Perhaps it was discussed in more depth before I joined LW, but I think far, far more caution should be exercised before considering that an upload could ever be you.
If you can reduce personhood to information representable in bits, it also means each and every part of it is changeable and replaceable, and thus there is no lasting essence of individuality. (My former Buddhist training is really kicking in here, although it is possible I am looking it up in a cache.) Thus there is an infinite number of potential lumps of information, each of which is "more you" or "less you" depending on the difference. Basically, from the second you think a new thought or see something new, you are not the same you anymore.
Fortunately, our lack of infinite brain plasticity protects us now from every experience radically rewiring what we are; we have an illusion of unchanging selfhood more or less because of this lack of plasticity.
Uploads are infinitely plastic. Probably nobody will care about keeping you intact just for the sentimental and nostalgic value of being attached to your former, meat-based, unplastic self. You will be changed so radically that it will not be you in any meaningful sense. Also, there is no promise that they will bother to upload many meat minds. They may as well figure that uploading one Really Nice Person and making a hundred billion copies delivers more utility.
And quite frankly, if we give up all our last shreds of illusory attachment to having souls, I am not sure we will care about utility anyway. I think I find it hard to care about whether a mere algorithm feels joy or suffering. After all, a mere algorithm can put the label "joy" or "suffering" on anything. For an algorithm, what is even the difference between "real" suffering and simply putting the word, the label, the referent "suffering" on certain things? I need the illusion of some scrap of a not-literally-supernatural-but-it-feels-so type of soul to know the difference between suffering and "suffering". A software function that basically goes print("OUCH! Augh! Nooo!...") does not actually suffer, and I think that "actualness" is where the supernaturalistic illusion is necessary.
Otherwise, we would just engineer out the ability to suffer from the upload, and/or find the function that takes experiences as input, judges them, and emits joy as output, and change it so that it always emits joy. From that point on, we would not care about the world.
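(A toy sketch of that last move, purely my own illustration with hypothetical names: the "valence function" the comment imagines, and the patch that makes it emit joy regardless of input.)

```python
# Toy illustration (hypothetical names): an upload's experience-appraisal
# function, and the "wireheading" patch described above.

def appraise(experience: str) -> str:
    """Maps an experience to a felt-valence label."""
    return "suffering" if "pain" in experience else "joy"

print(appraise("the pain of eternal imprisonment"))  # -> suffering

# The patch: replace the evaluator with one that emits joy for every input.
appraise = lambda experience: "joy"

print(appraise("the pain of eternal imprisonment"))  # -> joy
```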
I am ignoring here all the problems with the concept of an upload (or an “em” in Hanson’s terminology) -- that’s a separate subject altogether.
For the record, I don’t subscribe to the Hansonian view of a society of ems.
Oh, true for the "uploaded prisoner" scenario; I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted. Clearly suicide would be possible for them.
But even for the "uploaded prisoner", given sufficient time it would be possible: there's no absolute impermeability to information anywhere, is there? And where there's information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice, via flashing lights, to gnaw the wires :) )
But that reminds me of the problem of trying to isolate an AI once built.
That is not self-evident to me at all. If you don't control the hardware (and the backups), how exactly would that work? As a parallel, imagine yourself as a sole mind, without a body. How will your sole mind kill itself?
Huh? Of course not. Information is information and control is control. Don't forget that as you accumulate information, so do your jailers.
Is there a situation so terrible you could never adapt to it?