If you wake up not too severely damaged and in a decent environment (possibly with all kinds of wonderful improvements), where your life will be better than non-existence, you will have a lot more time for living. If not, you can always kill yourself.
If you get yourself frozen only for revival upon major life-extension breakthroughs, repair of freezing damage, and so on, then the possibilities that matter are the probability of a happy revival versus the probability of an unhappy revival in which you can’t kill yourself.
I’m not aware of there ever having been any actual supervillains. I’m aware that people are enslaved and forbidden from killing themselves, but almost never are they actually prevented from doing so. Who cares about their slaves little enough to forbid them from killing themselves, yet enough to diligently enforce the rule? (Unless you are short on slaves, which anyone with the resources to revive you in order to enslave you wouldn’t be.)
Having to kill yourself would suck, but it puts a comparatively low cap on your maximum loss in the vast majority of scenarios. I’m not sure it can even be called a loss, as it replaces having to die of old age or illness in the scenario where you don’t freeze yourself.
Also, you are probably underestimating the extent to which advancements over the years would improve your quality of life.
While the possibility of the bad scenarios does reduce the expected value of freezing, it’s on a different order of magnitude from the potential benefits, because the vast majority of the bad scenarios can be opted out of.
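To make the expected-value point concrete, here is a toy calculation. Every probability and utility below is a made-up illustrative number, not an estimate from the discussion; the point is only that capping the loss in the escapable scenarios leaves the upside dominant.

```python
# Toy expected-value comparison for cryonics, per the argument above.
# All probabilities and utilities are hypothetical placeholders.

# Scenario: (probability, utility in arbitrary "life-value" units)
scenarios = {
    "happy revival":                   (0.05, 1000.0),
    "unhappy revival, can opt out":    (0.04,  -10.0),   # suicide caps the loss
    "unhappy revival, cannot opt out": (0.01, -500.0),
    "never revived":                   (0.90,    0.0),   # same as not freezing
}

# Sanity check: the scenarios should be exhaustive.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(p * u for p, u in scenarios.values())
print(f"Expected value of freezing: {expected_value:+.1f}")  # → +44.6
```

Even with the inescapable-bad-outcome scenario included, the expected value stays positive here, because that scenario has to be both bad and low-probability to be plausible.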
One thing behaviorally close to an actual supervillain is bureaucracy.
So the realistic dystopian scenario is that you are revived by employees of some future Department of Historical Care. Personally, those people don’t care about you at all; you are just another prehistoric ape to them. All they want is to collect their salaries with as little work as possible.
They don’t care about the costs of your revival, because those costs are paid by the state, from the taxes of citizens who get some epsilon of warm fuzzies for saving prehistoric people. They don’t care about your pain, because emotionally you mean nothing to them; they don’t even consider you human. But they do care about your life, because their salaries depend on how many revived prehistoric people survive. So their highest priority is to prevent your suicide, and they can use the technology of the future for this; for example, they can prevent you from moving at all and feed you intravenously.
People outside the Department of Historical Care will not save you, because they honestly don’t care about you. They get some warm fuzzies from knowing that you are alive (and from imagining how grateful you must be for this), but they have no desire to meet you personally. It’s a future where they have things much more interesting than you; for example, genetically engineered Pokémon, artificial intelligences, etc.
And you might have to keep replaying the more interesting (that is, painful) parts of history.
Not if you don’t have the courage to do such things. Not if you wake up damaged and unable to access or use any means of suicide. Not if you wake up as the subject of medical experiments. Being a slave isn’t the only horrible outcome that could happen.
Prisoners are generally prevented from killing themselves, as are the insane. What if the society of the future simply thinks it’s wrong for you to kill yourself and won’t let you do it?
There’s a general category of waking up to find yourself in a low-status situation. This would include slavery, torture, imprisonment (we don’t know what they’ll consider to be a crime), and the one I think is most likely—that you’ll simply never be able to catch up. If you’re going to be you, you’re going to have a mind which was shaped by very different circumstances from the people in the future. Life might be well worth living or intermittently well worth living, but you will never be a full member of the society.
Is there any science fiction about fairly distinct cohorts of people from different times in a high-longevity and/or cryonics society?
If you’re revived via whole brain emulation (dramatically easier, and thus more likely, than trying to convert a hundred kilos of flaccid, poisoned cell edifices into a living person), then you could easily be prevented from killing yourself.
That said, whole brain emulation ought to be experimentally feasible in, what, fifteen years? At a consumer price point in forty? (Assuming the general trend of Moore’s law stays constant.) That’s little enough time that I think the probability of such a dystopian future is not incredibly large, especially since Alcor et al. can move around if the laws start to get draconian. So it doesn’t just require an evil empire; it requires a global evil empire.
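As a sanity check on that timeline, here is the underlying Moore’s-law arithmetic. The two-year doubling period, and the assumption that the trend continues at all, are illustrative, not claims from the comment:

```python
# Rough Moore's-law extrapolation: compute-per-dollar doubling every
# ~2 years (an assumed doubling period; whether the trend holds is
# itself an assumption).
doubling_period_years = 2.0

def capacity_multiplier(years: float) -> float:
    """Factor by which compute-per-dollar grows over `years`."""
    return 2.0 ** (years / doubling_period_years)

print(f"15 years: ~{capacity_multiplier(15):,.0f}x")  # experimental feasibility
print(f"40 years: ~{capacity_multiplier(40):,.0f}x")  # consumer price point
```

Roughly a millionfold improvement over forty years is what lets an experimental-scale technology reach a consumer price point, if the trend holds.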
The real risk is that Alcor will fold before that happens, and (for some reason) won’t plastinate the brains they have on ice. In which case, you’re back in the same boat you started in.
Maybe, but scanning a vitrified brain with such a high resolution that a copy would feel more or less like the same person might take a bit longer.
Most of the sensible people seem to be saying that the relevant neural features can be observed at a 5 nm × 5 nm × 5 nm spatial resolution, if supplemented with some gross immunostaining to record specific gene expressions and chemical concentrations. We already have SEM setups that can scan vitrified tissue at around that resolution; they’re just (several) orders of magnitude too slow. Outfitting them to do immunostaining and optical scanning would be relatively trivial. Since multi-beam SEMs are expected to dramatically increase the scan rate in the next couple of years, and since you could get excellent economies of scale from scanning on parallel machines, I do not expect the scanners themselves to be the bottleneck technology.
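To see why scan rate, not resolution, is the worry, here is the back-of-the-envelope voxel count at that resolution. The brain volume and bytes-per-voxel figures are rough assumptions of mine, not numbers from the comment:

```python
# Back-of-the-envelope: how many 5 nm voxels are in a human brain?
# Brain volume (~1.4 liters) and 1 byte/voxel are rough assumptions.
brain_volume_m3 = 1.4e-3    # ~1.4 liters
voxel_edge_m = 5e-9         # 5 nm, per the resolution cited above
voxel_volume_m3 = voxel_edge_m ** 3

n_voxels = brain_volume_m3 / voxel_volume_m3
print(f"Voxels: ~{n_voxels:.1e}")  # about 1.1e+22

# At one byte per voxel, the raw data volume:
print(f"Raw data: ~{n_voxels / 1e21:.0f} zettabytes")
```

Around 10^22 voxels is why single-beam scan rates are several orders of magnitude short, and why massively parallel scanning is the natural answer.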
The other possible bottleneck is the actual neuroscience, since we’ve got a number of blind spots in the details of how large-scale neural machinery operates. We don’t know all the factors we would need to stain for, we don’t know all of the details of how synaptic morphology correlates with statistical behavior, and we don’t know how much detail we need in our neural models to preserve the integrity of the whole (though we have some solid guesses). We also do not, to the best of my knowledge, have reliable computational models of glial cells at this point. There are also a few factors of questionable importance, like passive neurotransmitter diffusion and electrical induction that need further study to decide how (if at all) to account for them in our models. However, progress in this area is very rapid. The Blue Brain project alone has made extremely strong progress in just a few years. I would be surprised if it took more than fifteen years to solve the remaining open questions.
Large scale image processing and data analytics, for parsing the scan images, is a sufficiently mature science that it’s not my primary point of concern. What could really screw it up is if Moore’s law craps out in ten years like Gordon Moore has predicted, and none of the replacement technologies are advanced enough to pick up the slack.
WRONG! If they’re able to re-animate preserved people, what makes you think they won’t be able to prevent suicide?
What if they don’t believe in a right to die? There’s no guarantee that you’ll be able to die, if you wake up in a world where cryo revival actually worked.
Or, if I woke up disabled or in an R2D2 robot body, how would I actually go about killing myself? I mean, you can say “roll off a cliff” but if there are no cliffs nearby, or the thing is made out of titanium?
There is no guarantee I’d be able to die in that scenario.
I think you’re underestimating the extent to which advancements may cause catastrophes. We made all these chemicals and machines, and now the environment is being destroyed. We made x-ray machines, and the first technicians to use them would x-ray their own hands each morning to see if the machine was on; you can imagine what resulted. We’ve learned a lot about science in the last 100 years, great, but now we have nuclear bombs. We may make AI, and there are about 10,000 ways for that to go wrong. I don’t assume technological advancement will lead to a utopia. I hope it does. But to assume that it will is a bad idea. I’d be very interested to see a thorough and well-thought-out prediction of whether we’ll have a utopia or dystopia in the future, or something that’s neither. I’m really not sure.
Worse: a sensible system would in fact not ONLY give you a “robot body made of titanium” but would maintain multiple backup copies in vaults (for security reasons, not all of the physical vault locations would be known to you, or to anyone) and would use systems to constantly stream updated memory-state data to these backup records (stored as incremental backups, of course).
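The scheme described (one full snapshot plus a constant stream of incremental updates) is an ordinary storage pattern. A minimal sketch, with the “memory state” as a stand-in dictionary; nothing here is a real backup protocol:

```python
# Minimal sketch of full-snapshot + incremental backups, as described
# above. The "memory state" is just a dict; everything is a stand-in.
import copy

class IncrementalVault:
    def __init__(self, initial_state: dict):
        self.snapshot = copy.deepcopy(initial_state)  # one full backup
        self.increments = []                          # stream of deltas

    def stream_update(self, changed: dict):
        """Record only the keys that changed since the last update."""
        self.increments.append(dict(changed))

    def restore(self) -> dict:
        """Rebuild the latest state: start from the snapshot, replay deltas."""
        state = copy.deepcopy(self.snapshot)
        for delta in self.increments:
            state.update(delta)
        return state

vault = IncrementalVault({"memories": 1000, "alive": True})
vault.stream_update({"alive": False})   # the "successful" suicide...
vault.stream_update({"alive": True})    # ...undone on restore from backup
print(vault.restore())  # → {'memories': 1000, 'alive': True}
```

The design point is that each increment is cheap to stream, while a restore only needs the last full snapshot plus the delta log, which is exactly what makes the dystopian always-restorable scenario technically unremarkable.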
More than likely, the outcome of “successfully” committing suicide would be to wake up again and face some form of negative consequences for your actions. Suicide could actually be prosecuted as a crime.