This depends heavily on assumptions. Consider this: the oldest cryonics patients have been preserved for more than 30 years, and the patient loss rate per decade for reasonably well-funded cryonics organizations is currently zero.
If you check a chart of causes of death, the overwhelming majority are causes for which a cryonics standby team could be present.
In some of these cases, however, you would have to choose a legal method of hastening death (such as voluntarily dehydrating yourself to the point of death); otherwise your brain would deteriorate from progressive disease to the point of probably being non-viable for a future revival.
As for long-term risks: ultimately these depend on your perception of the risks to human civilization, and of the chance of eventually developing a form of nanotechnology that could scan your frozen brain and create an emulation at minimal cost. I personally don't think there are many plausible events that could cause civilization to fail, and I consider the development of that nanotechnology almost certain. I cannot imagine a future world in which no commercial or governmental entity would eventually have extreme motivation to develop the technology, given the incredible advantages it would grant.
This is my personal bias, perhaps, but let’s look at this a bit more rationally.
a. How could a civilization-ending event actually happen? Is full nuclear escalation the most likely outcome, or are exchanges that stop after a city or two are destroyed more probable?
b. What could stop a civilization from developing molecular tools capable of self-replication? Living cells are an existence proof that such tools are possible, and developing them would give the entity that possessed them incredible power and wealth.
c. Cryonics organizations have already survived 30 years; perhaps they need to survive another 90 or 120. They have more money and resources today than at their founding, decreasing the probability of failure with each passing year. What is the chance that they will not survive the remaining time? In another 20 years, they might have hardened facilities in the desert with backup power and on-site liquid nitrogen production.
And so on. This is a complicated question, but I have an educated hunch that the risks of failure for cryonics are lower than many of the published estimates suggest. I suspect that many of those estimates are made by people who are biased toward excessive skepticism, and/or motivated to find a reason not to spend hundreds of thousands of dollars, preferring shorter-term gains.
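The organizational-survival point in (c) can be made concrete with a toy compounding calculation. The per-decade failure rates below are illustrative assumptions for the sketch, not measured values:

```python
# Toy model: probability that a cryonics organization survives the remaining
# storage period, assuming a constant, independent failure probability per
# decade. The rates tried below are illustrative assumptions, not data.

def survival_probability(p_fail_per_decade: float, decades: int) -> float:
    """Chance of surviving `decades` consecutive decades without failure."""
    return (1.0 - p_fail_per_decade) ** decades

for p in (0.01, 0.05, 0.10):
    for decades in (9, 12):  # i.e. 90 or 120 more years, as discussed above
        print(f"p_fail={p:.2f}, {decades} decades: "
              f"{survival_probability(p, decades):.3f}")
```

Even a 5% failure chance per decade leaves roughly a 63% chance of lasting another 90 years, which is why the trend of organizations growing sturdier over time matters so much.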
The civilization-ending risks are the most worrying from my point of view. Basically, I see a couple of scenarios:
Technology never gets anywhere near the point where we can revive frozen brains. Industrial civilization collapses first through a combination of resource constraints, environmental damage, and devastating wars; most likely these all happen together and feed on each other. This doesn't immediately cause human extinction, but the probability of a future industrial civilization arising from the ruins is very low, because all the easily extracted fossil fuels, ores, etc. have already been used up.
Technology continues to advance to the point where revival becomes distinctly feasible, but such advanced technology also brings very high and increasing existential risks: genetically engineered plagues, molecular nanotechnology used as a weapon, strong but unfriendly AI. The probability of avoiding all of these risks is low.
It’s a nasty dilemma really, and cryonic revival can only happen if we somehow avoid both horns.
That's on top of a separate concern: that cryonics as currently practised simply comes too late to avoid truly irreversible brain damage (what is sometimes called "information-theoretic death"). If critical information about a person's mind has already been lost before freezing, then no future technology, however advanced, can restore that mind. I don't know enough about how minds are stored in brains to settle that concern, but I'm not confident. Freezing immediately at the point of bodily death (or shortly before) looks much more likely to work, but it happens to be illegal.
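The failure modes discussed so far combine multiplicatively, in the style of the spreadsheet estimates: revival requires every step to succeed, so the overall probability is the product of the steps. The numbers below are placeholders chosen to show the structure, not anyone's actual estimates:

```python
# Multiplicative estimate of revival probability: each failure mode must be
# avoided, so the overall probability is the product of the step probabilities.
# Every number here is a placeholder assumption for illustration only.

steps = {
    "preserved before information-theoretic death": 0.5,
    "organization survives the storage period":     0.8,
    "no civilizational collapse (Horn 1)":          0.7,
    "no existential catastrophe (Horn 2)":          0.6,
    "revival technology is developed and applied":  0.5,
}

p_revival = 1.0
for step, p in steps.items():
    p_revival *= p
    print(f"{step}: {p:.2f}")

print(f"overall probability of revival: {p_revival:.3f}")
```

The structural point is that a chain of individually decent odds multiplies down to a small number, which is why even optimists about each step can arrive at modest overall estimates.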
How, precisely, would this happen? We aren't writing sci-fi here. There are dozens of countries on this planet with world-class R&D underway every day. The key technology needed to revive frozen brains is nanoscale machine tools versatile enough to aid in manufacturing more copies of themselves. This sort of technology would change many industries, and in the short term it would give its developers (assuming they had some means of keeping control of it) enormous economic and military advantages.
a. Economic: these tools would be cheap in mass quantities because they can be used to make themselves. Nearly any manufactured good made today could probably be duplicated, without the elaborate and complex manufacturing chains required today. Also, the products would be very close to atomically perfect, so there would be little need for quality control.
b. Military: high-end weapons (jets, drones, tanks, etc.) are among the most expensive products to manufacture, for a myriad of reasons. Nanoscale printers would drop the marginal price of each additional copy of a weapon to rock bottom.
A civilization armed with these tools would, of course, not be worried about resources or environmental damage.
a. Many resource deposits are not feasible to exploit today because we cannot manufacture mining robots at rock-bottom prices and send them after these low-yield sources.
b. We suffer from a lack of energy because solar panels and high-end batteries have high manufacturing costs (the raw materials are mostly very cheap). The same goes for nuclear reactors.
c. We cannot reverse environmental damage because we cannot afford to manufacture square miles' worth of machinery to reverse it (mostly CO2 and other greenhouse-gas capture plants, but also robots to clean up various messes).
I say we revive people as computer emulations as soon as possible, to give us a form of friendly AI that we can more or less trust. These emulated people could be run at high speed, duplicated many times, and used to counter the other risks.
I agree with you entirely on the irreversible brain damage. I think this problem can be fixed with a systematic effort to solve it (plus a legal workaround or a change to the laws), but that requires resources that Alcor and CI lack at the moment.
"Horn 1" of the dilemma is a Limits to Growth style crisis. It's perfectly possible that such a limits-crisis arrives before the technology needed to expand the limits shows up to save us. (The early signs would be a major recession that never seems to end, while funding for speculative ideas like nano-machines dries up.) Another analogy: crossing a desert with a small, leaky bottle of water and an ever-growing thirst. On the edge of the desert there is a huge lake, and the traveller who reaches it will never be thirsty again. But it's still possible to die before reaching the lake.
I see you think that the technology will arrive in time, which is a legitimate view, but then that also creates big risks of a catastrophe (we reach the lake, and it is poisonous… oops). This is “Horn 2”.
My own experience with exciting new technologies is a bit jaded, and my probability assessment for Horn 1 has been moving upward over the years. Radically new technology always takes much longer to deploy than first expected, development never proceeds at the engineers' preferred pace, and there can be maddeningly long delays in getting anyone to deploy even tech that has been proven to work. The people who provide the money usually have their own agenda, and it slows everything down. Space technology is one example; nanotechnology looks like another (huge excitement and major funding, but almost none of it going into the development of true nano-machines along Drexler's lines).
"I suspect that many of the estimates are made by people who suffer from biases towards excessive skepticism, and/or are motivated to find a way to not spend hundreds of thousands of dollars, preferring shorter term gains."
The two estimates I linked to are both from people who have signed up for cryonics; the second is Robin Hanson's. On the spreadsheet, as far as I know, the only estimate from someone who has not signed up is mine.