I think only a tiny minority of LessWrong readers believe in cryopreservation. If people genuinely believed in it, they would not wait until they were dying to preserve themselves: the cumulative risk of death or serious mental debilitation before cryopreservation is significant, and the consequence is the loss of (almost) eternal life.
Humans are not totally rational creatures. There are a lot of people who like the idea of cryonics but never sign up until it is very late. This isn’t a sign of a lack of “belief” (although Aris correctly notes below that that term isn’t well-defined) but rather a question of people simply going through the necessary effort. Many humans have ugh fields around paperwork, or don’t want to send strong weirdness signals, or are worried about extreme negative reactions from their family members. Moreover, there’s no such thing as “almost” eternal life: 10^30 is about as far from infinity as 1 is. What does matter, however, is that there are serious problems with the claim that one would get infinite utility from cryonics.
If people were actually trying to preserve themselves early, then there would be a legal debate about it. There is none (unless I’m mistaken).
There have been some extremely tragic cases involving people with serious terminal illnesses such as cancer having to wait until they died (sometimes with additional brain damage as a result). This is because the cryonics organizations are extremely weak and small: they don’t want to risk their situation by being caught up in the American euthanasia debate.
What is the real probability? I think the lack of success humans have had in making long-term predictions suggests that we should admit we simply don’t know. Cryopreservation might work, but I wouldn’t stake my life or my money on it, and I think there are more important jobs to do first.
This is one of the weakest arguments against cryonics. First of all, some human predictions have been quite accurate. The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy.
In looking at any specific technology, we can examine it in detail and try to make predictions about when it will function. If you actually think that humans are really bad at making predictions, then you shouldn’t just say “we simply don’t know”; instead you should adjust your prediction to be less confident, closer to 50%. This means that if you assign a low probability to cryonics working, you should update towards giving it an increased chance of being successful.
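To make that concrete, here is a minimal sketch (my own illustration, not anything standard) of one way to damp an estimate toward 50%: shrink it in log-odds space, so extreme estimates get pulled in without erasing orders-of-magnitude differences between them.

```python
import math

def shrink_toward_half(p, reliability):
    """Pull a probability estimate toward 0.5 to account for overconfidence.

    reliability lies in (0, 1]: 1.0 means the estimate is trusted fully,
    and values near 0 mean we know almost nothing, landing near 0.5.
    Shrinking in log-odds space keeps tiny probabilities tiny.
    """
    log_odds = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-reliability * log_odds))

# A 5% estimate for cryonics, held with substantial predictive uncertainty:
print(shrink_toward_half(0.05, 0.5))  # ~0.187: noticeably closer to 50%
```

The log-odds choice is the point of the sketch: a straight average with 0.5 would drag even a one-in-a-billion estimate up to tens of percent, which is not what “adjust toward 50%” should mean.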
“The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy.” No. The most intelligent and able forecasters have proven incapable of making accurate predictions (many of them worked in the field of AI). Your argument about updating my probability upwards because I don’t understand the future is fascinating. Can you explain why I can’t use the precise same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st 2018?
The most intelligent and able forecasters have proven incapable of making accurate predictions
Yes, precisely because they suffer from the biases mentioned. Sure, predicting the future is really tough, but it isn’t helped by the presence of severe biases. It is important to realize that intelligence doesn’t mean one is less likely to be subject to cognitive biases. Nor does being an expert in a specific area render one immune: look at the classic conjunction fallacy study with the USSR invading Poland. It is true that even taking that into account, predicting the future is really hard. But if one looks for signs of the obvious biases, then most prediction problems show up immediately.
Your argument about updating my probability upwards because I don’t understand the future is fascinating. Can you explain why I can’t use the precise same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st 2018?
Well, you probably should move your estimate in the direction of 50%. But there’s no reason to say exactly 50%; that’s stupid. Your starting estimate for the probability of such an event happening is really small, so the overconfidence adjustment won’t be that large, and it will likely still keep the probability negligible after the adjustment.
This isn’t like cryonics at all. First, the relevant forecast time for cryonics working is a much longer period, and it extends much farther into the future than 2018. That means the uncertainty from predicting the future has a much larger impact. Also, people are actively working on the relevant technologies and have clear motivations to do so. In contrast, I don’t even know what exactly a “super-bomb” is or why someone would feel a need to use it to destroy Arizona.
So the adjustments for predictive uncertainty and general overconfidence should move cryonics a lot closer to 50% than they move your super-bomb example.
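To illustrate that asymmetry with the same kind of shrinkage sketched above (the starting numbers here are hypothetical, chosen only to show the shape of the argument): a modest estimate under heavy uncertainty moves a lot, while a negligible estimate under mild uncertainty stays negligible.

```python
import math

def shrink_toward_half(p, reliability):
    # Same damping as in the earlier sketch: scale the log-odds,
    # which pulls estimates toward 0.5 while preserving
    # orders-of-magnitude differences between them.
    log_odds = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-reliability * log_odds))

# Hypothetical starting estimates, not figures from the discussion:
cryonics = shrink_toward_half(0.05, 0.5)    # long horizon -> heavy damping
super_bomb = shrink_toward_half(1e-9, 0.8)  # short horizon -> mild damping

print(f"cryonics:   {cryonics:.3f}")   # ~0.187: a lot closer to 50%
print(f"super-bomb: {super_bomb:.1e}") # ~6.3e-8: still negligible
```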