I think only a tiny minority of LessWrong readers believe in cryopreservation. If people genuinely believed in it, they would not wait until they were dying to preserve themselves: the cumulative risk of death or serious mental debilitation before cryopreservation would be significant, and the consequence is the loss of (almost) eternal life, while with early cryopreservation all they have to lose is their current, finite life, in the “unlikely” event that they are not successfully reanimated. If people were actually trying to preserve themselves early, there would be a legal debate. There is none (unless I’m mistaken).
Further evidence of this lack of belief is the tiny sums that people are willing to pay. How much would you pay for eternal life? More or less than $8,219 (the present value of an annual payment of $300 in perpetuity)? That sounds too cheap to be genuine, and too expensive to waste my money on. If I genuinely believed in cryopreservation I would be spending my net worth, which for most Americans over 75 years old is > $150k. For LessWrong readers, I would guess the median net worth at age 75 would be > $1m.
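The $8,219 figure follows from the standard perpetuity formula, PV = C / r. The comment doesn’t state the discount rate used, but backing it out from the quoted numbers gives roughly 3.65% (an assumption on my part, inferred rather than stated):

```python
def perpetuity_pv(annual_payment, discount_rate):
    """Present value of a perpetuity: PV = C / r."""
    return annual_payment / discount_rate

# Back out the discount rate implied by the figures in the comment.
implied_rate = 300 / 8219
print(round(implied_rate, 4))                      # prints 0.0365
print(round(perpetuity_pv(300, 0.0365), 2))        # prints 8219.18
```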
What is the real probability? Given humans’ lack of success at making long-term predictions, I think we should admit we simply don’t know. Cryopreservation might work, but I wouldn’t stake my life or my money on it, and I think there are more important jobs to do first.
I think only a tiny minority of LessWrong readers believe in cryopreservation. If people genuinely believed in it then they would not wait until they were dying to preserve themselves, since the cumulative risk of death or serious mental debilitation before cryopreservation would be significant, the consequence is loss of (almost) eternal life…
Humans are not totally rational creatures. There are a lot of people who like the idea of cryonics but never sign up until it is very late. This isn’t a sign of a lack of “belief” (although Aris correctly notes below that that term isn’t well-defined) but rather a question of people simply not going through the necessary effort. Many humans have ugh fields around paperwork, or don’t want to send strong weirdness signals, or are worried about extreme negative reactions from their family members. Moreover, there’s no such thing as “almost” eternal life: 10^30 years is about as far from infinity as 1 is. What does matter, however, is that there are serious problems with the claim that one would get infinite utility from cryonics.
If people were actually trying to preserve themselves early then there would be a legal debate. There is none (unless I’m mistaken).
There have been some extremely tragic cases of people with serious terminal illnesses such as cancer having to wait until they died (sometimes with additional brain damage as a result). This is because the cryonics organizations are extremely weak and small; they don’t want to risk their situation by being caught up in the American euthanasia debate.
What is the real probability? Given humans’ lack of success at making long-term predictions, I think we should admit we simply don’t know. Cryopreservation might work, but I wouldn’t stake my life or my money on it, and I think there are more important jobs to do first.
This is one of the weakest arguments against cryonics. First of all, some human predictions have been quite accurate. The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy.
In looking at any specific technology we can examine it in detail and try to make predictions about when it will function. If you actually think that humans are really bad at making predictions, then you shouldn’t just say “we simply don’t know”; instead you should adjust your prediction to be less confident, closer to 50%. This means that if you assign a low probability to cryonics working, you should update towards giving it an increased chance of being successful.
“The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy.”—no. The most intelligent and able forecasters are incapable of making predictions (many of them worked in the field of AI). Your argument about updating my probability upwards because I don’t understand the future is fascinating. Can you explain why I can’t use the precise same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st 2018?
The most intelligent and able forecasters are incapable of making predictions
Yes, precisely because they suffer from the biases mentioned. Sure, predicting the future is really tough, but it isn’t helped by the presence of severe biases. It is important to realize that intelligence doesn’t mean one is less likely to be subject to cognitive biases. Nor does being an expert in a specific area render one immune: look at the classic conjunction fallacy study with the USSR invading Poland. It is true that even taking that into account, predicting the future is really hard. But if one looks for signs of the obvious biases, then most prediction problems show up immediately.
Your argument about updating my probability upwards because I don’t understand the future is fascinating. Can you explain why I can’t use the precise same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st 2018?
Well, you should probably move your uncertainty in the direction of 50%. But there’s no reason to say exactly 50%. That’s stupid. Your starting estimate for the probability of such an event happening is really small, so the overconfidence adjustment won’t be that large and will likely still keep the probability negligible after the adjustment.
This isn’t like cryonics at all. First, the relevant forecast time for cryonics working is a much longer period, extending much farther into the future than 2018. That means the uncertainty from predicting the future has a much larger impact. Also, people are actively working on the relevant technologies and have clear motivations to do so. In contrast, I don’t even know what exactly a “super-bomb” is or why someone would feel a need to use it to destroy Arizona.
So the adjustments for predictive uncertainty and general overconfidence should move cryonics a lot closer to 50% than it should for your super-bomb example.
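The adjustment being argued over can be sketched as shrinking a point estimate toward 50% in log-odds space, with a larger shrinkage weight for the harder-to-forecast case. The weights below are purely illustrative assumptions, not calibrated values; the point is only that a modest adjustment moves a 5% estimate noticeably while leaving a negligible prior negligible:

```python
import math

def shrink_toward_half(p, w):
    """Shrink probability p toward 0.5 in log-odds space.

    w in [0, 1] is an illustrative uncertainty weight: 0 leaves p
    unchanged, 1 collapses it to 0.5. Working in log-odds keeps tiny
    probabilities tiny after the adjustment.
    """
    logit = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-(1 - w) * logit))

# A 5% cryonics-style estimate moves noticeably toward 50%...
print(round(shrink_toward_half(0.05, 0.4), 3))   # prints 0.146
# ...while a negligible super-bomb-style prior stays negligible.
print(shrink_toward_half(1e-9, 0.4))
```

This matches both sides of the exchange: the adjustment is toward 50%, never to exactly 50%, and its size depends on how small the starting estimate is.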
I think only a tiny minority of LessWrong readers believe in cryopreservation. If people genuinely believed in it then they would not wait until they were dying to preserve themselves
I think you need to define your usage of the term “believe in” a bit better. What percentage of cryo success probability qualifies as “belief in cryopreservation”?
If you’re talking about probabilities over 90% -- indeed I doubt that a significant number of LessWrong readers have nearly that much certainty in cryo success.
But for any percentage below that, your arguments become weak to the point of meaninglessness—for at that point it becomes reasonable to use cryopreservation as a last resort, and hope for advancements in technology that will make cryopreservation surer—while still insuring yourself in case you end up in a position where you no longer have the luxury of waiting.
Belief is pretty unambiguous: either being sure (100% probability, like cogito ergo sum), or a strong trust (anything not near 90% probability is not belief). So it seems we are in agreement: you don’t believe in it, and neither do most LessWrong readers. I agree that, based on that argument, whether the probability is 10^-1000 or 75% is still up for debate.
If that’s your definition of belief, then it may not be that relevant. If there’s a game where someone rolls a pair of fair six-sided dice and will give me five dollars if I can guess their sum, my best strategy is to guess 7, even though I don’t, by your definition, believe that 7 will turn up. In this context it becomes a less than helpful notion.
Also, if this is what you meant, I’m a bit confused by why you brought it up. Many prominent cryonics proponents give estimates well below 90%. So what point were you trying to make?
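The dice example above is easy to check directly: 7 is the modal sum of two fair six-sided dice, yet it comes up with probability only 6/36, far below any “belief” threshold:

```python
from collections import Counter
from itertools import product

# Tally every possible sum of two fair six-sided dice (36 outcomes).
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
best_guess, ways = counts.most_common(1)[0]

print(best_guess)    # prints 7: the best guess
print(ways)          # prints 6: only 6 of 36 outcomes, i.e. p = 1/6
```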