“The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy.”—no. The most intelligent and able forecasters are incapable of making predictions (many of them worked in the field of AI). Your argument about updating my probability upwards because I don’t understand the future is fascinating. Can you explain why I can’t use the exact same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st, 2018?
“The most intelligent and able forecasters are incapable of making predictions”
Yes, precisely because they suffer from the biases mentioned. Sure, predicting the future is really tough, but it isn’t helped by the presence of severe biases. It is important to realize that being intelligent doesn’t make one less likely to be subject to cognitive biases. Nor does being an expert in a specific area render one immune: look at the classic conjunction fallacy study in which professional forecasters rated “a Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the USSR” as more probable than a suspension of diplomatic relations alone. It is true that, even taking that into account, predicting the future is really hard. But if one looks for signs of the obvious biases, the problems with most predictions show up immediately.
“Your argument about updating my probability upwards because I don’t understand the future is fascinating. Can you explain why I can’t use the exact same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st, 2018?”
Well, you probably should move your estimate in the direction of 50%. But there’s no reason to say exactly 50%. That’s stupid. Your starting estimate for the probability of such an event is really small, so the overconfidence adjustment won’t be that large, and the probability will likely remain negligible after the adjustment.
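To make that arithmetic concrete, here is a minimal sketch in Python of one way the adjustment could work, assuming a simple shrink-toward-50% rule in log-odds space; the rule, the distrust weight, and the prior are all illustrative assumptions, not anything specified in this thread.

```python
import math

def logit(p: float) -> float:
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def adjust_toward_half(prior: float, distrust: float) -> float:
    """Shrink a probability estimate toward 0.5 in log-odds space.

    `distrust` in [0, 1] says how much of the estimate's information
    we discount: 0 keeps the prior, 1 collapses everything to 0.5.
    This shrinkage rule is an illustrative assumption, not a method
    given in the discussion.
    """
    shrunk_log_odds = (1 - distrust) * logit(prior)
    return 1 / (1 + math.exp(-shrunk_log_odds))

# Illustrative numbers only: even a sizable distrust weight leaves a
# truly tiny prior negligible, because cutting strongly negative
# log-odds by a fraction still leaves them strongly negative.
print(adjust_toward_half(1e-9, 0.2))  # ~6e-8: adjusted upward, still negligible
```

The point of the log-odds form is that it matches the argument above: modest distrust barely moves an extreme estimate, while heavy distrust drags any estimate toward 50%.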
This isn’t like cryonics at all. First, the relevant forecast window for cryonics working is much longer and extends much farther into the future than 2018. That means the uncertainty from predicting the future has a much larger impact. Also, people are actively working on the relevant technologies and have clear motivations to do so. In contrast, I don’t even know what exactly a “super-bomb” is or why someone would feel a need to use one to destroy Arizona.
So the adjustments for predictive uncertainty and general overconfidence should move cryonics a lot closer to 50% than they should move your super-bomb example.
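Continuing the sketch above (reusing adjust_toward_half), one hypothetical way to encode the horizon point is to let the distrust weight grow with how far out the forecast reaches; the time constant below is an arbitrary illustrative choice.

```python
def horizon_distrust(years_out: float, tau: float = 50.0) -> float:
    """Distrust weight that grows with forecast horizon.

    `tau` (years) sets how fast predictive uncertainty accumulates;
    its value here is an arbitrary illustrative choice.
    """
    return 1 - math.exp(-years_out / tau)

# Illustrative comparison, not estimates from the thread: a roughly
# two-year super-bomb claim versus a century-scale cryonics forecast.
print(adjust_toward_half(1e-9, horizon_distrust(2)))    # ~2e-9: barely moves
print(adjust_toward_half(0.05, horizon_distrust(100)))  # ~0.40: pulled well toward 50%
```

Under this toy model the near-term, tiny-prior claim stays negligible, while the long-horizon forecast ends up much closer to 50%, which is exactly the asymmetry the reply is pointing at.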