Is there any evidence that calibrated forecasters would be good at estimating high odds of extinction, if our actual odds are high? How could you ever even know? For instance, notions like “if we’re still alive, that means the odds of extinction must have been low” run afoul of philosophical issues like anthropics.
> And as for 99%: this is wackadoodle wildly extreme, and probably off by a factor of roughly ~1,000x in odds format. If I assume the post’s implied probability is actually closer to 99%, then it seems egregious. You mention these >25% figures are not that out of place for MIRI, but what does that tell us? This domain probably isn’t that special, and humans would need to be calibrated forecasters for me to care much about their forecasts.
So I don’t understand this reasoning at all. You’re presuming that the odds of extinction are orders of magnitude lower than 99% (or whatever Yudkowsky’s actual assumed probability is), fine. But if your argument for why is that other forecasters don’t agree, so what? Maybe they’re just too optimistic. What would it even mean to be well-calibrated about x-risk forecasts?
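For concreteness (and assuming the “roughly ~1,000x in odds format” is meant literally rather than as hyperbole): converting 99% to odds, dividing by 1,000, and converting back gives about 9%:

$$
p = 0.99 \;\Rightarrow\; \text{odds} = \frac{0.99}{1-0.99} = 99, \qquad \frac{99}{1000} = 0.099 \;\Rightarrow\; p = \frac{0.099}{1+0.099} \approx 0.09.
$$

So, read literally, the disagreement on the table is roughly 9% versus 99%.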
If we were talking about stock market predictions instead, and you had evidence that your calibrated forecasters were earning more profit than Yudkowsky, I could understand this reasoning and would agree with it. But for me this logic doesn’t transfer to x-risks at all, and I’m confused that you think it does.
> This domain probably isn’t that special
Here I strongly disagree. Forecasting x-risks is rife with special problems. Besides the (very important) anthropic confounders, how do you profit from a successful prediction of doom if there’s no one left to calculate your Brier score, or to pay out on a prediction market? Forecasters are also known to be worse at estimating extreme probabilities (below 1% or above 99%) and at making longer-term predictions, and so on.
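To make the scoring problem concrete, here is a minimal Python sketch of how a Brier score would ordinarily be computed; the function name and example forecasts are made up for illustration. The point is that the score only exists once outcomes resolve and someone is around to record them, which is exactly what a doom forecast can’t provide.

```python
# Minimal sketch (not anyone's actual methodology): a Brier score is the
# mean squared error between stated probabilities and resolved outcomes
# (1 = it happened, 0 = it didn't). The example forecasts are made up.

def brier_score(forecasts):
    """forecasts: iterable of (probability, outcome) pairs, outcome in {0, 1}."""
    forecasts = list(forecasts)
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

resolved = [(0.7, 1), (0.2, 0), (0.9, 1), (0.6, 0)]  # ordinary, resolvable questions
print(brier_score(resolved))  # 0.125 -- lower is better

# An extinction forecast never yields an outcome anyone can record:
# if it resolves "yes", there is nobody left to score it, so it can't
# enter a track record (or settle a prediction-market bet) either way.
```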
Regarding the insult thing: I agree that section can be read as insulting, though it doesn’t have to be read that way. Again, the post doesn’t directly address individuals, so one could simply decide not to feel addressed by it.
But I’ll drop this point; I don’t think it’s all that cruxy, nor fruitful to argue about.