You’re right about the definition of fearmongering then. I think he clearly tries to make people worried, and I often find it unreasonable. But I don’t expect everyone to think he meets the “unreasonable” criterion.
On the second quote in your top comment: indeed, most scored forecasters with a good track record don’t give a 25% risk of extinction before, say, 2200.
And as for 99%: this is wackadoodle, wildly extreme, and probably off by a factor of roughly 1,000x in odds format. If I assume the post’s implied probability is actually closer to 99%, then it seems egregious. You mention these >25% figures are not that out of place for MIRI, but what does that tell us? This domain probably isn’t that special, and humans would need to be calibrated forecasters for me to care much about their forecasts.
Here are some claims I stand by:
I genuinely think the picture painted by that post (and estimates near 99%) overstates the odds of extinction soon by a factor of roughly 1,000x. (For intuition, that’s similar to going from 10% to 99%; see the arithmetic sketch just after this list.)
I genuinely think these extreme figures are largely coming from people who haven’t demonstrated calibrated forecasting, which would make them additionally suspect in any other domain, and should make them so here too.
I genuinely think Eliezer does something harmful by overstating the odds, by an amount that isn’t reasonable.
I genuinely think it’s bad of him to criticize others’ proper-scored forecasts without being transparent about his own, so that a fair comparison could be made.
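To spell out the odds arithmetic behind that 1,000x figure (a quick sketch of the standard conversion, using the 10%-vs-99% comparison above):

$$\text{odds}(p) = \frac{p}{1-p}, \qquad \frac{\text{odds}(0.99)}{\text{odds}(0.10)} = \frac{0.99/0.01}{0.10/0.90} = \frac{99}{1/9} \approx 890 \approx 1{,}000.$$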
On insults
I’ve moved this part to the bottom of the comment because I think it’s less central to the claim I’m making. As for the criterion for “insulting” or sneering: well, a bunch of people (including me) experienced it that way. Some people I heard from described it as infuriating that he was saying these things without being transparent about his own forecasts. And yes, the following does seem to imply that other people are neither sane nor self-respecting:
> To be a slightly better Bayesian is to spend your entire life watching others slowly update in excruciatingly predictable directions that you jumped ahead of 6 years earlier so that your remaining life could be a random epistemic walk like a sane person with self-respect.
Putting aside whether or not you think I have an axe to grind, don’t you see how some people would see that as insulting or sneering?
Is there any evidence that calibrated forecasters would be good at estimating the odds of extinction when those odds are actually high? How could you ever even know? For instance, reasoning like “if we’re still alive, the odds of extinction must have been low” runs afoul of philosophical issues like anthropics.
> And as for 99%: this is wackadoodle, wildly extreme, and probably off by a factor of roughly 1,000x in odds format. If I assume the post’s implied probability is actually closer to 99%, then it seems egregious. You mention these >25% figures are not that out of place for MIRI, but what does that tell us? This domain probably isn’t that special, and humans would need to be calibrated forecasters for me to care much about their forecasts.
So I don’t understand this reasoning at all. You’re presuming that the odds of extinction are orders of magnitude lower than 99% (or whatever Yudkowsky’s actual assumed probability is), fine. But if your argument for why is that other forecasters don’t agree, so what? Maybe they’re just too optimistic. What would it even mean to be well-calibrated about x-risk forecasts?
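To be clear about the term (this is just my gloss of the usual definition): a forecaster is calibrated if, among the many resolved predictions to which they assigned probability $p$, roughly a fraction $p$ came true, i.e.

$$\Pr(\text{event occurs} \mid \text{forecast} = p) \approx p \quad \text{for all } p.$$

Verifying that requires a large set of resolved predictions, which is exactly what a one-off, unresolvable event like extinction can’t provide.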
If we were talking about stock market predictions instead, and you had evidence that your calibrated forecasters were earning more profit than Yudkowsky, I could understand this reasoning and would agree with it. But for me this logic doesn’t transfer to x-risks at all, and I’m confused that you think it does.
> This domain probably isn’t that special
Here I strongly disagree. Forecasting x-risks is rife with special problems. Besides the (very important) anthropic confounders: how do you profit from a successful prediction of doom if there’s no one left to calculate your Brier score, or to pay out on a prediction market? And forecasters are worse at estimating extreme probabilities (<1% and >99%), and at longer-term predictions. Etc.
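For reference on the scoring-rule point: the Brier score for binary events is just the mean squared error of the probability forecasts,

$$\mathrm{BS} = \frac{1}{N}\sum_{i=1}^{N}(f_i - o_i)^2,$$

where $f_i$ is the forecast probability and $o_i \in \{0,1\}$ is the observed outcome (lower is better). It can only be computed once outcomes are observed, which is exactly the problem for doom forecasts.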
Regarding the insult thing: I agree that section can be interpreted as insulting, although it doesn’t have to be. Again, the post doesn’t directly address individuals, so one could just decide not to feel addressed by it.
But I’ll drop this point; I don’t think it’s all that cruxy, nor fruitful to argue about.