A couple of questions:

1. It’s quite easy and common to insult groups of people, and some other people and I found him very sneering in that post. To count as “calling them excruciatingly predictable”, it seems like you’re suggesting Eliezer would have had to name specific people, and that it doesn’t count if it’s about a group (the people who had placed forecasts on that question)? If yes, why?
2. For the post that I described as fearmongering, whether his “intention” was fearmongering or not is beside the point; I’d like you to elaborate on that. The post has a starkly doomsday attitude. We could just say it’s an April Fool’s joke, but the problem with that retort is that Eliezer has said quite a few things with a similar attitude elsewhere. And in the section “addressing” whether it’s an April Fool’s joke, he first suggests that it is, but then implies that he intends for the reader to take the message very seriously, so not really.
Roughly, the post seems to imply a chance of imminent extinction that is something like a factor of ~100x higher, in odds format, than what scored, aggregated forecasters give. Such an extreme prediction could indeed be described as fearmongering.
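To make the odds-format comparison concrete, here is a minimal sketch; the specific probabilities are purely illustrative and not anyone’s actual forecast:

```python
def to_odds(p):
    """Convert a probability to odds, i.e. p : (1 - p)."""
    return p / (1 - p)

def odds_ratio(p_high, p_low):
    """Factor separating two probabilities when both are expressed as odds."""
    return to_odds(p_high) / to_odds(p_low)

# Purely illustrative numbers, not anyone's actual forecast: a gap between a
# ~1% aggregate and a ~50% implied probability is roughly a 100x difference
# in odds, even though it "only" looks like 49 percentage points.
print(odds_ratio(0.50, 0.01))  # ≈ 99
```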
To count as “fearmongering”, are you saying he would have had to be motivated specifically by fearmongering? Because that’s what your last sentence suggests.
Regarding 1: Drop your antagonism towards Yudkowsky for a moment and consider how that quote could be read as not insulting. It simply says “people are slowly updating in excruciatingly predictable directions”.
Unless you have an axe to grind, I don’t understand how you immediately interpret that as “people are excruciatingly predictable”.
The point is simply: Yudkowsky has been warning about this AI stuff forever, and has gotten increasingly worried (as evidenced by the post you called “fearmongering”). And AI timeline forecasts keep getting shorter and shorter (an “excruciatingly predictable” direction), rather than sometimes getting longer as well (which is what a “random epistemic walk” would look like).
Finally, I’m still not seeing how this is in good faith. You interpreted a quote as insulting and now call it “very sneering”, then wrote something of your own that I find 10x more insulting and sneering (the second quote in my top comment). That seems like a weird double standard.
Regarding 2: “The post has a starkly doomsday attitude.” Yes. However, fearmongering, as a dictionary describes it, is “the action of intentionally trying to make people afraid of something when this is not necessary or reasonable”. If someone thinks we’re actually doomed and writes a post saying so, that’s not fearmongering. Yudkowsky simply reported his firmly held beliefs. Yes, those are much, much, much more grim than what scored, aggregated forecasters believe. But they’re not out of place for MIRI overall (with five predictions all putting the risk somewhere in the range of 25%–99%).
To me, the main objectionable part of that post is the April Fool’s framing, which seems to have been chosen because of Yudkowsky’s worry that people who would otherwise despair needed an epistemic out, or something. I understand that worry, but it’s led to so much confusion that I’m doubtful it was worth it. Anyway, this comment clarified things for me.
You’re right about the definition of fearmongering then. I think he clearly tries to make people worried, and I often find it unreasonable. But I don’t expect everyone to think he meets the “unreasonable” criterion.
On the second quote in your top comment: indeed, most scored forecasters with a good track record don’t give a 25% risk of extinction before, say, 2200.
And as for 99%: this is wackadoodle wildly extreme, and probably off by a factor of roughly ~1,000x in odds format. If I assume the post’s implied probability is actually closer to 99%, then it seems egregious. You mention these >25% figures are not that out of place for MIRI, but what does that tell us? This domain probably isn’t that special, and humans would need to be calibrated forecasters for me to care much about their forecasts.
Here are some claims I stand by:
I genuinely think the picture painted by that post (and estimates near 99%) overstates the odds of extinction soon by a factor of roughly ~1,000x. (For intuition, that’s similar to going from 10% to 99%; see the quick check below these claims.)
I genuinely think these extreme figures are largely coming from people who haven’t demonstrated calibrated forecasting, which would make them additionally suspect in any other domain, and should do so here too.
I genuinely think Eliezer does something harmful by overstating the odds by an amount that isn’t reasonable.
I genuinely think it’s bad of him to criticize other proper-scored forecasts without being transparent about his own, so that a fair comparison could be made.
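As a quick check of that “10% to 99%” intuition: the move really is about three orders of magnitude in odds,

\[
\frac{\mathrm{odds}(0.99)}{\mathrm{odds}(0.10)} = \frac{0.99/0.01}{0.10/0.90} = \frac{99}{1/9} \approx 890 \approx 10^{3}.
\]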
On insults
This part I’ve moved to the bottom of this comment because I think it’s less central to the claim I’m making. As for the criteria for “insulting” or “sneering”: well, a bunch of people (including me) read it that way. Some people I heard from described it as infuriating that he was saying these things without being transparent about his own forecasts. And yes, the following does seem to imply that other people are neither sane nor self-respecting:
“To be a slightly better Bayesian is to spend your entire life watching others slowly update in excruciatingly predictable directions that you jumped ahead of 6 years earlier so that your remaining life could be a random epistemic walk like a sane person with self-respect.”
Putting aside whether or not you think I have an axe to grind, don’t you see how some people would see that as insulting or sneering?
Is there any evidence that calibrated forecasters would be good at estimating high odds of extinction, if our actual odds are high? How could you ever even know? For instance, notions like “if we’re still alive, that means the odds of extinction must have been low” run afoul of philosophical issues like anthropics.
You wrote: “And as for 99%: this is wackadoodle wildly extreme, and probably off by a factor of roughly ~1,000x in odds format. If I assume the post’s implied probability is actually closer to 99%, then it seems egregious. You mention these >25% figures are not that out of place for MIRI, but what does that tell us? This domain probably isn’t that special, and humans would need to be calibrated forecasters for me to care much about their forecasts.”
So I don’t understand this reasoning at all. You’re presuming that the odds of extinction are orders of magnitude lower than 99% (or whatever Yudkowsky’s actual assumed probability is); fine. But if your argument for that is just that other forecasters disagree, so what? Maybe they’re just too optimistic. What would it even mean to be well-calibrated about x-risk forecasts?
If we were talking about stock market predictions instead, and you had evidence that your calibrated forecasters were earning more profit than Yudkowsky, I could understand this reasoning and would agree with it. But for me this logic doesn’t transfer to x-risks at all, and I’m confused that you think it does.
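To make “well-calibrated” concrete (a sketch of my own with made-up numbers, not anything from the post): calibration is checked by bucketing a forecaster’s many resolved predictions by stated probability and comparing each bucket against the observed frequency. A one-off doom forecast never produces such a table, which is part of why I don’t think the logic transfers.

```python
from collections import defaultdict

def calibration_table(resolved):
    """resolved: (stated_probability, outcome) pairs for questions that have
    already resolved, with outcome 1 if the event happened and 0 if it didn't.
    Returns, per probability bucket, (observed frequency, number of forecasts)."""
    buckets = defaultdict(list)
    for p, outcome in resolved:
        buckets[round(p, 1)].append(outcome)
    return {b: (sum(os) / len(os), len(os)) for b, os in sorted(buckets.items())}

# Made-up resolved forecasts: a calibrated forecaster's ~80% predictions should
# come true about 80% of the time. Checking that requires many *resolved*
# questions, which a one-off "99% chance of doom" can never supply.
print(calibration_table([(0.8, 1), (0.8, 1), (0.8, 1), (0.8, 0),
                         (0.2, 0), (0.2, 0), (0.2, 1), (0.2, 0)]))
# -> {0.2: (0.25, 4), 0.8: (0.75, 4)}
```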
You also wrote: “This domain probably isn’t that special.”
Here I strongly disagree. Forecasting x-risks is rife with special problems. Besides the (very important) anthropics confounders, how do you profit from a successful prediction of doom if there’s no one left to calculate your Brier score, or to pay out on a prediction market? And forecasters are worse at estimating extreme probabilities (<1% and >99%) and at longer-term predictions. Etc.
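(For anyone unfamiliar, the Brier score mentioned above is just the mean squared difference between stated probabilities and what actually happened, so it can only ever be computed over questions that have resolved; a minimal sketch with made-up forecasts:)

```python
def brier_score(resolved):
    """Mean squared error between stated probabilities and binary outcomes.
    0 is perfect; a constant 50% forecast always scores 0.25; lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in resolved) / len(resolved)

# Made-up resolved forecasts (stated probability, outcome):
print(brier_score([(0.9, 1), (0.2, 0), (0.7, 0)]))  # (0.01 + 0.04 + 0.49) / 3 = 0.18
```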
Regarding the insult thing: I agree that section can be interpreted as insulting, although it doesn’t have to be. Again, the post doesn’t directly address individuals, so one could just decide not to feel addressed by it.
But I’ll drop this point; I don’t think it’s all that cruxy, nor fruitful to argue about.