So to summarize that case study criticism: everything you fact-checked was accurate, and you have no evidence of any kind that the Fermi story does not mean what O/Y interpret it as.
Re: the Fermi quote, this does not seem to be an accurate summary to me. Learning that Fermi meant 10% when he said “remote possibility” does in fact change how I view that incident.
If it were common knowledge that any hyperbolic language experts use when speaking about the unlikelihood of AGI (e.g. Andrew Ng’s statement “worrying about AI safety is like worrying about overpopulation on Mars”) actually corresponded to a 10% subjective probability of AGI, things would look very different than they currently do.
More generally, on a strategic level there is very little difference between a genuinely incorrect forecast and one that is “correct”, but communicated so poorly as to create a wrong impression in the mind of the listener. If the state of affairs is such that anyone who privately believes there is a 10% chance of AGI is incentivized to instead report their assessment as “remote”, the conclusion of Ord/Yudkowsky holds, and it remains impossible to discern whether AGI is imminent by listening to expert forecasts.
(I also don’t believe that said experts, if asked to translate their forecasts to numerical probabilities, would give a median estimate anywhere near as high as 10%, but that’s largely tangential to the discussion at hand.)
Furthermore, and more importantly: I deny that Fermi’s 10% somehow detracts from the point that forecasting the future of novel technologies is hard.
Four years prior to overseeing the world’s first self-sustaining nuclear chain reaction, Fermi believed that it was more likely than not that such a chain reaction was impossible. Setting aside for a moment the question of whether Fermi’s specific probability assignment was negligible or merely small, what this indicates is that the majority of the information necessary to determine the possibility of a nuclear chain reaction was in fact unavailable to Fermi at the time he made his forecast. This does not support the idea that making predictions about technology is easy, any more than it would have if Fermi had assigned 0.001% instead of 10%!
More generally, the specific probability estimate Fermi gave is nothing more than a red herring, one that is given undue attention by the OP. The factor relevant to Ord/Yudkowsky’s thesis is how much uncertainty there is in the probability distribution over a given technology’s feasibility, not whether the mean of that distribution, treated as a point estimate, happens to be negligible or non-negligible. Focusing too much on the latter not only obfuscates the correct lesson to be learned, but also sometimes leads to nonsensical results.
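To make that concrete, here is a toy sketch (my own, with arbitrary illustrative numbers rather than anyone’s actual estimates) of two subjective distributions that share a 10% mean but carry very different amounts of uncertainty:

```python
# Toy illustration only: two "is this technology feasible?" belief
# distributions with the same 10% mean but very different uncertainty.
# The Beta parameters are arbitrary choices, not anyone's real estimates.
from scipy.stats import beta

beliefs = {
    "vague":     beta(1, 9),      # mean 0.10, spread over a wide range
    "confident": beta(100, 900),  # mean 0.10, tightly concentrated
}

for name, dist in beliefs.items():
    lo, hi = dist.interval(0.90)  # central 90% credible interval
    print(f"{name}: mean={dist.mean():.2f}, sd={dist.std():.3f}, "
          f"90% interval=({lo:.3f}, {hi:.3f})")
```

A 10% point estimate cannot distinguish between these two epistemic states, and that difference is precisely what matters here.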
If it were common knowledge that any hyperbolic language experts use when speaking about the unlikelihood of AGI (e.g. Andrew Ng’s statement “worrying about AI safety is like worrying about overpopulation on Mars”) actually corresponded to a 10% subjective probability of AGI, things would look very different than they currently do.
Did you have anything specific in mind about how things would look different? I have the impression that you’re trying to imply something in particular, but I’m not sure what it is.
EDIT: Also, I’m a little confused about whether you mean to be agreeing with me or disagreeing. The tone of your comment sounds like disagreeing, but content-wise it seems like we’re both agreeing that if someone is using language like “remote possibility” to mean 10%, that is a noteworthy and not-generally-obvious fact.
Maybe you’re saying that experts do frequently obfuscate with hyperbolic language, s.t. it’s not surprising to you that Fermi would mean 10% when he said “remote possibility”, but that this fact is not generally recognized. (And things would look very different if it was.) Is that it?
I think this comment raises some valid and interesting points. But I’d push back a bit on a few of them.
(Note that this comment was written quickly, so I may phrase things a bit unclearly or state opinions I haven’t mulled over for long.)
More generally, on a strategic level there is very little difference between a genuinely incorrect forecast and one that is “correct”, but communicated so poorly as to create a wrong impression in the mind of the listener.
There’s at least some truth to this. But it’s also possible to ask experts to give a number, as Fermi was asked. If the problem is poor communication, then asking experts to give a number will resolve at least part of the problem (though substantial damage may have been done by planting the verbal estimate in people’s minds). If the problem is poor estimation, then asking for an explicit estimate might make things worse, as it could give a more precise incorrect answer for people to anchor on. (I don’t know of specific evidence that people anchor more on numerical than on verbal probability statements, but it seems likely to me. Also, to be clear, despite this, I think I’m generally in favour of explicit probability estimates in many cases.)
If the state of affairs is such that anyone who privately believes there is a 10% chance of AGI is incentivized to instead report their assessment as “remote”, the conclusion of Ord/Yudkowsky holds, and it remains impossible to discern whether AGI is imminent by listening to expert forecasts.
I think this is true if no one asks the experts for an explicit numerical estimate, or if the incentives to avoid giving such estimates are strong enough that experts will refuse when asked. I think both of those conditions hold to a substantial extent in the real world and in relation to AGI, and that this is a reason why the Fermi case has substantial relevance to the AGI case. But it still seems useful to me to be aware of the distinction between failures of communication and failures of estimation, as we could sometimes get evidence that discriminates between the two, and which of them is occurring (or how common each is) could sometimes be relevant.
Furthermore, and more importantly: I deny that Fermi’s 10% somehow detracts from the point that forecasting the future of novel technologies is hard.
I definitely wasn’t claiming that forecasting the future of novel technologies is easy, and I didn’t interpret ESRogs as doing so either. What I was exploring was merely whether this is a clear case of an expert’s technology forecast being “wrong” (and, if so, “how wrong”), and what that reflects about the typical accuracy of expert technology forecasts. Such forecasts could conceivably be typically accurate even if very hard to make, if experts are really good at it and put in lots of effort. But I think it more likely that they’re often wrong. The important question is essentially “how often”, and this post bites off the smaller question “what does the Fermi case tell us about that”.
As for the rest of the comment, I think both the point estimates and the uncertainty are relevant, at least when judging estimates (rather than making decisions based on them). This is in line with my understanding from e.g. Tetlock’s work. I don’t think I’d read much into an expert saying 1% rather than 10% for something as hard to forecast as an unprecedented tech development, unless I had reason to believe the expert was decently calibrated. But if they have given one of those numbers and we then see what happens, which number they gave makes a difference to how calibrated or uncalibrated I should judge them to be (a judgement I might then generalise, weakly, to experts more widely).
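As a toy illustration of that last point (my own back-of-the-envelope calculation using the standard Brier and logarithmic scoring rules, not anything from Tetlock or the post): once the forecast event actually occurs, a 10% forecast is penalised much less than a 1% one.

```python
# Toy calculation: how standard proper scoring rules treat a 1% vs a 10%
# forecast once the forecast event turns out to happen (outcome = 1).
import math

outcome = 1  # the "remote possibility" happened
for p in (0.01, 0.10):
    brier = (p - outcome) ** 2   # squared error; lower is better
    log_score = -math.log(p)     # surprisal; lower is better
    print(f"forecast {p:.0%}: Brier = {brier:.3f}, log score = {log_score:.2f}")
```

So seeing the outcome really does give different evidence about the forecaster’s calibration depending on which number they had committed to.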
That said, I do generally think uncertainty of estimates is very important, and think the paper you linked to makes that point very well. And I do think one could easily focus too much on point estimates; e.g., I wouldn’t plug Ord’s existential risk estimates into a model as point estimates without explicitly representing a lot of uncertainty too.
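To make “explicitly representing a lot of uncertainty” concrete, here is a minimal sketch of the kind of thing I have in mind; the Beta distribution, the toy five-period model, and all of the numbers are made up for illustration and are emphatically not Ord’s estimates:

```python
# Minimal sketch: propagate a *distribution* over a risk parameter through
# a toy downstream model, rather than plugging in a single point estimate.
# Every number and the model itself are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

point_estimate = 0.10                          # risk treated as a known number
risk_samples = rng.beta(2, 18, size=100_000)   # same 0.10 mean, wide uncertainty

def toy_model(risk):
    # Chance of at least one bad outcome across five independent periods.
    return 1 - (1 - risk) ** 5

print("model at point estimate:", toy_model(point_estimate))
outputs = toy_model(risk_samples)
print("mean over distribution: ", outputs.mean())
print("5th-95th percentile:    ", np.percentile(outputs, [5, 95]))
```

Because the model is nonlinear, the point-estimate answer and the mean of the distributional answer differ, and the percentile spread carries information that a point estimate simply can’t.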
Minor thing: did you mean to refer to Fermi rather than to Rutherford in that last paragraph?
Oops, yes. Fixed.