If it were common knowledge that the hyperbolic language experts use when speaking about the unlikelihood of AGI (e.g. Andrew Ng’s statement that “worrying about AI safety is like worrying about overpopulation on Mars”) actually corresponded to a 10% subjective probability of AGI, things would look very different from how they do now.
Did you have anything specific in mind about how things would look different? I have the impression that you’re trying to imply something in particular, but I’m not sure what it is.
EDIT: Also, I’m a little confused about whether you mean to be agreeing with me or disagreeing. The tone of your comment sounds like disagreeing, but content-wise it seems like we’re both agreeing that if someone is using language like “remote possibility” to mean 10%, that is a noteworthy and not-generally-obvious fact.
Maybe you’re saying that experts do frequently obfuscate with hyperbolic language, such that it’s not surprising to you that Fermi would mean 10% when he said “remote possibility”, but that this fact is not generally recognized. (And that things would look very different if it were.) Is that it?
Minor thing: did you mean to refer to Fermi rather than to Rutherford in that last paragraph?
Oops, yes. Fixed.