This evidence, post-update, shifts estimates significantly in the direction of ‘completely wrong or not even wrong’ for all insights that require world-class-genius-level intelligence, such as, incidentally, forming an opinion on AI risk, which most world-class geniuses did not form.
Most “world-class geniuses” have not opined on AI risk. So “forming an opinion on AI risk, which most world-class geniuses did not form” is hardly a task that requires “world-class-genius-level intelligence”.
For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Such a reasoner does not need to rely much on indirect evidence about the author once they have read the writing itself.
> Most “world-class geniuses” have not opined on AI risk.
Nonetheless, the risk in question is also a personal risk of death for every genius… Now, I don’t know how we define geniuses here, but obviously most geniuses could be presumed to be pretty good at preventing their own deaths, or the deaths of their families. I should have said: forming a valid opinion.
> For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Such a reasoner does not need to rely much on indirect evidence about the author once they have read the writing itself.
Assuming that absolutely nothing in the writing had to be taken on faith. True for mathematical proofs. False for almost everything else.
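To make the “Bayesian reasoner” exchange concrete, here is a minimal sketch in Python (with made-up numbers, purely illustrative) of the update in odds form. When the writing can be checked end to end, the likelihood ratio from reading it dominates any prior formed from indirect evidence about the author; when much of it must be taken on faith, the likelihood ratio shrinks toward 1 and the author-prior dominates again.

```python
# Illustrative Bayesian update in odds form; all numbers are made up.

def posterior_prob(prior_prob: float, likelihood_ratio: float) -> float:
    """Update P(writing is sound) given the likelihood ratio
    P(text | sound) / P(text | unsound) obtained by reading the text."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Skeptical prior about the author, from indirect evidence alone.
prior = 0.10

# Fully checkable writing (e.g., a mathematical proof): reading it is
# highly diagnostic, so the prior barely matters.
print(f"checkable writing:     {posterior_prob(prior, 50.0):.2f}")  # 0.85

# Writing where most claims must be taken on faith: reading it is
# weakly diagnostic, so the prior about the author dominates.
print(f"taken mostly on faith: {posterior_prob(prior, 1.5):.2f}")   # 0.14
```

The odds form makes the disagreement crisp: reading the writing overrides the author-prior exactly to the extent that the likelihood ratio differs from 1, which is the concession the “taken on faith” reply is pointing at.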
> Nonetheless, the risk in question is also a personal risk of death for every genius… Now, I don’t know how we define geniuses here, but obviously most geniuses could be presumed to be pretty good at preventing their own deaths, or the deaths of their families.
That seems like a pretty questionable presumption to me. High IQ is linked to reduced mortality in at least one study, but that needn’t imply that any particular fatal risk is likely to be uncovered, let alone prevented, by any particular genius; there’s no physical law stating that lethal threats must be obvious in proportion to their lethality. And that’s especially true for existential threats, which almost by definition are without experiential precedent.
You’d have a stronger argument if you narrowed your reference class to AI researchers. Not a terribly original one in this context, but a stronger one.