Nonetheless, the risk in question is also a personal risk of death for every genius… now, I don't know how we define geniuses here, but obviously most geniuses could be presumed to be pretty good at preventing their own deaths, or the deaths of their families.
That seems like a pretty questionable presumption to me. High IQ is linked to reduced mortality in at least one study, but that needn't imply that any particular fatal risk is likely to be uncovered, let alone prevented, by any particular genius; there's no physical law requiring lethal threats to be obvious in proportion to their lethality. And that's especially true for existential threats, which almost by definition lack experiential precedent.
You’d have a stronger argument if you narrowed your reference class to AI researchers. Not a terribly original one in this context, but a stronger one.