This seems overstated; plenty of AI/ML experts are concerned. [1] [2] [3] [4] [5] [6] [7] [8] [9]

Quoting from [1], a survey of researchers who published at top ML conferences:
The median respondent’s probability of x-risk from humans failing to control AI was 10%
Admittedly, that’s a far cry from “the light cone is about to get ripped to shreds,” but it’s also pretty far from finding those concerns laughable. [Edited to add: another recent survey puts the median estimate of extremely bad (e.g. extinction-level) outcomes at 2%, lower but arguably still not laughable.]
To be clear, I am also concerned, but at lower probability levels and mostly not about doom. The laughable part is the specific claim that “our light cone is about to get ripped to shreds” by a paperclipper or the equivalent, on the basis of an overconfident and mostly incorrect EY/LW/MIRI argument involving supposed complexity of value, failure of alignment approaches, fast takeoff, sharp left turn, etc.
I of course agree with Aaro Salosensaari that many of the concerned experts were/are downstream of LW. But this also works the other way to some degree: beliefs about AI risk influence career decisions, so it is not surprising that most of those working on AI capabilities research think the risk is low while those working on AI safety/alignment think it is higher.
Hyperbole aside, how many of the experts linked (and/or contributing to the 10% / 2% estimates) arrived at their conclusions via a thought process that is “downstream” of the thoughtspace the parent commenter considers suspect? If so, their agreement would not qualify as independent evidence or a rebuttal, since it is itself part of what is being criticized.
One specific concern people could have with this thoughtspace is that it is hard to square with the knowledge that an AI PhD [edit: or rather, AI/ML expertise more broadly] provides. I took this point to be strongly implied by the author’s claims that “experts knowledgeable in the relevant subject matters that would actually lead to doom find this laughable” and that someone who spent their early years “reading/studying deep learning, systems neuroscience, etc.” would not find risk arguments compelling. That is directly refuted by the surveys (though I agree that some other concerns about this thoughtspace are not).
(However, it looks like the author was making a different point from the one I first understood.)