One specific concern people could have with this thoughtspace is that it’s hard to square with the knowledge an AI PhD [edit: or rather, AI/ML expertise more broadly] provides. I took this point to be strongly suggested by the author’s claims that “experts knowledgeable in the relevant subject matters that would actually lead to doom find this laughable” and that someone who spent their early years “reading/studying deep learning, systems neuroscience, etc.” would not find risk arguments compelling. That’s directly refuted by the surveys (though I agree that some other concerns about this thoughtspace aren’t).
(However, it looks like the author was making a different point from the one I first understood.)