I helped make this list in 2016 for a post by Nate, partly because I was dissatisfied with Scott’s list (which includes people like Richard Sutton, who thinks worrying about AI risk is carbon chauvinism):
These days I’d probably make a different list, including people like Yoshua Bengio. AI risk stuff is also sufficiently in the Overton window that I care more about researchers’ specific views than about “does the alignment problem seem nontrivial to you?”. Even if we’re just asking the latter question, I think it’s more useful to list the specific views and arguments of individuals (e.g., note that Rossi is more optimistic about the alignment problem than Russell), list the views and arguments of the similarly prominent CS people who think worrying about AGI is silly, and let people eyeball which people they think tend to produce better reasons.