I pushed this post out since I think it’s good to link to it in this other post. But there are at least 2 improvements I’d like to make and would appreciate help with:
Is there a better reference for "a number of experts have voiced concerns about AI x-risk"? I feel like there should be one by now...
I just realized it would be nice to include examples where these heuristics lead to good judgments.
I helped make this list in 2016 for a post by Nate, partly because I was dissatisfied with Scott’s list (which includes people like Richard Sutton, who thinks worrying about AI risk is carbon chauvinism).
These days I’d probably make a different list, including people like Yoshua Bengio. AI risk is also sufficiently in the Overton window that I care more about researchers’ specific views than about “does the alignment problem seem nontrivial to you?”. Even if we’re just asking the latter question, I think it’s more useful to list the specific views and arguments of individuals (e.g., note that Rossi is more optimistic about the alignment problem than Russell), list the views and arguments of similarly prominent CS people who think worrying about AGI is silly, and let readers eyeball which group tends to produce better reasons.
I hope someone actually answers your question, but FWIW, the Asilomar principles were signed by an impressive list of prominent AI experts. Five of the items are related to AGI and x-risk. The statements aren’t really strong enough to declare that those people “voiced concerns about AI x-risk”, but it’s a data-point for what can be said about AI x-risk while staying firmly in the mainstream.
My experience in casual discussions is that it’s enough to just name one example to make the point, and that example is of course Stuart Russell. When talking to non-ML people—who don’t know the currently-famous AI people anyway—I may also mention older examples like Alan Turing, Marvin Minsky, or Norbert Wiener.
Thanks for this nice post. :-)
Yeah I’ve had conversations with people who shot down a long list of concerned experts, e.g.:
Stuart Russell is GOFAI ==> out-of-touch
Shane Legg doesn’t do DL, does he even do research? ==> out-of-touch
Ilya Sutskever (and everyone at OpenAI) is crazy, they think AGI is 5 years away ==> out-of-touch
Anyone at DeepMind is just marketing their B.S. “AGI” story or drank the koolaid ==> out-of-touch
But then, even the big five of deep learning have all said things that can be used to support the case...
So it kind of seems like there should be a compendium of quotes somewhere, or something.
Sounds like their problem isn’t just misleading heuristics, it’s motivated cognition.
Oh sure, in some special cases. I don’t think this experience was particularly representative.