Is there a better reference for "a number of experts have voiced concerns about AI x-risk"? I feel like there should be by now...
I hope someone actually answers your question, but FWIW, the Asilomar principles were signed by an impressive list of prominent AI experts. Five of the items are related to AGI and x-risk. The statements aren’t really strong enough to declare that those people “voiced concerns about AI x-risk”, but it’s a data-point for what can be said about AI x-risk while staying firmly in the mainstream.
My experience in casual discussions is that it’s enough to just name one example to make the point, and that example is of course Stuart Russell. When talking to non-ML people—who don’t know the currently-famous AI people anyway—I may also mention older examples like Alan Turing, Marvin Minsky, or Norbert Wiener.
Thanks for this nice post. :-)
Yeah I’ve had conversations with people who shot down a long list of concerned experts, e.g.:
Stuart Russell is GOFAI ==> out-of-touch
Shane Legg doesn’t do DL, does he even do research? ==> out-of-touch
Ilya Sutskever (and everyone at OpenAI) is crazy, they think AGI is 5 years away ==> out-of-touch
Anyone at DeepMind is just marketing their B.S. “AGI” story or drank the koolaid ==> out-of-touch
But then, even the big 5 of deep learning have all said things that can be used to support the case....
So it kind of seems like there should be a compendium of quotes somewhere, or something.
Sounds like their problem isn’t just misleading heuristics, it’s motivated cognition.
Oh sure, in some special cases. I don't think this experience was particularly representative.