I’m a PhD student in Yoshua’s lab. I’ve spoken with him about this issue several times, and he has moved on this issue, as have Yann and Andrew. From my perspective following this issue, there has been tremendous progress in the ML community’s attitude towards Xrisk.
I’m quite optimistic that such progress will continue, although pessimistic that it will be fast enough and that the ML community’s attitude will be anything like sufficient for a positive outcome.
I am curious whether this has changed over the past 6 years since you posted this comment. Do you get the feeling that high-profile researchers have shifted even further towards Xrisk concern, or do they continue to hold the same views as in 2016? Thanks!
There has been continued progress at about the rate I would’ve expected—maybe a bit faster. I think GPT-3 has helped change people’s views somewhat, as have further appreciation of other social issues of AI.
Compared with articles from a year ago, e.g. http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better, this represents significant progress.
Thank you!
Underrated comment of the thread!