Yoshua Bengio on AI progress, hype and risks
Yoshua Bengio, one of the world’s leading experts on machine learning, and neural networks in particular, gives his views on these issues in an interview. Relevant quotes:
There are people who are grossly overestimating the progress that has been made. There are many, many years of small progress behind a lot of these things, including mundane things like more data and computer power. The hype isn’t about whether the stuff we’re doing is useful or not—it is. But people underestimate how much more science needs to be done. And it’s difficult to separate the hype from the reality because we are seeing these great things and also, to the naked eye, they look magical.
[ Recursive self-improvement ] It’s not how AI is built these days. Machine learning means you have a painstaking, slow process of acquiring information through millions of examples. A machine improves itself, yes, but very, very slowly, and in very specialized ways. And the kind of algorithms we play with are not at all like little virus things that are self-programming. That’s not what we’re doing.
Right now, the way we’re teaching machines to be intelligent is that we have to tell the computer what is an image, even at the pixel level. For autonomous driving, humans label huge numbers of images of cars to show which parts are pedestrians or roads. It’s not at all how humans learn, and it’s not how animals learn. We’re missing something big. This is one of the main things we’re doing in my lab, but there are no short-term applications—it’s probably not going to be useful to build a product tomorrow.
We ought to be talking about these things [ AI risks ]. The thing I’m more worried about, in a foreseeable future, is not computers taking over the world. I’m more worried about misuse of AI. Things like bad military uses, manipulating people through really smart advertising; also, the social impact, like many people losing their jobs. Society needs to get together and come up with a collective response, and not leave it to the law of the jungle to sort things out.
I think it’s fair to say that Bengio has joined the ranks of AI researchers like his colleagues Andrew Ng and Yann LeCun who publicly express skepticism towards imminent human-extinction-level AI.
The above-mentioned researchers are skeptical in different ways. Andrew Ng thinks that human-level AI is ridiculously far away, and that trying to predict the future more than 5 years out is useless. Yann LeCun and Yoshua Bengio believe that advanced AI is far from imminent, but approve of people thinking about long-term AI safety.
+1 To go even further, I would add that it’s unproductive to think of these researchers as being on anyone’s “side”. These are smart, nuanced people and rounding their comments down to a specific agenda is a recipe for misunderstanding.
Comparing with articles from a year ago, e.g. http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better, this represents significant progress.
I’m a PhD student in Yoshua’s lab. I’ve spoken with him about this issue several times, and his position has shifted, as have Yann’s and Andrew’s. From my perspective following the issue, there has been tremendous progress in the ML community’s attitude towards Xrisk.
I’m quite optimistic that such progress will continue, although pessimistic that it will be fast enough, or that the ML community’s attitude will be anything like sufficient, for a positive outcome.
I am curious if this has changed over the past 6 years since you posted this comment. Do you get the feeling that high profile researchers have shifted even further towards Xrisk concern, or if they continue with the same views as in 2016? Thanks!
There has been continued progress at about the rate I would’ve expected—maybe a bit faster. I think GPT-3 has helped change people’s views somewhat, as has a growing appreciation of other social issues of AI.
Thank you!
Underrated comment of the thread!
Got around to reading the actual interview. The ‘imminent’ part is well and thoroughly skepted, but as has been talked to death around here, non-imminent human extinction still seems important. And that part just seems to get totally passed over, which leaves me feeling like there’s some disconnect somewhere.
It’s almost like this viewpoint got some celebrity endorsements, which had some idiosyncrasies and were necessarily brief, and then members of the media formed their own opinions based largely on those celebrity statements, plus their own preconceptions and interests.
The big thing that is missing is meta-cognitive self-reflection. It might turn out that even today’s RNN structures are sufficient, and that the only missing piece is how to interconnect multi-columnar networks with meta-cognition networks.
Yes. Provided the architecture is right and capable, little science is needed to train this AGI. It will learn on its own.
The amount of safety-related research needed is surely underestimated. The evolution of biological brains never required extra constraints; society needed constraints, created them, and had time to do so. Even if science gets the architecture right, do the scientists really know what is going on inside their networks? How can developers integrate safety? There will be no society of similarly capable AIs that can constrain its members. These are critical scientific issues, especially because we have little to copy from.
I’m afraid we will never know whether someone is “close” to (super)human AGI, unless that entity reveals it. Now think nuclear bomb… and superAGI is supposed to be orders of magnitude more powerful/dangerous.
So, not unlike the wartime disappearance of scientific articles on nuclear topics, a certain (sudden?) lack of progress reports in the press could be an indicator.