Enough thoughtful AI researchers (including Yoshua Bengio and Yann LeCun) have criticized the hype about evil killer robots or “superintelligence” that I hope we can finally lay that argument to rest. This article summarizes why I don’t currently spend my time working on preventing AI from turning evil.
Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?
LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economic, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.
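As a quick numerical aside (my own illustration, not from the interview): Gershenfeld’s point follows from the form of the logistic curve f(t) = L / (1 + e^(-kt)). Early on, e^(-kt) dominates the 1 in the denominator, so f(t) ≈ L·e^(kt), which is exactly exponential growth; the difference only becomes visible near the inflection point. The short Python sketch below checks this numerically (L, k, and the sample points are arbitrary choices of mine):

```python
import math

# Early phase of a logistic curve f(t) = L / (1 + exp(-k*t)):
# while exp(-k*t) >> 1, f(t) is approximately L * exp(k*t),
# i.e., indistinguishable from exponential growth.
# L, k, and the sample points are arbitrary illustrative values.
L, k = 1.0, 1.0
for t in (-6, -5, -4, -3):  # well before the inflection point at t = 0
    logistic = L / (1 + math.exp(-k * t))
    exponential = L * math.exp(k * t)
    print(f"t={t:+d}  logistic={logistic:.6f}  "
          f"exponential={exponential:.6f}  ratio={logistic/exponential:.4f}")
```

Over these points the ratio stays within a few percent of 1; it only drifts as t approaches the inflection point, after which the logistic saturates while the exponential keeps growing.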
There are people you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything about how to make progress in AI.
Spectrum: What do you think he is going to accomplish in his job at Google?
LeCun: Not much has come out so far.
Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?
LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.
When you are in the kind of position that Larry Page, Sergey Brin, Elon Musk, and Mark Zuckerberg are in, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself: what will technology be like 10, 20, or 30 years from now? That leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.
Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.
LeCun: Not anytime soon.
Spectrum: Or ever.
LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about them, but there’s no reason to worry about them just now.
His commentary on the article, posted on G+, gets more into “dissing” territory:
See this video at 39:30 for Yann LeCun’s comments. He said:
Human-level AI is not near
He agrees with Musk that there will be important issues when it becomes near
He thinks people should be talking about it but not acting, because (a) there is some risk, and (b) the public thinks there is more risk than there actually is
Also, here is an IEEE interview: