Andrew Ng dismisses UFAI concerns
“I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars,” he said. “Hundreds of years from now when hopefully we’ve colonized Mars, overpopulation might be a serious problem and we’ll have to deal with it. It’ll be a pressing issue. There’s tons of pollution and people are dying and so you might say, ‘How can you not care about all these people dying of pollution on Mars?’ Well, it’s just not productive to work on that right now.”
Current AI systems, Ng contends, are basic relative to human intelligence, even if there are things they can do that exceed the capabilities of any human. “Maybe hundreds of years from now, maybe thousands of years from now—I don’t know—maybe there will be some AI that turn evil,” he said, “but that’s just so far away that I don’t know how to productively work on that.”
The bigger worry, he noted, was the effect that increasingly smart machines might have on the job market, displacing workers in all kinds of fields much faster than even industrialization displaced agricultural workers or automation displaced factory workers.
(An earlier version of this article was titled “Andrew Ng disses UFAI concerns”, which is the phrasing many of the commenters are responding to.)
He doesn’t “diss UFAI concerns”; he says “I don’t know how to productively work on that”, which seems accurate, for the conventional meaning of “productive”, even after looking into the issue. He doesn’t address the question of whether it’s worthwhile to work on the issue unproductively, and he illustrates his point with an analogy (overpopulation on Mars) where it clearly is not.
His commentary on the article on G+ seems to get more into “dissing” territory:
See this video at 39:30 for Yann LeCun giving some comments. He said:
Human-level AI is not near
He agrees with Musk that there will be important issues when it becomes near
He thinks people should be talking about it but not acting, because (a) there is some risk and (b) the public thinks there is more risk than there is
Also here is an IEEE interview:
I think, since he draws an analogy to a problem it would actually be absurd to work on (no point working on overpopulation on Mars unless several other events happen first), he does seem to be suggesting that it’s ridiculous to worry about things like UFAI now rather than “hundreds, maybe thousands of years from now”.
Anyway, I only thought the post was interesting from a PR point of view. The AI problem has been getting good press lately with Musk/Gates et al suggesting that it is something worth worrying about. Ng hasn’t said anything that will move the larger discussion in an interesting direction, but it does have the ability to move the whole problem back very slightly in the direction of ‘weird problem only weird people will spend time worrying about’ in the minds of the general public.
Juergen Schmidhuber is another notable AI researcher. He did an AMA a few days ago. He does believe in the singularity and that AIs will probably be indifferent to us if not hostile.
I also think he is the most likely person to create it. A lot of his research is general AI-ish stuff and not just deep convnets.
Tribal talk.
Which tribe is Ng in? (if that’s what you are talking about)
I am interpreting IlyaShpitser as commenting on OphilaDros’s presentation; why say Ng “disses” UFAI concerns instead of “dismisses” them?
It also doesn’t help that the underlying content is a handful of different issues that all bleed together: the orthogonality question, the imminence question, and the Hollywood question. Ng is against Hollywood and against imminence, and I haven’t read enough of his writing on the subject to be sure of his thoughts on orthogonality, which is one of the actual meaningful points of contention between MIRI and other experts on the issue. (And even those three don’t touch on Ng’s short objection, that he doesn’t see a fruitful open problem!)
My impression was that imminence is a point of contention much more than orthogonality is. Who specifically do you have in mind?
This article is a good place to start in clarifying the MIRI position. Since their estimate for imminence seems to boil down to “we asked the community what they thought and made a distribution,” I don’t see that as contention.
There is broad uncertainty about timelines, but the MIRI position is “uncertainty means we should not be confident we have all the time we need,” not “we’re confident it will happen soon,” which is where someone would need to be for me to say they’re “for imminence.”
Interesting. I considered imminence more of a point of contention because the most outspoken “AI risk is overhyped” people are mostly using it as an argument (and I consider this bunch way more serious than Searle and Brooks: Yann LeCun, Yoshua Bengio, Andrew Ng).
Yes, I didn’t mean Ng. “Diss” is sort of unfortunate phrasing; he just wants to get work done. Sorry for being unclear.
Ok, sure. Changed the title in line with Vaniver’s suggestion.
I had not understood what the “tribal talk” comment was referring to either and then decided to put only as much effort into understanding it as the commenter had in being understood. :)
That’s the critical mistake. AIs don’t turn evil. If they could, we would have FAI half-solved.
AIs deviate from their intended programming, in ways that are dangerous for humans. And it’s not thousands of years away; it’s about as far away as a self-driving car crashing into a group of people to avoid a dog crossing the street.
Even your clarification seems too anthropomorphic to me.
AIs don’t turn evil, but I don’t think they deviate from their programming either. Their programming deviates from their programmers’ values. (Or, another possibility, their programmers’ values deviate from humanity’s values.)
Programming != intended programming.
They do, if they are self-improving, although I imagine you could collapse “programming” and “meta-programming”, in which case an AI would only partially deviate. The point is that you couldn’t expect things to turn out to be so simple when talking about a runaway AI.
But that’s a very different kind of issue than AI taking over the world and killing or enslaving all humans.
EDIT:
To expand: all technologies introduce safety issues.
Once we got fire, some people got burnt. This doesn’t imply that UFFire (Unfriendly Fire) is the most pressing existential risk for humanity, and that we must devote a huge amount of resources to preventing it and never use fire until we have proved that it will not turn “unfriendly”.
Well, there’s a phenomenon called “flashover”, which occurs in a confined environment when the temperature of a fire becomes so high that all the combustible material within ignites and feeds the reaction.
Now, imagine that the whole world could become a confined environment for a flashover...
So we should stop using fire until we prove that the world will not burst into flames?
However, UFFire does not uncontrollably and exponentially reproduce or improve its own functioning. Certainly a conflagration on a planet covered entirely by dry forest would quickly become an unmitigable problem.
In fact, in such a scenario, we should dedicate a huge amount of resources to prevent it and never use fire until we have proved it will not turn “unfriendly”.
Do you realize this is a totally hypothetical scenario?