Yann LeCun, head of Facebook’s AI lab, did an AMA on /r/MachineLearning a few days ago. You can find the thread here.

In response to someone asking “What are your biggest hopes and fears as they pertain to the future of artificial intelligence?”, LeCun responds that:
Every new technology has potential benefits and potential dangers. As with nuclear technology and biotech in decades past, societies will have to come up with guidelines and safety measures to prevent misuses of AI.
One hope is that AI will transform communication between people, and between people and machines. AI will facilitate and mediate our interactions with the digital world and with each other. It could help people access information and protect their privacy. Beyond that, AI will drive our cars and reduce traffic accidents, help our doctors make medical decisions, and do all kinds of other things.
But it will have a profound impact on society, and we have to prepare for it. We need to think about ethical questions surrounding AI and establish rules and guidelines (e.g. for privacy protection). That said, AI will not happen one day out of the blue. It will be progressive, and it will give us time to think about the right way to deal with it.
It’s important to keep in mind that the arrival of AI will not be any more or any less disruptive than the arrival of indoor plumbing, vaccines, the car, air travel, the television, the computer, the internet, etc.
EDIT: I didn’t see this one the first time. In response to someone asking “What do you think of the Friendly AI effort led by Yudkowsky? (e.g. is it premature? or fully worth the time to reduce the related AI existential risk?)”, LeCun says that:
We are still very, very far from building machines intelligent enough to present an existential risk. So, we have time to think about how to deal with it. But it’s a problem we have to take very seriously, just like people did with biotech, nuclear technology, etc.
I’d love to see a discussion between people like LeCun, Norvig, Yudkowsky, and Russell, in which they spell out what exactly they mean when they talk about “AI risks”, and why they disagree, if they disagree.
Right now I often have the feeling that many people mean completely different things when they talk about AI risks. One person might mean that a lot of jobs will be gone, or that AI will destroy privacy, while the other person means something along the lines of “5 people in a basement launch a seed AI, which then turns the world into computronium”. These are vastly different perceptions, and I personally find myself somewhere between those positions.
LeCun and Norvig seem to disagree that there will be an uncontrollable intelligence explosion. And I am still not sure what exactly Russell believes.
Anyway, it is possible to figure this out. You just have to ask the right questions. And this never seems to happen when MIRI or FHI talk to experts: they never ask specifically about the controversial beliefs. If you ask someone, for example, whether they agree that general AI could be a risk, a yes/no answer provides very little information about how much they agree with MIRI. You have to ask specific questions.
Is it possible that MIRI knows privately (which is good enough for their own strategic purposes) that some of these high-profile people disagree with them on key issues, but they don’t want to publicly draw attention to that fact?