That’s a subtly complicated question. I’ve been trying to write a blog post about it, but I keep wavering between two ways of addressing it.
First, we could summarize everything in just one sentence: « Deep learning can solve increasingly interesting problems, with less and less manpower (and slightly more and more womanpower), and now is the time to panic. » Then the question reduces to a long list of point-like « Problem solved! », plus a warning that the list is about to include the problem of finding increasingly interesting new problems.
A less consensus-driven and more interesting way is to identify a series of conceptual revolutions that summarize and interpret what we have learned so far. Or, at least, my own subjective and still preliminary take on it. At this moment I’d count three conceptual revolutions, spread over different works in the last decade or two.
First, we learned how to train deep neural networks, and, even more importantly from a conceptual point of view, that the result mimics/emulates human intuition/prejudices.
Second, we learned how to use self-play and reinforcement learning to best any human player at any board game (the Drosophila of AI), which means this type of intelligence is now solved.
Third, we learned that semantics is data compression, and that learning to manipulate semantics with « attention » leads to increasingly impressive performance on new, previously unseen cognitive tasks.
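To make the « attention » point a bit more concrete, here is a minimal sketch of scaled dot-product self-attention in plain NumPy; the shapes and toy numbers are my own illustration, not taken from any particular model. Each token’s representation gets rebuilt as a weighted average of all the others, which is one way of seeing how context is compressed into meaning.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention.

    Q, K, V: arrays of shape (sequence_length, d_model).
    Returns a mixture of the value vectors, weighted by how similar
    each query is to each key.
    """
    d_k = K.shape[-1]
    # Similarity of every query with every key, scaled to keep the softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys: each query spreads its "attention" as a probability distribution.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings (made-up numbers).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention over the same sequence
print(out.shape)  # (4, 8)
```

In a real transformer the queries, keys and values come from learned linear projections of the token embeddings, and many such heads run in parallel; the sketch above only shows the core weighted-averaging operation.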
Fourth… but do we really need a fourth? In a way yes: we learned that reaching these milestones is doable without a fully conscious mind. It’s dreaming. For now.