Here’s a Facebook post by Yann LeCun from 2017 which has a similar message to this post and seems quite insightful:
My take on Ali Rahimi’s “Test of Time” award talk at NIPS.
Ali gave an entertaining and well-delivered talk. But I fundamentally disagree with the message.
The main message was, in essence, that the current practice in machine learning is akin to “alchemy” (his word).
It’s insulting, yes. But never mind that: It’s wrong!
Ali complained about the lack of (theoretical) understanding of many methods that are currently used in ML, particularly in deep learning.
Understanding (theoretical or otherwise) is a good thing. It’s the very purpose of many of us in the NIPS community.
But another important goal is inventing new methods, new techniques, and yes, new tricks.
In the history of science and technology, the engineering artifacts have almost always preceded the theoretical understanding: the lens and the telescope preceded optics theory, the steam engine preceded thermodynamics, the airplane preceded flight aerodynamics, radio and data communication preceded information theory, the computer preceded computer science.
Why? Because theorists will spontaneously study “simple” phenomena, and will not be enticed to study a complex one until there is a practical importance to it.
Criticizing an entire community (and an incredibly successful one at that) for practicing “alchemy”, simply because our current theoretical tools haven’t caught up with our practice is dangerous.
Why dangerous? It’s exactly this kind of attitude that led the ML community to abandon neural nets for over 10 years, *despite* ample empirical evidence that they worked very well in many situations.
Neural nets, with their non-convex loss functions, had no guarantees of convergence (though they did work in practice then, just as they do now). So people threw the baby out with the bathwater and focused on “provable” convex methods or glorified template matching methods (or even 1957-style random feature methods).
Sticking to a set of methods just because you can do theory about it, while ignoring a set of methods that empirically work better just because you don’t (yet) understand them theoretically is akin to looking for your lost car keys under the street light knowing you lost them someplace else.
Yes, we need better understanding of our methods. But the correct attitude is to attempt to fix the situation, not to insult a whole community for not having succeeded in fixing it yet. This is like criticizing James Watt for not being Carnot or Helmholtz.
I have organized and participated in numerous workshops that bring together deep learners and theoreticians, many of them hosted at IPAM. As a member of the scientific advisory board of IPAM, I have seen it as one of my missions to bring deep learning to the attention of the mathematics community. In fact, I’m co-organizer of such a workshop at IPAM in February 2018 ( http://www.ipam.ucla.edu/.../new-deep-learning-techniques/ ).
Ali: if you are not happy with our understanding of the methods you use every day, fix it: work on the theory of deep learning, instead of complaining that others don’t do it, and instead of suggesting that the Good Old NIPS world was a better place when it used only “theoretically correct” methods. It wasn’t.
LeCun describes how engineering artifacts often precede theoretical understanding, and notes that deep learning worked empirically for a long time before we began to understand it theoretically. He argues that researchers ignored deep learning because it didn’t fit into their existing models of how learning should work.
I think the high-level lesson from the Facebook post is that street-lighting occurs when we try to force reality to fit our existing models of how it should work (incorrect models like phlogiston are common in the history of science), whereas this LessWrong post argues that street-lighting occurs when researchers are biased towards working on easier problems.
Instead, a better approach is to let reality and evidence dictate how we build our models of the world, even if those more correct models are more complex or require major departures from existing models (which creates a temptation to ‘flinch away’). I think a prime example of this is quantum mechanics: my understanding is that physicists noticed bizarre results from experiments like the double-slit experiment and developed new theories (e.g. wave-particle duality) that described reality well even though they were counterintuitive and novel.
I guess the modern equivalent that’s relevant to AI alignment would be Singular Learning Theory, which proposes a novel explanation of how deep learning generalizes.