I see this in a different light: as far as I can tell, Yann LeCun believes that the way to advance AI is to tinker around, take opportunities to make advances when they seem feasible, fix problems that come up in an ad hoc, atheoretic manner (see e.g. this link), and then form some theory to explain what happened; while Stuart Russell thinks it's important to have a theory you really believe in driving future work. As a result, I read LeCun as saying that when problems come up, we'll see them and fix them by tinkering around, while Russell thinks it's important to have a theory in place beforehand, to ensure that sufficiently bad problems don't come up and/or that we already know how to solve them when they do.
It seems like this is the sort of deep divide that is hard to cross, since I would expect people to have strong opinions based on what they've seen work elsewhere. It echoes the previous concern: Russell needs to somehow argue "look, this time it actually is important to have a theory instead of doing things ad hoc" in a way that depends on the features of this particular issue rather than on the way he likes to work.
For reference, LeCun discussed his atheoretic/experimentalist views in more depth in this FB debate with Ali Rahimi and in this lecture. But maybe we should distinguish two distinct axes of the experimentalist/theorist divide in DL:
1) Experimentalism/theorism is a more appropriate paradigm for thinking about AI safety
2) Experimentalism/theorism is a more appropriate paradigm for making progress in AI capabilities
The LeCun/Russell debate is about (1), while LeCun/Rahimi is about (2). Maybe this is oversimplifying things, since "theorism" may be an overly broad way of describing Russell's and Rahimi's views on safety and capabilities respectively, but I suspect LeCun is "seeing the same ghost", or in his words (to Rahimi), seeing the same:
kind of attitude that lead the ML community to abandon neural nets for over 10 years, *despite* ample empirical evidence that they worked very well in many situations.
And whether or not Rahimi should be lumped into that “kind of attitude”, I think LeCun is right (from a certain perspective) to want to push back against that attitude.
I'd even go further: given that LeCun has been more successful than Rahimi/Russell in AI research this century, all else equal I would weight the former's intuitions on research progress more heavily. (I think the best counterargument is that while experimentalism might be better in the short term, theorism has a better payoff in the long term, but I'm not sure about this.)
In fact, one of my major fears is that LeCun is right about (2), because even if he is right about (2), that isn't good evidence that he's right about (1), since the two seem pretty orthogonal. But they don't look orthogonal until you spend a lot of time reading and thinking about AI safety, which you're not inclined to do if you already know a lot about AI and assume that knowledge transfers to AI safety.
In other words, the "correct" intuitions (on experimentalism/theorism) for modern AI research might be the opposite of the "correct" intuitions for AI safety. (I would, for instance, predict that if Superintelligence had been published during the era of GOFAI, all else equal it would have made a bigger splash, because AI researchers then were more receptive to abstract theorizing.)
I would, for instance, predict that if Superintelligence had been published during the era of GOFAI, all else equal it would have made a bigger splash, because AI researchers then were more receptive to abstract theorizing.
And then it would probably have been seen as outmoded and thrown away completely when AI capabilities research progressed into realms that vastly surpassed GOFAI. I don’t know that there’s an easy way to get capabilities researchers to think seriously about safety concerns that haven’t manifested on a sufficient scale yet.
Good comment. I disagree with this bit:
And then it would probably have been seen as outmoded and thrown away completely when AI capabilities research progressed into realms that vastly surpassed GOFAI. I don’t know that there’s an easy way to get capabilities researchers to think seriously about safety concerns that haven’t manifested on a sufficient scale yet.