I definitely intended the implied context to be ‘problems people actually use deep learning for,’ which does impose constraints that I think are sufficient.
Certainly the claim I’m making isn’t true of literally all functions on high-dimensional spaces. And if I actually cared about all functions, or even all continuous functions, on these spaces, then I believe there are no-free-lunch theorems that prevent machine learning from being effective at all (e.g. what about those functions that have a vast number of huge oscillations right between the two points you just measured?!).
But in practice deep learning is applied to problems that humans care about; computer vision and robotics control, for example, are very widely used. Those problems come with distributions of functions that empirically exist, and a simple model of them is this: around any point of the domain you care about, the function can be locally approximated by a Taylor series over a region of positive size, but these local regions are stitched together essentially at random.
In that context, it makes sense that the directional second derivatives of such a function would plausibly be independent of one another, and only rarely would they all line up with the same sign, which is what it would take for a critical point to be a local optimum rather than a saddle.
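To make that intuition concrete, here’s a toy simulation (my own sketch, not anything from the original post): model the Hessian at a critical point as a random symmetric matrix with independent Gaussian entries, and ask how often all of its eigenvalues share a sign, i.e. how often the critical point would be a genuine local optimum rather than a saddle. Under that assumed model the fraction falls off very fast with dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_local_optima(n, trials=2000):
    """Fraction of random symmetric 'Hessians' whose eigenvalues all share a sign.

    Under the (assumed) model that second derivatives at a critical point are
    independent Gaussians, this is the fraction of critical points that would
    be local minima or maxima rather than saddles.
    """
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2                  # random symmetric "Hessian"
        eigs = np.linalg.eigvalsh(h)
        if np.all(eigs > 0) or np.all(eigs < 0):
            count += 1
    return count / trials

for n in (2, 4, 6, 8, 10):
    print(f"dim={n:2d}  fraction that are local optima ~ {frac_local_optima(n):.4f}")
```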
Beyond that, I’d expect that if you impose a measure on the space of such functions in some way (say, by limiting the number of patches and the growth rate of the power-series coefficients), the density of functions with even one critical point would quickly approach zero as the dimension grows, even though infinitely many such functions exist in an absolute sense.
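In the same toy spirit (again my own sketch, with made-up numbers for the patch radius and gradient scale), you can model a single patch as a random quadratic over a ball and check whether its one critical point even lands inside the patch; when typical gradients are large relative to the patch size, most patches contain no critical point at all.

```python
import numpy as np

rng = np.random.default_rng(1)

def frac_patches_with_critical_point(n, trials=2000, patch_radius=1.0, grad_scale=3.0):
    """Fraction of random quadratic 'patches' whose critical point lies inside the patch.

    Each patch is f(x) = g.x + 0.5 * x^T H x on a ball of radius patch_radius,
    with random g and random symmetric H; its unique critical point is x* = -H^{-1} g.
    The parameters here are illustrative assumptions, not anything measured.
    """
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2
        g = grad_scale * rng.standard_normal(n)
        x_star = -np.linalg.solve(h, g)    # where the gradient g + H x vanishes
        if np.linalg.norm(x_star) <= patch_radius:
            count += 1
    return count / trials

for n in (2, 5, 10, 20):
    print(f"dim={n:2d}  fraction of patches containing a critical point ~ "
          f"{frac_patches_with_critical_point(n):.4f}")
```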
I got a little defensive thinking about this, since I felt the context of ‘deep learning as it is practiced in real life’ was clear, but looking back at the original post it maybe wasn’t outlined that way. Even so, I think your reply feels disingenuous, because to suggest that functions with many local optima are “common” you’re explicitly constructing adversarial examples rather than sampling functions from some space. If I start suggesting that deep learning is robust to adversarial examples, I have much deeper problems.