Obviously this is all speculation, but maybe I’m saying that the universal approximation theorem implies that neural architectures are fractal in the space of all distributions (or some restricted subset thereof)?
Oh I actually don’t think this is speculation: if (big if) you satisfy the conditions for universal approximation then this is just true (specifically, that the image of the weight space W is dense in function space). Like, for example, you can state (a special case of) Stone-Weierstrass as: for a compact subset X of R^d, the subalgebra of polynomials is dense in the continuous functions C(X,R) under the sup norm. In practice you’d only have a finite-dimensional subspace of the polynomials, so this obviously can’t hold exactly, but as you increase the degree, the polynomials will be more space-filling and the error bound will decrease.
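(If you want to see that concretely, here’s a throwaway numpy sketch; the target function and the degrees are arbitrary picks of mine, nothing canonical about them:)

```python
# Throwaway demo: sup-norm error of polynomial fits to one fixed continuous
# target on [0, 1] shrinks as the allowed degree grows.
import numpy as np


def f(x):
    return np.exp(np.sin(3 * x))   # arbitrary smooth target I picked


xs = np.linspace(0.0, 1.0, 2001)   # dense grid standing in for the sup norm
ys = f(xs)

for degree in [2, 4, 8, 16, 32]:
    coeffs = np.polynomial.chebyshev.chebfit(xs, ys, degree)   # least-squares fit
    approx = np.polynomial.chebyshev.chebval(xs, coeffs)
    print(f"degree {degree:2d}: sup-norm error ~ {np.max(np.abs(ys - approx)):.2e}")
```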
Curious what’s your beef with universal approximation? Stone-Weierstrass isn’t quantitative—is that the reason?
The problem is that the dimension of W required to achieve a given error ε grows exponentially with the dimension d of your underlying space X. For instance, if you assume that the weights depend continuously on the target function, then ε-approximating all C^n functions on [0,1]^d with Sobolev norm ≤ 1 provably takes at least on the order of ε^(−d/n) parameters (DeVore et al.). And that’s a lower bound.
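(To get a feel for how brutal that exponent is, here’s a back-of-the-envelope calculation; the values of d, n, and ε below are illustrative picks of mine, e.g. d = 784 for 28×28 inputs:)

```python
# Back-of-the-envelope for the eps**(-d/n) lower bound; the d, n, eps values
# are illustrative picks of mine, not taken from any particular paper.
import math

n, eps = 2, 0.1
for d in [1, 10, 100, 784]:
    log10_params = (-d / n) * math.log10(eps)   # log10 of eps**(-d/n)
    print(f"d = {d:4d}: need on the order of 10^{log10_params:.0f} parameters")
```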
So for any realistic d, universal approximation is basically useless—the number of parameters required is enormous. Which makes sense, because approximation by basis functions is basically the continuous version of a lookup table.
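(The lookup-table analogy is easy to make quantitative: a piecewise-constant table on a regular grid over [0,1]^d blows up the same way. The per-axis resolution in this sketch is an arbitrary choice of mine:)

```python
# Lookup-table version of the same blow-up: a piecewise-constant table on a
# regular grid over [0,1]^d with a fixed number of cells per axis needs
# (cells_per_axis)**d cells in total.
import math

cells_per_axis = 10
for d in [1, 2, 3, 10, 100, 784]:
    log10_cells = d * math.log10(cells_per_axis)
    print(f"d = {d:4d}: ~10^{log10_cells:.0f} grid cells")
```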
Because neural networks actually work in practice, without requiring exponentially many parameters, this also tells you that the space of realistic target functions can’t just be some generic function space (even with smoothness conditions); it has to have some non-generic properties to escape the lower bound.
Ooooo okay so this seems like it’s directly pointing to the fractal story! Exciting!