Sorry I didn’t notice this earlier! What do you think about the argument that Joar gave?
If a function is small-volume, it’s complex, because it takes a lot of parameters to specify.
If a function is large-volume, it’s simple, because it can be compressed a lot since most parameters are redundant.
It sounds like you are saying: Some small-volume functions are actually simple, or at least this might be the case for all we know, because maybe it’s just really hard for neural networks to efficiently represent that function. This is especially clear when we think of simplicity in the minimum description length / Kolmogorov sense; the “+BusyBeaver(9)” function can be written in a few lines of code but would require a neural net larger than the universe to implement. Am I interpreting you correctly? Do you think there are other important senses of simplicity besides that one?
Yeah, exactly: the problem is that there are some small-volume functions which are actually simple. The argument for small-volume --> complex doesn't go through, since the function could have a short description by some means other than specifying the network's parameters.
Other senses of simplicity include various circuit complexities and Levin complexity. There's no argument that parameter-space volume corresponds to either of them, AFAIK. (You might think parameter-space volume would correspond to "neural net complexity", i.e. the number of neurons in a minimal-size neural net needed to compute the function, but I don't think this is true either: every parameter is drawn from a Gaussian at initialization, so it's unlikely for most of them to be exactly zero.)
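For what it's worth, here's the one direction that does seem to go through, as a rough sketch (my framing, not part of the original argument). Write V(f) for the probability that a randomly initialized net computes f. If that (semi)measure is computable, a standard coding argument gives, up to constants depending on the architecture and prior,

    K(f) \le -\log_2 V(f) + O(1),

since you can specify f by coding it under the initialization distribution. So large volume does imply a short description; the converse fails because K(f) is a minimum over all ways of describing f, not just ways that go through the network's parameters, which is exactly where "+BusyBeaver(9)" slips through.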