Cheers for posting! I’ve got a question about the claim that optimizers compress by default, due to the entropy maximization-style argument given around 20:00 (apologies if you covered this, it’s not easy to check back through a video):
Let’s say we have a neural network of width 100, trained on a dataset that a network of width only 30 could fit to perfect accuracy. If the optimizer compresses the solution into only 30 weights, there’s a 70-dimensional space of free parameters, and we should expect a randomly selected zero-loss solution to be of this kind.
I agree that if we randomly sample zero-loss weight configurations, we end up with this kind of compression, but it seems that any kind of learning we know how to do is dependent on the paths that one can take to reach it, and that abstracting this away can give very different results to any high-dimensional optimization that we actually know how to do.
Assuming the network is parameterized by, say, float16s, maximal compression of the data would make the output of the network sensitive to the final bit of the weights in as many cases as possible, thereby leaving the largest number of free bits: 16 bits of info would be compressed into one weight, rather than spread among 3-4.
My intuition is that these highly compressed arrangements would be very sensitive to perturbations, rendering them incredibly difficult to reach in practice (they would also have a big problem with unseen examples, and are therefore screened off by techniques like dropout and regularization). There is therefore a competing incentive towards minima which are easy to land on: probably flat minima surrounded by areas of relatively good performance. Further, I expect that these kinds of minima tend to leverage the whole network for redundancy and flatness (not needing to depend tightly on the final bit of any weight).
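One way to make this sensitivity intuition concrete is a toy perturbation test (a hypothetical sketch in NumPy, nothing from the video: the two loss functions below just stand in for a “maximally compressed” minimum and a “flat, redundant” one):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy minima in 100 dimensions, both with zero loss at theta = 0:
# a sharp one that depends tightly on every coordinate, and a flat
# one that depends only gently on the first 30 coordinates.
def sharp_loss(theta):
    return 1e4 * np.sum(theta**2)        # sensitive to every weight

def flat_loss(theta):
    return 1e-2 * np.sum(theta[:30]**2)  # 70 free dims, gentle in the rest

def perturbation_damage(loss, dim=100, sigma=0.01, trials=1000):
    """Average loss after a small random nudge away from the minimum."""
    noise = rng.normal(scale=sigma, size=(trials, dim))
    return np.mean([loss(n) for n in noise])

print(perturbation_damage(sharp_loss))  # large: hard to stay at this minimum
print(perturbation_damage(flat_loss))   # tiny: easy to land on and stay near
```

The several-orders-of-magnitude gap between the two damage numbers is the sense in which the sharp minimum is “incredibly difficult to reach in practice”.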
The resulting properties would be not just compression but some combination of compression and smoothness (smoothness being sort of a variant of compression where the final bits don’t matter much), which would not result in some subset of the parameters holding all the useful information.
If you agree that this is what happens, in what sense is there really compression, if the info is spread among multiple bits? Perhaps given the structure of NNs, we should expect to be able to compress by removing the last bits of weights as these are the easiest to leave free given the structure of training?
If you disagree I’d be curious to know where. I sense that Mingard et al. share your conclusion, but I don’t yet understand the claimed empirical demonstration.
tldr: optimization may compress by default, but learning seems to counteract this by choosing easy-to-find minima.
it seems that any kind of learning we know how to do is dependent on the paths that one can take to reach it, and that abstracting this away can give very different results to any high-dimensional optimization that we actually know how to do.
This is where Mingard et al. come in. One of their main results is that SGD training on neural nets approximates just-randomly-sampling-an-optimal-point quite well. Turns out our methods are not actually very path-dependent in practice!
My intuition is that these highly compressed arrangements would be very sensitive to perturbations, rendering them incredibly difficult to reach in practice… There is therefore a competing incentive towards minima which are easy to land on: probably flat minima surrounded by areas of relatively good performance.
There is a mismatch between your intuition and the implications of “flat minima surrounded by areas of relatively good performance”.
Remember, the whole point of the “highly compressed arrangements” is that we only need to lock in a few parameter values in order to get optimal behavior; once those few values are locked in, the rest of the parameters can mostly vary however they want without screwing stuff up. “Flat minimum surrounded by areas of relatively good performance” is synonymous with compression: if we can vary the parameters in lots of ways without losing much performance, that implies that all the info needed for optimal performance has been compressed into whatever-we-can’t-vary-without-losing-performance.
Now, your intuition is correct in the sense that info may be spread over many parameters; the relevant “ways to vary things” may not just be “adjust one param while holding others constant”. For instance, it might be more useful to look at parameter variation along local eigendirections of the Hessian. Then the claim would be something like “flat optimum = performance is flat along lots of eigendirections, therefore we can project the parameter-values onto the non-flat eigendirections and those projections are the ‘compressed info’”. (Tbc, I still don’t know what the best way is to characterize this sort of thing, but eigendirections are an obvious approximation which will probably work.)
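The eigendirection picture can be made concrete with a toy example (a sketch under assumed conditions: a quadratic loss whose Hessian is constructed by hand, rather than measured on a real network). The flat directions are the near-zero eigenvalues, and the “compressed info” is the projection of the parameters onto the remaining, stiff directions:

```python
import numpy as np

# Toy quadratic loss that is flat in some directions: the Hessian H
# has two large eigenvalues and three ~zero ones, so the zero-loss
# set is (locally) a 3-dimensional flat plain in a 5-dim space.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))       # random orthonormal basis
eigvals = np.array([10.0, 5.0, 1e-8, 1e-8, 1e-8])  # 2 stiff, 3 flat directions
H = Q @ np.diag(eigvals) @ Q.T                     # Hessian at the minimum

def loss(theta):
    return 0.5 * theta @ H @ theta

# Wander far along the flat directions: loss stays (near-)zero.
theta = Q @ np.array([0.0, 0.0, 3.0, -2.0, 1.0])
assert loss(theta) < 1e-6

# "Compressed info" = projection onto the non-flat eigendirections.
vals, vecs = np.linalg.eigh(H)
stiff = vecs[:, vals > 1e-4]   # eigendirections that actually matter
compressed = stiff.T @ theta   # 2 numbers instead of 5 parameters
print(compressed.shape)        # (2,)
```

Only the two projected coordinates need to be pinned down to specify optimal behavior; the other three can vary freely, which is exactly the compression claim.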
Turns out our methods are not actually very path-dependent in practice!
Yeah, I get that’s what Mingard et al. are trying to show, but the meaning of their empirical results isn’t clear to me. I’ll try to properly read the actual paper rather than the blog post before saying any more in that direction.
“Flat minimum surrounded by areas of relatively good performance” is synonymous with compression: if we can vary the parameters in lots of ways without losing much performance, that implies that all the info needed for optimal performance has been compressed into whatever-we-can’t-vary-without-losing-performance.
I get that a truly flat area is synonymous with compression—but I think being surrounded by areas of good performance is anti-correlated with compression because it indicates redundancy and less-than-maximal sensitivity.
I agree that viewing it as flat eigendimensions in parameter space is the right way to think about it, but I still worry that the same concern applies: maximal compression in this space is traded against ease of finding, for what would be a flat plain in many dimensions but a maximally steep ravine in all of the other directions. I can imagine this could be investigated with some small experiments (or they may well already exist). I can’t promise I’ll follow up, but if anyone is interested, let me know.
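For what it’s worth, a one-dimensional cartoon of such an experiment might look like this (hypothetical setup: a flat plain and a steep ravine at equal loss, with randomly sampled low-loss points standing in for randomly sampled zero-loss weight configurations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two near-zero-loss minima: a flat plain around x = 2 and a steep
# ravine around x = -2. We ask where randomly sampled low-loss
# points fall -- a proxy for "which minima dominate the volume".
def loss(x):
    plain = (x - 2.0) ** 2           # flat: loss rises slowly
    ravine = 400.0 * (x + 2.0) ** 2  # steep: loss rises 400x faster
    return np.minimum(plain, ravine)

samples = rng.uniform(-10.0, 10.0, size=1_000_000)
low = samples[loss(samples) < 0.1]             # "good performance" region
frac_plain = np.mean(np.abs(low - 2.0) < 1.0)  # fraction in the flat basin
print(f"low-loss points in the flat plain: {frac_plain:.2%}")
```

In this cartoon the flat plain dominates the low-loss volume, so random sampling almost always lands there; the open question is whether real training dynamics can nonetheless favor the ravine.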