I’d be interested in the relationship between this and Implicit Gradient Regularization and the sharp/flat minima literature. The basic idea there is to compare continuous gradient flow on the original objective with the path actually followed by SGD due to discretization. They show that the latter can be re-interpreted as optimizing a modified objective which favors flat minima (low sensitivity to parameter perturbations). This isn’t clearly the same as what you’re analyzing here, since you’re looking at variance due to sampling instead, but they might be related under appropriate regularity conditions.
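To sketch the flavor of the result (from memory, so take the exact constant with a grain of salt): backward error analysis shows that gradient descent with learning rate $h$ tracks, to higher order, the gradient flow of a modified loss

$$\tilde{L}(\theta) \;=\; L(\theta) \;+\; \frac{h}{4}\,\bigl\|\nabla L(\theta)\bigr\|^2,$$

and it’s that extra gradient-norm penalty which biases the trajectory toward flatter regions.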
Sam Smith also has this nice paper on sharp/flat minima which links a lot of previous observations together, and has some similarities to your approach here.
Haven’t thought about any of this too closely, so my apologies if these aren’t useful! Seems close enough that it might be of interest though.
My own understanding of the flat minima idea is that it’s a different thing. It’s not really about noise; it’s about gradient descent in general being a pretty shitty optimization method, which converges very poorly to sharp minima (more precisely, to minima with a high condition number). (Continuous gradient flow avoids that, but step sizes small enough to approximate the flow in practice would make GD prohibitively slow; the methods we actually use are not a good approximation of continuous flow, as I understand it.) If you want flat minima, then an optimization algorithm which converges very poorly to sharp minima could actually be a good thing, so long as you combine it with some way to escape the basins of sharp minima (e.g. the noise in SGD).
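A toy illustration of the condition-number point (my own sketch, nothing from the post): on a quadratic, the largest stable step size is set by the sharpest curvature direction, so a high condition number forces steps that make the flat directions crawl.

```python
# Toy sketch (my own, not from the post): gradient descent on an
# ill-conditioned quadratic f(x) = 0.5 * sum(lams * x**2).
# Along each eigendirection the update is x_i <- (1 - lr * lam_i) * x_i,
# so stability requires lr < 2 / max(lams); under that constraint the
# flattest direction only contracts by (1 - lr * min(lams)) per step.
import numpy as np

lams = np.array([100.0, 0.001])   # sharp direction vs. very flat direction
lr = 1.9 / lams.max()             # close to the largest stable step size
x = np.array([1.0, 1.0])

for step in range(1, 10001):
    x = x - lr * lams * x         # exact gradient step on the quadratic
    if step in (1, 100, 10000):
        print(step, x)

# The sharp coordinate is ~0 after ~100 steps, while the flat coordinate is
# still ~0.83 after 10,000 steps. Push lr much above 2/100 and the iterates
# blow up instead -- this is the sense in which GD handles sharp minima badly.
```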
That said, I haven’t read the various papers on this, so I’m at high risk of misunderstanding.
Also worth noting that there are reasons to expect convergence to flat minima besides bias in SGD itself. A flatter basin fills more of parameter space than a sharper one, so we’re more likely to initialize in a flat basin (relevant to the NTK/GP/Mingard et al. picture) or accidentally stumble into one.
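To make that concrete, here’s a throwaway 1-D sketch (entirely my own construction): two basins of equal depth, one wide and one narrow, and plain gradient descent from a random initialization ends up in the wide basin roughly in proportion to how much of the sampled interval it covers.

```python
# Toy Monte Carlo (my construction, not from the comment): a 1-D landscape
# with a wide basin and a narrow basin of equal depth. Random initialization
# followed by gradient descent lands in the wide basin in rough proportion
# to its share of the sampled interval.
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Wide well 0.125*(x+4)^2 for x < 0, narrow well 8*(x-0.5)^2 for x >= 0;
    # both reach the same height at x = 0, which acts as the barrier
    # separating the two basins.
    return 0.25 * (x + 4.0) if x < 0 else 16.0 * (x - 0.5)

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

inits = rng.uniform(-8.0, 1.0, size=2000)
finals = np.array([descend(x) for x in inits])
frac_wide = np.mean(finals < 0)
print(f"fraction ending in the wide basin: {frac_wide:.2f}")  # ~0.89 (~ 8/9)
```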