I’m not sure why we should expect that beyond the argument I already give in the post. The geometry of the loss landscape is already fully accounted for by the Boltzmann factor; what else does singular learning theory add here?
So I believe the point of SLT is that the Boltzmann-weighted integral over the state space simplifies in certain settings as the number of data points approaches infinity. That integral is going to be dominated by a narrow ‘band’ around the minimum set, and to evaluate it in general you have to consider the entire minimum set. But when there are singularities, places where there are cusps or intersections of the minimum set, the narrow band’s effective dimensionality can go up (this is illustrated in the tweet I linked). This means that as n→∞ you can just consider the behavior near the ‘cuspiest’ singularity (I think this is what the RLCT measures) to understand the whole integral.
(...uh, I think. I actually haven’t looked into the details enough to write with confidence, but the above is my impression from what reading I have done and Jesse’s tweet)
To me that just sounds like you’re saying the integral is dominated by the contribution of the simplest functions that are of minimum loss, and the contribution factor scales like e^(−O(d)) where d is the effective dimensionality near the singularity representing this function, equivalently the complexity of said function. That’s exactly what I’m saying in my post—where is the added content here?
Here’s a concrete toy example where SLT and this post give different answers (SLT is more specific). Let f(θ)(x) = θ1θ2x, and let L(f(θ)) = |f(θ)(1)|^2 = θ1^2 θ2^2. Then the minimal loss is achieved at the set of parameters where θ1 = 0 or θ2 = 0 (note that this looks like two intersecting lines, with the singularity being the intersection). Note that all θ in that set also give the same exact f(θ). The theory in your post here doesn’t say much beyond the standard point that gradient descent will (likely) select a minimal or near-minimal θ, but it can’t distinguish between the different values of θ within that minimal set.
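A tiny runnable sketch of this toy example (the code and names are my own, just to make the setup concrete):

```python
# Toy example: f(theta)(x) = theta1 * theta2 * x, with loss
# L(f(theta)) = |f(theta)(1)|^2 = theta1^2 * theta2^2.

def f(theta1, theta2):
    """Function implemented by the parameters (theta1, theta2)."""
    return lambda x: theta1 * theta2 * x

def loss(theta1, theta2):
    return (theta1 * theta2) ** 2

# Every point with theta1 = 0 or theta2 = 0 achieves the minimal loss 0,
# and all of them implement the exact same function f0(x) = 0.
for t1, t2 in [(0.0, 0.0), (0.0, 1.0), (3.0, 0.0), (0.0, -2.5)]:
    assert loss(t1, t2) == 0.0
    assert f(t1, t2)(7.0) == 0.0  # same macrostate everywhere on the minimal set
```

Since every parameter on the minimal set is the same macrostate, a theory that only looks at macrostates cannot separate them.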
SLT on the other hand says that gradient descent will be more likely to choose the specific singular value θ1 = 0 = θ2.
Now I’m not sure this example is sufficiently realistic to demonstrate why you would care about SLT’s extra specificity, since in this case I’m perfectly happy with any value of θ in the minimal set—they all give the exact same f(θ). If I were to try to generalize this into a useful example, I would try to find a case where L(f(θ)) has a minimal set that contains multiple different f(θ). For example, L only evaluates f(θ) on a subset of points (the ‘training data’) but different choices of minimal θ give different values outside of that subset of training data. Then we can consider which f(θ) has the best generalization to out-of-training data—do the parameters predicted by SLT yield f(θ) that are best at generalizing?
Disclaimer: I have a very rudimentary understanding of SLT and may be misrepresenting it.
I don’t think this representation of the theory in my post is correct. The effective dimension of the singularity near the origin is much higher, e.g. because near every other minimal point of this loss function the Hessian doesn’t vanish, while for the singularity at the origin it does vanish. If you discretized this setup by looking at it with a lattice of mesh ε, say, you would notice that the origin is surrounded by many parameters that give nearly identical loss, while near other parts of the space the number of such parameters is far fewer.
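For the toy loss L(θ) = θ1^2 θ2^2 this lattice count is easy to check directly; a quick sketch (the mesh size, cutoff, and radius are my own choices):

```python
import itertools

EPS = 0.01      # lattice mesh
CUTOFF = 1e-6   # what counts as "nearly identical" to the minimal loss

def loss(t1, t2):
    return (t1 * t2) ** 2

def near_minimal_count(center, radius=0.3):
    """Count lattice points within `radius` of `center` with loss below CUTOFF."""
    c1, c2 = center
    steps = int(radius / EPS)
    count = 0
    for i, j in itertools.product(range(-steps, steps + 1), repeat=2):
        if loss(c1 + i * EPS, c2 + j * EPS) < CUTOFF:
            count += 1
    return count

# The singular point (0, 0) is surrounded by far more near-minimal lattice
# points than the non-singular minimal point (0, 1).
assert near_minimal_count((0.0, 0.0)) > 2 * near_minimal_count((0.0, 1.0))
```

Near (0, 1) only the axis θ1 = 0 stays near-minimal, while near the origin a whole hyperbolic neighborhood |θ1θ2| ≲ √CUTOFF does.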
The reason you have to do some kind of “translation” between the two theories is that SLT can see not just exactly optimal points but also nearly optimal points, and bad singularities are surrounded by many more nearly optimal points than better-behaved singularities. You can interpret the discretized picture above as the SLT picture seen at some “resolution” or “scale” ε, i.e. if you discretized the loss function by evaluating it on a lattice with mesh ε you get my picture. Of course, this loses the information of what happens as ε→0 and n→∞ in some thermodynamic limit, which is what you recover when you do SLT.
I just don’t see what this thermodynamic limit tells you about the learning behavior of NNs that we didn’t know before. We already know NNs approximate Solomonoff induction if the A-complexity is a good approximation to Kolmogorov complexity and so forth. What additional information is gained by knowing what A looks like as a smooth function as opposed to a discrete function?
In addition, the strong dependence of SLT on A being analytic is bad, because analytic functions are rigid: their values in a small open subset determine their values globally. I can see why you need this assumption, because quantifying what happens near a singularity becomes incredibly difficult for general smooth functions, but because of the rigidity of analytic functions the approximation that “we can just pretend NNs are analytic” is more pernicious than e.g. “we can just pretend NNs are smooth”. Typical approximation theorems like Stone–Weierstrass also fail to save you because they only work in the sup norm, and that’s completely useless for determining behavior at singularities. So I’ve yet to be convinced that the additional details in SLT provide a more useful account of NN learning than my simple description above.
The effective dimension of the singularity near the origin is much higher, e.g. because near every other minimal point of this loss function the Hessian doesn’t vanish, while for the singularity at the origin it does vanish. If you discretized this setup by looking at it with a lattice of mesh ε, say, you would notice that the origin is surrounded by many parameters that give nearly identical loss, while near other parts of the space the number of such parameters is far fewer.
As I read it, the arguments you make in the original post depend only on the macrostate f, which is the same for both the singular and non-singular points of the minimal loss set (in my example), so they can’t distinguish these points at all. I see that you’re also applying the logic to points near the minimal set and arguing that the nearly-optimal points are more abundant near the singularities than near the non-singularities. I think that’s a significant point not made at all in your original post that brings it closer to SLT, so I’d encourage you to add it to the post.
I think there’s also a terminology mismatch between your post and SLT. You refer to singularities of A (i.e. its derivative is degenerate) while SLT refers to singularities of the set of minimal loss parameters. The point θ=(0,1) in my example is not singular at all in SLT but A(θ) is singular. This terminology collision makes it sound like you’ve recreated SLT more than you actually have.
I’m not too sure how to respond to this comment because it seems like you’re not understanding what I’m trying to say.
I agree there’s some terminology mismatch, but this is inevitable because SLT is a continuous model and my model is discrete. If you want to translate between them, you need to imagine discretizing SLT, which means you discretize both the codomain of the neural network and the space of functions you’re trying to represent in some suitable way. If you do this, then you’ll notice that the worse a singularity is, the lower the A-complexity of the corresponding discrete function will turn out to be, because many of the neighbors map to the same function after discretization.
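In the toy example this can be seen by discretizing both spaces and comparing preimage sizes; a sketch (the mesh values are my own choices):

```python
from collections import Counter

EPS, ETA = 0.02, 0.05   # parameter mesh and function-space mesh
STEPS = int(1.0 / EPS)

# Map each lattice parameter (t1, t2) in [-1, 1]^2 to the nearest
# discretized function g_k(x) = k * ETA * x, i.e. k = round(t1 * t2 / ETA).
preimage_sizes = Counter()
for i in range(-STEPS, STEPS + 1):
    for j in range(-STEPS, STEPS + 1):
        k = round((i * EPS) * (j * EPS) / ETA)
        preimage_sizes[k] += 1

# The zero function (k = 0), whose preimage contains the singularity at the
# origin, has the largest preimage of all the discretized functions -- i.e.
# the lowest "A-complexity" in the post's sense.
assert preimage_sizes[0] == max(preimage_sizes.values())
```

After discretization, many neighboring parameters collapse onto the zero function, which is exactly the "worse singularity, lower A-complexity" translation described above.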
The content that SLT adds on top of this is what happens in the limit where your discretization becomes infinitely fine and your dataset becomes infinitely large, but your model doesn’t become infinitely large. In this case, SLT claims that the worst singularities dominate the equilibrium behavior of SGD, which I agree is an accurate claim. However, I’m not sure what this claim is supposed to tell us about how NNs learn. I can’t make any novel predictions about NNs with this knowledge that I couldn’t before.
In this case, SLT claims that the worst singularities dominate the equilibrium behavior of SGD, which I agree is an accurate claim. However, I’m not sure what this claim is supposed to tell us about how NNs learn
I think the implied claim is something like “analyzing the singularities of the model will also be helpful for understanding SGD in more realistic settings” or maybe just “investigating this area further will lead to insights which are applicable in more realistic settings”. I mostly don’t buy it myself.
the worse a singularity is, the lower the A-complexity of the corresponding discrete function will turn out to be
This is where we diverge. Please let me know where you think my error is in the following. Returning to my explicit example (I wrote f(θ) originally, but will instead use A(θ) here since that matches your definitions):
1. Let f0(x)=0x be the constant zero function and S=A−1(f0).
2. Observe that S is the minimal loss set under our loss function and also S is the set of parameters θ=(θ1,θ2) where θ1=0 or θ2=0.
3. Let α, β ∈ S. Then A(α) = f0 = A(β) by definition of S. Therefore, c(A(α)) = c(A(β)).
4. SLT says that θ=(0,0) is a singularity of S but that θ=(0,1)∈S is not a singularity.
5. Therefore, there exists a singularity (according to SLT) which has identical A-complexity (and also loss) to a non-singular point, contradicting the statement of yours that I quoted.
You need to discretize the function before taking preimages. If you just take preimages in the continuous setting, of course you’re not going to see any of the interesting behavior SLT is capturing.
In your case, let’s say that we discretize the function space by choosing which one of the functions g_k(x) = kηx you’re closest to, for some η>0. In addition, we also discretize the domain of A by looking at the lattice (εZ)^2 for some ε>0. Now, you’ll notice that there’s a disk of radius ∼√η around the origin which contains only parameters mapping to the zero function, and as our lattice has fundamental area ε^2 this means the “relative weight” of the singularity at the origin is like O(η/ε^2).
In contrast, all other points mapping to the zero function only get a relative weight of O(η/(kε^2)), where kε is the absolute value of their nonzero coordinate. Cutting off the domain somewhere to make it compact and summing over all kε > √η to exclude the disk at the origin gives O(√η/ε) for the total contribution of all the other points in the minimum loss set. So in the limit η/ε^2 → 0 the singularity at the origin accounts for almost everything in the preimage of A. The origin is privileged in my picture just as it is in the SLT picture.
I think your mistake is that you’re trying to translate between these two models too literally, when you should be thinking of my model as a discretization of the SLT model. Because it’s a discretization at a particular scale, it doesn’t capture what happens as the scale is changing. That’s the main shortcoming relative to SLT, but it’s not clear to me how important capturing this thermodynamic-like limit is to begin with.
Again, maybe I’m misrepresenting the actual content of SLT here, but it’s not clear to me what SLT says aside from this, so...
Everything I wrote in steps 1-4 was done in a discrete setting (otherwise |A^(−1)(f0)| is not finite and the whole thing falls apart). I was intending θ to be pairs of floating point numbers and A to map floats to floats.
However, using that, I think I see what you’re trying to say: θ1θ2 will equal zero in some cases where θ1 and θ2 are both non-zero but very small, and will multiply down to zero due to the limits of floating point numbers. Therefore the pre-image A^(−1)(f0) is actually larger than I claimed, and specifically contains a small neighborhood of (0,0).
That doesn’t invalidate my calculation showing that (0,0) is just as likely as (0,1), though: they still have the same loss and A-complexity (since they have the same macrostate). On the other hand, you’re saying that there are points in parameter space that are very close to (0,0) that are also in this same pre-image and also equally likely. Therefore even if (0,0) is just as likely as (0,1), being near (0,0) is more likely than being near (0,1). I think it’s fair to say that that is at least qualitatively the same as what SLT gives in the continuous version of this.
However, I do think this result “happened” due to factors that weren’t discussed in your original post, which makes it sound like it is “due to” A-complexity. A-complexity is a function of the macrostate, which is the same at all of these points and so does not distinguish between (0,0) and (0,1) at all. In other words, your post tells me which f is likely while SLT tells me which θ is likely—these are not the same thing. But you clearly have additional ideas not stated in the post that also help you figure out which θ is likely. Until that is clarified, I think you have a mental theory of this which is very different from what you wrote.
Sure, I agree that I didn’t put this information into the post. However, why do you need to know which θ is more likely to know anything about e.g. how neural networks generalize?
I understand that SLT has some additional content beyond what is in the post, and I’ve tried to explain how you could make that fit in this framework. I just don’t understand why that additional content is relevant, which is why I left it out.
As an additional note, I wasn’t really talking about floating point precision being the important variable here. I’m just saying that if you want A-complexity to match the notion of real log canonical threshold, you have to discretize SLT in a way that might not be obvious at first glance, and in a way where some conclusions end up being scale-dependent. This is why if you’re interested in studying this question of the relative contribution of singular points to the partition function, SLT is a better setting to be doing it in. At the risk of repeating myself, I just don’t know why you would try to do that.
In my view, it’s a significant philosophical difference between SLT and your post that your post talks only about choosing macrostates while SLT talks about choosing microstates. I’m much less qualified to know (let alone explain) the benefits of SLT, though I can speculate. If we stop training after a finite number of steps, then I think it’s helpful to know where it’s converging to. In my example, if you think it’s converging to (0,1), then stopping close to that will get you a function that doesn’t generalize too well. If you know it’s converging to (0,0), then stopping close to that will get you a much better function—though possibly one exactly as good, as you pointed out, due to discretization.
Now this logic is basically exactly what you’re saying in these comments! But I think if someone read your post without prior knowledge of SLT, they wouldn’t figure out that it’s more likely to converge to a point near (0,0) than near (0,1). If they read an SLT post instead, they would figure that out. In that sense, SLT is more useful.
I am not confident that that is the intended benefit of SLT according to its proponents, though. And I wouldn’t be surprised if you could write a simpler explanation of this in your framework than SLT gives, I just think that this post wasn’t it.
I’m explaining why singularities (places where the minimum-loss set has self-intersections) would also tend to have higher effective dimensionality (the number of degrees of freedom you can vary while obtaining similar loss). That’s what’s novel about SLT as compared with previous broad-basin theories.
I don’t think this is something that requires explanation, though. If you take an arbitrary geometric object in maths, a good definition of its singular points will be “points where the tangent space has higher dimension than expected”. If this is the minimum set of a loss function and the tangent space has higher dimension than expected, that intuitively means that locally there are more directions you can move along without changing the loss function, probably suggesting that there are more directions you can move along without changing the function being implemented at all. So the function being implemented is simple, and the rest of the argument works as I outline it in the post.
I think I understand what you and Jesse are getting at, though: there’s a particular behavior that only becomes visible in the smooth or analytic setting, which is that minima of the loss function that are more singular become more dominant as n→∞ in the Boltzmann integral, as opposed to maintaining just the same dominance factor of e^(−O(d)). You don’t see this in the discrete case because there’s a finite nonzero gap in loss between first-best and second-best fits, and so the second-best fits are exponentially punished in the limit and become irrelevant, while in the singular case any first-best fit has some second-best “space” surrounding it whose volume is more concentrated towards the singularity point.
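A quick numerical illustration of this dominance (my own sketch, not from either post): compare the Boltzmann integrals of an ordinary quadratic minimum with the singular L(x, y) = x^2 y^2, which as I understand it has the smaller RLCT, as n grows.

```python
import math

def partition(L, n, m=401):
    """Midpoint-rule estimate of Z_n = ∫∫ exp(-n L(x, y)) dx dy over [-1, 1]^2."""
    h = 2.0 / m
    total = 0.0
    for i in range(m):
        x = -1.0 + (i + 0.5) * h
        for j in range(m):
            y = -1.0 + (j + 0.5) * h
            total += math.exp(-n * L(x, y))
    return total * h * h

def quad(x, y): return x * x + y * y   # ordinary (non-degenerate) minimum
def sing(x, y): return (x * y) ** 2    # singular minimum along the two axes

# As n grows, the singular minimum accounts for an ever larger share of the
# Boltzmann integral: the ratio Z_sing / Z_quad keeps increasing.
ratios = [partition(sing, n) / partition(quad, n) for n in (10, 100, 1000)]
assert ratios[0] < ratios[1] < ratios[2]
```

Both losses have minimum value 0, so the growing ratio is purely a geometry effect: the singular minimum has far more nearly-optimal volume around it.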
While I understand that, I’m not too sure what predictions you would make about the behavior of neural networks on the basis of this observation. For instance, if this smooth behavior is really essential to the generalization of NNs, wouldn’t we predict that generalization would become worse as people switch to lower precision floating point numbers? I don’t think that prediction would have held up very well if someone had made it 5 years ago.
If this is the minimum set of a loss function and the tangent space has higher dimension than expected, that intuitively means that locally there are more directions you can move along without changing the loss function
I think it is pretty obvious in the case of valleys without self-intersections, but that’s just the broad basin case. As for the self-intersection case, well, if it’s obvious to you that singularities will be surrounded by narrow bands of larger dimensionality—including in cases where that “dimensionality” is fractional—then you have a better intuition for the geometry of singularities than me and, I suspect, most other readers, so it might be helpful to make that aspect explicit.
Say that you have a loss function L: R^n → R. The minimum loss set is probably not exactly {∇L = 0}, but it has something to do with that, so let’s pretend that it’s exactly that for now.
This is a collection of n equations that are generically independent and so should define a subset of dimension zero, i.e. a collection of points in R^n. However, there might be points at which the equations given by the vanishing of the partial derivatives fail to be independent, and there we get something of positive dimension.
In these cases, what happens is that the gradient ∇L itself has vanishing derivatives in some directions. In other words, the Hessian matrix ∇^2 L fails to be of full rank. Say that this matrix has rank r at a specific singular point p ∈ R^n, and consider the set {L < L_min + ε}. Diagonalizing ∇^2 L will generically bring L into a form where it’s a sum of r quadratic terms plus higher-order cubic terms, and locally the volume contribution to this set around p will be something of order ε^(r/2) · ε^((n−r)/3) = ε^(r/6 + n/3). The worse the singularity, the smaller the rank r and the greater the volume contribution of the singularity to the set {L < L_min + ε}.
The worst singularities dominate the behavior at small ε because you can move “much further” along directions where L scales in a cubic fashion than along directions where it scales in a quadratic fashion, so those directions are the only ones that “count” when you compare singularities. The tangent space intuition doesn’t apply directly here, but something like it still applies, in the sense that the worse a singularity, the more directions you have to move away from it without changing the value of the loss very much.
Is this intuitive now? I’m not sure what more to do to make the result intuitive.
Hmm, what you’re describing is still in what I was referring to as “the broad basin regime”. Sorry if I was unclear—I was thinking of any case where there is no self-intersection of the minimum loss manifold as being a “broad basin”. I think the main innovation of SLT occurs elsewhere.
Look at the image in the tweet I linked. At the point where the curves intersect, it’s not just that the Hessian fails to be of full rank: it’s not even well-defined. The image illustrates how volume clusters around a single point where the singularity is, not merely around the minimal-loss manifold with the greatest dimensionality. That is what is novel about singular learning theory.
Can you give an example of L which has the mode of singularity you’re talking about? I don’t think I’m quite following what you’re talking about here.
In SLT L is assumed analytic, so I don’t understand how the Hessian can fail to be well-defined anywhere. It’s possible that the Hessian vanishes at some point, suggesting that the singularity there is even worse than quadratic, e.g. L(x, y) = x^2 y^2 at the origin or something like that. But even in this regime essentially the same logic is going to apply—the worse the singularity, the further away you can move from it without changing the value of L very much, and accordingly the singularity contributes more to the volume of the set {L(x) < L_min + ε} as ε→0.
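To make the volume claim concrete, here is a small Monte Carlo sketch (my own illustration) comparing the near-minimal volume of the singular L(x, y) = x^2 y^2 with that of an ordinary quadratic minimum:

```python
import random

random.seed(0)
N = 200_000
EPS_LEVEL = 1e-4

def sublevel_fraction(L, eps):
    """Fraction of the box [-1, 1]^2 lying in the sublevel set {L < eps}."""
    hits = 0
    for _ in range(N):
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        if L(x, y) < eps:
            hits += 1
    return hits / N

frac_quad = sublevel_fraction(lambda x, y: x * x + y * y, EPS_LEVEL)  # ~ eps
frac_sing = sublevel_fraction(lambda x, y: (x * y) ** 2, EPS_LEVEL)   # ~ sqrt(eps) * log(1/eps)

# The singular minimum is surrounded by vastly more near-minimal volume.
assert frac_sing > 10 * frac_quad
```

The quadratic sublevel set has area of order ε, while the singular one has area of order √ε·log(1/ε), which shrinks far more slowly as ε→0.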
In SLT L is assumed analytic, so I don’t understand how the Hessian can fail to be well-defined
Yeah, sorry, that was probably needlessly confusing; I was just referencing the image in Jesse’s tweet for ease of illustration (you’re right that it’s not analytic; I’m not sure what’s going on there). The Hessian could also just be 0 at a self-intersection point, like in the example you gave. That’s the sort of case I had in mind. I was confused by your earlier comment because it sounded like you were just describing a valley of dimension r, but as you say there could be isolated points like that also.
I still maintain that this behavior—of volume clustering near singularities when considering a narrow band about the loss minimum—is the main distinguishing feature of SLT and so could use a mention in the OP.
So I believe the point of SLT is that the Boltzmann-weighted integral over the state-space simplifies in certain settings as the number of data points approaches infinity. That integral is going to be dominated by a narrow ‘band’ around the minimum set, and to evaluate it generally you have to consider the entire minimum set. But when there are singularities, places where there are cusps or intersections of the minimum set, the narrow band’s effective dimensionality can go up(this is illustrated in the tweet I linked). This means that as n→∞ you can just consider the behavior near the ‘cuspiest’ singularity(I think this is what the RLCT measures) to understand the whole integral.
(...uh, I think. I actually haven’t looked into the details enough to write with confidence, but the above is my impression from what reading I have done and jesse’s tweet)
To me that just sounds like you’re saying the integral is dominated by the contribution of the simplest functions that are of minimum loss, and the contribution factor scales like e−O(d) where d is the effective dimensionality near the singularity representing this function, equivalently the complexity of said function. That’s exactly what I’m saying in my post—where is the added content here?
Here’s a concrete toy example where SLT and this post give different answers (SLT is more specific). Let f(θ)(x)=θ1θ2x. And let L(f(θ))=|f(θ)(1)|2=θ21θ22. Then the minimal loss is achieved at the set of parameters where θ1=0 or θ2=0 (note that this looks like two intersecting lines, with the singularity being the intersection). Note that all θ in that set also give the same exact f(θ). The theory in your post here doesn’t say much beyond the standard point that gradient descent will (likely) select a minimal or near-minimal θ, but it can’t distinguish between the different values of θ within that minimal set.
SLT on the other hand says that gradient descent will be more likely to choose the specific singular value θ1=0=θ2 .
Now I’m not sure this example is sufficiently realistic to demonstrate why you would care about SLT’s extra specificity, since in this case I’m perfectly happy with any value of θ in the minimal set—they all give the exact same f(θ). If I were to try to generalize this into a useful example, I would try to find a case where L(f(θ)) has a minimal set that contains multiple different f(θ). For example, L only evaluates f(θ) on a subset of points (the ‘training data’) but different choices of minimal θ give different values outside of that subset of training data. Then we can consider which f(θ) has the best generalization to out-of-training data—do the parameters predicted by SLT yield f(θ) that are best at generalizing?
Disclaimer: I have a very rudimentary understanding of SLT and may be misrepresenting it.
I don’t think this representation of the theory in my post is correct. The effective dimension of the singularity near the origin is much higher, e.g. because near every other minimal point of this loss function the Hessian doesn’t vanish, while for the singularity at the origin it does vanish. If you discretized this setup by looking at it with a lattice of mesh ε, say, you would notice that the origin is surrounded by many parameters that give nearly identical loss, while near other parts of the space the number of such parameters is far fewer.
The reason you have to do some kind of “translation” between the two theories is that SLT can see not just exactly optimal points but also nearly optimal points, and bad singularities are surrounded by many more nearly optimal points than better-behaved singularities. You can interpret the discretized picture above as the SLT picture seen at some “resolution” or “scale” ε, i.e. if you discretized the loss function by evaluating it on a lattice with mesh ε you get my picture. Of course, this loses the information of what happens as ε→0 and n→∞ in some thermodynamic limit, which is what you recover when you do SLT.
I just don’t see what this thermodynamic limit tells you about the learning behavior of NNs that we didn’t know before. We already know NNs approximate Solomonoff induction if the A-complexity is a good approximation to Kolmogorov complexity and so forth. What additional information is gained by knowing what A looks like as a smooth function as opposed to a discrete function?
In addition, the strong dependence of SLT on A being analytic is bad, because analytic functions are rigid: their value in a small open subset determines their value globally. I can see why you need this assumption because quantifying what happens near a singularity becomes incredibly difficult for general smooth functions, but because of the rigidity of analytic functions the approximation that “we can just pretend NNs are analytic” is more pernicious than e.g. “we can just pretend NNs are smooth”. Typical approximation theorems like Stone-Weierstrass also fail to save you because they only work in the sup-norm and that’s completely useless for determining behavior at singularities. So I’m yet to be convinced that the additional details in SLT provide a more useful account of NN learning than my simple description above.
As I read it, the arguments you make in the original post depend only on the macrostate f, which is the same for both the singular and non-singular points of the minimal loss set (in my example), so they can’t distinguish these points at all. I see that you’re also applying the logic to points near the minimal set and arguing that the nearly-optimal points are more abundant near the singularities than near the non-singularities. I think that’s a significant point not made at all in your original point that brings it closer to SLT, so I’d encourage you to add it to the post.
I think there’s also terminology mismatch between your post and SLT. You refer to singularities of A(i.e. its derivative is degenerate) while SLT refers to singularities of the set of minimal loss parameters. The point θ=(0,1) in my example is not singular at all in SLT but A(θ) is singular. This terminology collision makes it sound like you’ve recreated SLT more than you actually have.
I’m not too sure how to respond to this comment because it seems like you’re not understanding what I’m trying to say.
I agree there’s some terminology mismatch, but this is inevitable because SLT is a continuous model and my model is discrete. If you want to translate between them, you need to imagine discretizing SLT, which means you discretize both the codomain of the neural network and the space of functions you’re trying to represent in some suitable way. If you do this, then you’ll notice that the worse a singularity is, the lower the A-complexity of the corresponding discrete function will turn out to be, because many of the neighbors map to the same function after discretization.
The content that SLT adds on top of this is what happens in the limit where your discretization becomes infinitely fine and your dataset becomes infinitely large, but your model doesn’t become infinitely large. In this case, SLT claims that the worst singularities dominate the equilibrium behavior of SGD, which I agree is an accurate claim. However, I’m not sure what this claim is supposed to tell us about how NNs learn. I can’t make any novel predictions about NNs with this knowledge that I couldn’t before.
I think the implied claim is something like “analyzing the singularities of the model will also be helpful for understanding SGD in more realistic settings” or maybe just “investigating this area further will lead to insights which are applicable in more realistic settings”. I mostly don’t buy it myself.
This is where we diverge. Please let me know where you think my error is in the following. Returning to my explicit example (though I wrote f(θ) originally but will instead use A(θ) in this post since that matches your definitions).
1. Let f0(x)=0x be the constant zero function and S=A−1(f0).
2. Observe that S is the minimal loss set under our loss function and also S is the set of parameters θ=(θ1,θ2) where θ1=0 or θ2=0.
3. Let α,β∈S . Then A−1(α)=f0=A−1(β) by definition of S. Therefore, c(A(α))=c(A(β)).
4. SLT says that θ=(0,0) is a singularity of S but that θ=(0,1)∈S is not a singularity.
5. Therefore, there exists a singularity (according to SLT) which has identical A-complexity (and also loss) as a non-singular point, contradicting your statement I quote.
You need to discretize the function before taking preimages. If you just take preimages in the continuous setting, of course you’re not going to see any of the interesting behavior SLT is capturing.
In your case, let’s say that we discretize the function space by choosing which one of the functions gk(x)=kηx you’re closest to for some η>0. In addition, we also discretize the codomain of A by looking at the lattice (εZ)2 for some ε>0. Now, you’ll notice that there’s a radius ∼√η disk around the origin which contains only functions mapping to the zero function, and as our lattice has fundamental area ε2 this means the “relative weight” of the singularity at the origin is like O(η/ε2).
In contrast, all other points mapping to the zero function only get a relative weight of O(η/(kε2)) where kε is the absolute value of their nonzero coordinate. Cutting off the domain somewhere to make it compact and summing over all kε>√η to exclude the disk at the origin gives O(√η/ε) for the total contribution of all the other points in the minimum loss set. So in the limit η/ε2→0 the singularity at the origin accounts for almost everything in the preimage of A. The origin is privileged in my picture just as it is in the SLT picture.
I think your mistake is that you’re trying to translate between these two models too literally, when you should be thinking of my model as a discretization of the SLT model. Because it’s a discretization at a particular scale, it doesn’t capture what happens as the scale is changing. That’s the main shortcoming relative to SLT, but it’s not clear to me how important capturing this thermodynamic-like limit is to begin with.
Again, maybe I’m misrepresenting the actual content of SLT here, but it’s not clear to me what SLT says aside from this, so...
Everything I wrote in steps 1–4 was done in a discrete setting (otherwise |A⁻¹(f₀)| is not finite and the whole thing falls apart). I was intending θ to be pairs of floating point numbers and the functions A(θ) to be maps from floats to floats.
However, using that I think I see what you’re trying to say: θ₁θ₂ will equal zero in some cases where θ₁ and θ₂ are both non-zero but very small, and will multiply down to zero due to the limits of floating point numbers. Therefore the preimage A⁻¹(f₀) is actually larger than I claimed, and specifically contains a small neighborhood of (0,0).
That doesn’t invalidate my calculation showing that (0,0) is exactly as likely as (0,1), though: they still have the same loss and A-complexity (since they have the same macrostate). On the other hand, you’re saying that there are points in parameter space very close to (0,0) that are also in this same preimage and also equally likely. Therefore even if (0,0) is just as likely as (0,1), being near (0,0) is more likely than being near (0,1). I think it’s fair to say that this is at least qualitatively the same as what SLT gives in the continuous version of this.
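To make the floating point behavior concrete (a minimal sketch; the magnitude 1e-200 is just an arbitrary value small enough that its square underflows in IEEE double precision):

```python
# Near (0, 0): both coordinates are nonzero, but their product underflows
# to exactly 0.0 (1e-400 is below the smallest subnormal double, ~5e-324),
# so the implemented function x -> theta1 * theta2 * x is exactly the zero
# function. This point is in the preimage of f0 despite lying on neither axis.
theta1, theta2 = 1e-200, 1e-200
print(theta1 * theta2 == 0.0)  # True

# Near (0, 1): an equally tiny perturbation does NOT land in that preimage,
# because 1e-200 * 1.0 is representable and nonzero.
theta1, theta2 = 1e-200, 1.0
print(theta1 * theta2 == 0.0)  # False
```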
However, I do think this result “happened” due to factors that weren’t discussed in your original post, which makes it sound like it is “due to” A-complexity. A-complexity is a function of the macrostate, which is the same at all of these points and so does not distinguish between (0,0) and (0,1) at all. In other words, your post tells me which f is likely while SLT tells me which θ is likely—these are not the same thing. But you clearly have additional ideas not stated in the post that also help you figure out which θ is likely. Until that is clarified, I think you have a mental theory of this which is very different from what you wrote.
Sure, I agree that I didn’t put this information into the post. However, why do you need to know which θ is more likely to know anything about e.g. how neural networks generalize?
I understand that SLT has some additional content beyond what is in the post, and I’ve tried to explain how you could make that fit in this framework. I just don’t understand why that additional content is relevant, which is why I left it out.
As an additional note, I wasn’t really talking about floating point precision being the important variable here. I’m just saying that if you want A-complexity to match the notion of real log canonical threshold, you have to discretize SLT in a way that might not be obvious at first glance, and in a way where some conclusions end up being scale-dependent. This is why if you’re interested in studying this question of the relative contribution of singular points to the partition function, SLT is a better setting to be doing it in. At the risk of repeating myself, I just don’t know why you would try to do that.
In my view, a significant philosophical difference between SLT and your post is that your post talks only about choosing macrostates while SLT talks about choosing microstates. I’m much less qualified to know (let alone explain) the benefits of SLT, though I can speculate. If we stop training after a finite number of steps, then I think it’s helpful to know where training is converging to. In my example, if you think it’s converging to (0,1), then stopping close to that will get you a function that doesn’t generalize too well. If you know it’s converging to (0,0), then stopping close to that will get you a much better function (possibly exactly as good, as you pointed out, due to discretization).
Now this logic is basically exactly what you’re saying in these comments! But I think if someone read your post without prior knowledge of SLT, they wouldn’t figure out that it’s more likely to converge to a point near (0,0) than near (0,1). If they read an SLT post instead, they would figure that out. In that sense, SLT is more useful.
I am not confident that that is the intended benefit of SLT according to its proponents, though. And I wouldn’t be surprised if you could write a simpler explanation of this in your framework than SLT gives, I just think that this post wasn’t it.
I’m explaining why singularities (places where the minimum-loss set has self-intersections) would also tend to have higher effective dimensionality (the number of degrees of freedom you can vary while obtaining similar loss). That’s what’s novel about SLT as compared with previous broad-basin theories.
I don’t think this is something that requires explanation, though. If you take an arbitrary geometric object in maths, a good definition of its singular points will be “points where the tangent space has higher dimension than expected”. If this is the minimum set of a loss function and the tangent space has higher dimension than expected, that intuitively means that locally there are more directions you can move along without changing the loss function, probably suggesting that there are more directions you can move along without changing the function being implemented at all. So the function being implemented is simple, and the rest of the argument works as I outline it in the post.
I think I understand what you and Jesse are getting at, though: there’s a particular behavior that only becomes visible in the smooth or analytic setting, which is that minima of the loss function that are more singular become more dominant in the Boltzmann integral as n → ∞, as opposed to maintaining just the same dominance factor of e^(−O(d)). You don’t see this in the discrete case because there’s a finite nonzero gap in loss between first-best and second-best fits, so the second-best fits are exponentially punished in the limit and become irrelevant, while in the singular case any first-best fit has some second-best “space” surrounding it whose volume is more concentrated towards the singularity point.
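As a toy numeric illustration of this (my own check, not taken from the post or the SLT literature): compare the partition function Z(n) = ∬ exp(−nL) over [−1, 1]² for the non-singular loss L = θ₁² + θ₂², which scales like 1/n, against the singular L = θ₁²θ₂², which falls only like log(n)/√n. The singular loss dominates by a growing factor as n → ∞, even though both losses have minimum value exactly 0.

```python
import numpy as np

def partition(loss, n, m=1001):
    """Grid estimate of Z(n) = integral of exp(-n * loss) over [-1, 1]^2."""
    xs = np.linspace(-1.0, 1.0, m)
    X, Y = np.meshgrid(xs, xs)
    h = xs[1] - xs[0]
    return float(np.exp(-n * loss(X, Y)).sum() * h * h)

quadratic = lambda x, y: x**2 + y**2  # non-singular minimum at the origin
singular = lambda x, y: (x * y)**2    # minimum set {x=0} ∪ {y=0}, singular at (0,0)

ratios = []
for n in [10, 100, 1000]:
    ratios.append(partition(singular, n) / partition(quadratic, n))
    print(n, ratios[-1])  # the ratio keeps growing with n
```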
While I understand that, I’m not too sure what predictions you would make about the behavior of neural networks on the basis of this observation. For instance, if this smooth behavior is really essential to the generalization of NNs, wouldn’t we predict that generalization would become worse as people switch to lower precision floating point numbers? I don’t think that prediction would have held up very well if someone had made it 5 years ago.
I think it is pretty obvious in the case of valleys without self-intersections, but that’s just the broad basin case. As for the self-intersection case, well, if it’s obvious to you that singularities will be surrounded by narrow bands of larger dimensionality—including in cases where that “dimensionality” is fractional—then you have a better intuition for the geometry of singularities than me and, I suspect, most other readers, so it might be helpful to make that aspect explicit.
Say that you have a loss function L: ℝⁿ → ℝ. The minimum-loss set is probably not exactly {∇L = 0}, but it has something to do with that set, so let’s pretend that it’s exactly that for now.
This is a collection of n equations that are generically independent, so it should define a subset of dimension zero, i.e. a collection of isolated points in ℝⁿ. However, there might be points at which the vanishing of the partial derivatives doesn’t give independent equations, and there we get a solution set of positive dimension.
In these cases, what happens is that the gradient ∇L itself has vanishing derivatives in some directions; in other words, the Hessian ∇²L fails to be of full rank. Say that this matrix has rank r at a specific singular point p ∈ ℝⁿ, and consider the set {L < L_min + ε}. Diagonalizing ∇²L will generically bring L into a form where it’s a sum of r quadratic terms plus higher-order (cubic and beyond) terms, and locally the volume contribution to this set around p will be of order ε^(r/2) · ε^((n−r)/3) = ε^(r/6 + n/3). The worse the singularity, the smaller the rank r and the greater the volume contribution of the singularity to the set {L < L_min + ε}.
The worst singularities dominate the behavior at small ε because you can move “much further” along directions where L scales cubically than along directions where it scales quadratically, so those dimensions are the only ones that “count” when you compare singularities. The tangent space intuition doesn’t apply directly here, but something like it still does, in the sense that the worse a singularity, the more directions you can move away from it in without changing the value of the loss very much.
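Here’s a small Monte Carlo check of that exponent (my own sketch; I use L(x, y) = x² + |y|³ as a stand-in loss with one quadratic and one cubic direction, so r = 1, n = 2, and the predicted volume exponent is r/6 + n/3 = 5/6):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(2_000_000, 2))  # uniform samples in [-1, 1]^2
L = pts[:, 0] ** 2 + np.abs(pts[:, 1]) ** 3        # one quadratic + one cubic direction

eps = np.array([1e-4, 1e-3, 1e-2])
vols = np.array([4.0 * np.mean(L < e) for e in eps])  # Monte Carlo vol of {L < eps}

# Fit log(vol) against log(eps); the slope estimates the volume exponent.
slope = np.polyfit(np.log(eps), np.log(vols), 1)[0]
print(slope)  # should come out close to 5/6 ≈ 0.83
```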
Is this intuitive now? I’m not sure what more to do to make the result intuitive.
Hmm, what you’re describing is still in what I was referring to as “the broad basin regime”. Sorry if I was unclear—I was thinking of any case where there is no self-intersection of the minimum loss manifold as being a “broad basin”. I think the main innovation of SLT occurs elsewhere.
Look at the image in the tweet I linked. At the point where the curves intersect, it’s not just that the Hessian fails to be of full rank; it’s not even well-defined. The image illustrates how volume clusters around a single point where the singularity is, not merely around the minimal-loss manifold with the greatest dimensionality. That is what is novel about singular learning theory.
Can you give an example of an L which has the kind of singularity you’re talking about? I don’t think I’m quite following what you’re describing here.
In SLT, L is assumed analytic, so I don’t understand how the Hessian can fail to be well-defined anywhere. It’s possible that the Hessian vanishes at some point, suggesting that the singularity there is even worse than quadratic, e.g. L(x, y) = x²y² at the origin, or something like that. But even in this regime essentially the same logic is going to apply: the worse the singularity, the further away you can move from it without changing the value of L very much, and accordingly the singularity contributes more to the volume of the set {L < L_min + ε} as ε → 0.
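For this concrete L(x, y) = x²y², the volume claim is easy to check numerically (a sketch of my own): inside [−1, 1]² the sublevel set {x²y² < ε} has volume of order √ε · log(1/ε), while a non-degenerate minimum like x² + y² has sublevel volume exactly πε, so the degenerate singularity’s share of the near-minimum volume blows up as ε → 0.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(2_000_000, 2))
x, y = pts[:, 0], pts[:, 1]

ratios = []
for eps in [1e-2, 1e-4, 1e-6]:
    v_degenerate = 4.0 * np.mean((x * y) ** 2 < eps)  # MC volume of {x^2 y^2 < eps}
    v_quadratic = np.pi * eps  # exact volume of {x^2 + y^2 < eps}, a disk
    ratios.append(v_degenerate / v_quadratic)
    print(eps, ratios[-1])  # the ratio grows without bound as eps shrinks
```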
Yeah, sorry, that was probably needlessly confusing; I was just referencing the image in Jesse’s tweet for ease of illustration (you’re right that it’s not analytic; I’m not sure what’s going on there). The Hessian could also just be 0 at a self-intersection point, like in the example you gave. That’s the sort of case I had in mind. I was confused by your earlier comment because it sounded like you were just describing a valley of dimension r, but as you say there could be isolated points like that too.
I still maintain that this behavior—of volume clustering near singularities when considering a narrow band about the loss minimum—is the main distinguishing feature of SLT and so could use a mention in the OP.