Sure, I agree that I didn’t put this information into the post. However, why do you need to know which θ is more likely in order to learn anything about, e.g., how neural networks generalize?
I understand that SLT has some additional content beyond what is in the post, and I’ve tried to explain how you could make that fit in this framework. I just don’t understand why that additional content is relevant, which is why I left it out.
As an additional note, I wasn’t really saying that floating-point precision is the important variable here. My point is that if you want A-complexity to match the notion of real log canonical threshold, you have to discretize SLT in a way that might not be obvious at first glance, and in a way where some conclusions end up being scale-dependent. This is why, if you’re interested in studying the relative contribution of singular points to the partition function, SLT is a better setting in which to do it. At the risk of repeating myself, I just don’t know why you would try to do that.
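For concreteness, here is the standard SLT asymptotics I have in mind (this is Watanabe’s result, not anything specific to your post), plus a heuristic for why a discretized count ends up scale-dependent:

$$Z_n = \int e^{-nK(\theta)}\,\varphi(\theta)\,d\theta \;\asymp\; C\, n^{-\lambda}(\log n)^{m-1}, \qquad V(\varepsilon) = \operatorname{vol}\{\theta : K(\theta) < \varepsilon\} \;\asymp\; c\, \varepsilon^{\lambda}(-\log\varepsilon)^{m-1},$$

where K is the population loss normalized so that K = 0 on its minimum set, λ is the RLCT, and m is its multiplicity. If you now discretize parameter space at grid scale δ, the number of grid points with loss below ε is roughly N(ε, δ) ≈ V(ε)/δ^d, so heuristically

$$\log N(\varepsilon,\delta) \;\approx\; d\log(1/\delta) \;-\; \lambda\log(1/\varepsilon) \;+\; O(\log\log(1/\varepsilon)).$$

The RLCT only shows up as the coefficient of log(1/ε); the rest of any count-based complexity depends on the cutoffs ε and δ you happened to choose, which is the scale-dependence I meant.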
In my view, there is a significant philosophical difference between SLT and your post: your post talks only about choosing macrostates, while SLT talks about choosing microstates. I’m much less qualified to know (let alone explain) the benefits of SLT, though I can speculate. If we stop training after a finite number of steps, then I think it’s helpful to know what point training is converging to. In my example, if you think it’s converging to (0,1), then stopping close to that will get you a function that doesn’t generalize too well. If you know it’s converging to (0,0), then stopping close to that will get you a much better function (possibly even exactly as good, as you pointed out, due to discretization).
Now, this logic is essentially what you’re saying in these comments! But I think that someone who read your post without prior knowledge of SLT wouldn’t figure out that training is more likely to converge to a point near (0,0) than near (0,1), whereas someone who read an SLT post instead would. In that sense, SLT is more useful.
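To make that concrete with a toy computation: I don’t know the exact loss in your example, so this uses a hypothetical stand-in with a regular (quadratic) global minimum at (0,1) and a degenerate (quartic) one at (0,0), both with loss exactly zero. It just tabulates a discretized Gibbs posterior on a grid and compares the mass near each minimum; a minimal sketch, not anyone’s actual setup:

```python
import numpy as np

# Hypothetical stand-in loss: degenerate (quartic) minimum at (0, 0),
# regular (quadratic) minimum at (0, 1); both have loss exactly 0.
def loss(x, y):
    return (x**2 + y**2)**2 * (x**2 + (y - 1.0)**2)

n = 1000          # plays the role of the sample size / inverse temperature
spacing = 0.005   # grid resolution (the "discretization scale")

xs = np.arange(-1.0, 1.0 + spacing, spacing)
ys = np.arange(-0.5, 1.5 + spacing, spacing)
X, Y = np.meshgrid(xs, ys)

# Discretized Gibbs weights exp(-n * L(theta)) on the grid, uniform prior.
W = np.exp(-n * loss(X, Y))
W /= W.sum()

# Posterior mass within a small ball around each minimum.
def mass_near(cx, cy, radius=0.25):
    return W[(X - cx)**2 + (Y - cy)**2 < radius**2].sum()

print("mass near (0,0):", mass_near(0.0, 0.0))
print("mass near (0,1):", mass_near(0.0, 1.0))
# With these settings the degenerate minimum at (0,0) captures far more
# mass, roughly by a factor of order sqrt(n), even though both points
# achieve exactly the same loss.
```

The point of the toy is just that the more singular minimum dominates the posterior as n grows, even though a pure macrostate count treats the two minima symmetrically.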
I am not confident that this is the intended benefit of SLT according to its proponents, though. And I wouldn’t be surprised if you could write a simpler explanation of this in your framework than SLT gives; I just think that this post wasn’t it.