I think there’s no such thing as parameters, just processes that produce better and better approximations to parameters. The only “real” measures of complexity are the invariants that determine the costs of those processes, which in statistical learning theory are primarily geometric (somewhat tautologically, since the process of approximation is essentially a process of probing the geometry of the governing potential near the parameter).
From that point of view, trying to conflate parameters w1, w2 such that p(x|w1) ≈ p(x|w2) is naive: w1 and w2 aren’t real, only the processes that produce better approximations to them are real, and so the ∂/∂w derivatives of p(x|w) at w1 and w2, which control those processes, are deeply important, and they can be quite different even when p(x|w1) and p(x|w2) are nearly identical.
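To make that concrete, here is a minimal numerical sketch; the toy model p(x|w) = N(x; w1·w2, 1), the particular points, and the use of JAX are my own illustrative assumptions, not anything specific from the theory. Under a true data distribution N(0, 1), the parameters w = (0, 0) and w = (3, 0) define exactly the same distribution over x, yet the Hessian of the population loss, the local geometry that any gradient-based approximation process actually responds to, is completely flat at the first point and has a stiff direction at the second.

```python
# Minimal sketch (toy model and values are illustrative assumptions):
# p(x|w) = N(x; w1*w2, 1), with the true data distribution taken to be N(0, 1).
import jax
import jax.numpy as jnp

def pop_loss(w):
    # Expected negative log-likelihood up to an additive constant when x ~ N(0, 1):
    # E[0.5 * (x - w1*w2)^2] = 0.5 * (w1*w2)^2 + const
    return 0.5 * (w[0] * w[1]) ** 2

w_a = jnp.array([0.0, 0.0])   # p(x|w_a) = N(x; 0, 1)
w_b = jnp.array([3.0, 0.0])   # p(x|w_b) = N(x; 0, 1), the very same distribution

# Same distribution, same loss...
print(pop_loss(w_a), pop_loss(w_b))   # 0.0  0.0

# ...but very different local geometry, which is what a process of
# approximating the parameter actually probes:
H = jax.hessian(pop_loss)
print(H(w_a))   # [[0. 0.] [0. 0.]]  -- every direction flat (degenerate)
print(H(w_b))   # [[0. 0.] [0. 9.]]  -- one stiff direction of curvature 9
```

So any process converging toward w_a behaves very differently from one converging toward w_b, even though the two parameters are indistinguishable as distributions.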
So I view “local geometry matters” and “the real things are processes approximating parameters, not the parameters themselves” as basically synonymous.