Ah, no, I suppose that part is supposed to be handled by whatever approximation process we define for Λ? That is, the “correct” definition of the “most minimal approximate summary” would implicitly constrain the possible choices of boundaries for which Λ is equivalent to X0?
Almost. The hope/expectation is that different choices yield approximately the same Λ, though still probably modulo some conditions (like e.g. sufficiently large T).
System size, i.e. number of variables.
By the way, do we need the proof of the theorem to be quite this involved? It seems we can just note that for any two (sets of) variables X1, X2 separated by distance D, the earliest sampling-step at which their values can intermingle (= their lightcones intersect) is D/2 (since even in the “fastest” case, they can’t do better than moving towards each other at one variable per sampling-step).
Yeah, that probably works.
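A minimal sanity check of that D/2 counting argument, assuming a toy 1-D lattice in which each variable’s lightcone grows by one site per sampling-step in each direction; the function and setup here are illustrative, not anything defined in the post. For odd D the earliest integer step comes out as ⌈D/2⌉.

```python
import math

def first_intersection_step(i: int, j: int) -> int:
    """Earliest sampling-step t at which the lightcones of sites i and j overlap.

    Toy model: after t steps, the lightcone of site i on a 1-D lattice with
    nearest-neighbour updates is the interval [i - t, i + t].
    """
    t = 0
    while True:
        cone_i = set(range(i - t, i + t + 1))
        cone_j = set(range(j - t, j + t + 1))
        if cone_i & cone_j:
            return t
        t += 1

# The brute-force answer matches ceil(D/2) for every tested separation D.
for d in range(0, 50):
    assert first_intersection_step(0, d) == math.ceil(d / 2)
print("lightcones of sites at distance D first intersect at step ceil(D/2)")
```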
Can you elaborate on this expectation? Intuitively, Λ should consist of a number of higher-level variables as well, and each of them should correspond to a specific set of lower-level variables: abstractions and the elements they abstract over. So for a given Λ, there should be a specific “correct” way to draw the boundaries in the low-level system.
But if ~any way of drawing the boundaries yields the same Λ, then what does this mean?
Or perhaps the “boundaries” in the mesoscale-approximation approach represent something other than the factorization of X into individual abstractions?
Λ is conceptually just the whole bag of abstractions (at a certain scale), unfactored.
Sure, but isn’t the goal of the whole agenda to show that Λ does have a certain correct factorization, i.e. that abstractions are convergent?
I suppose it may be that any choice of low-level boundaries results in the same Λ, but the Λ itself has a canonical factorization, and going from Λ back to XT reveals the corresponding canonical factorization of XT? And then depending on how close the initial choice of boundaries was to the “correct” one, Λ is easier or harder to compute (or there’s something else about the right choice that makes it nice to use).
Yes, there is a story for a canonical factorization of Λ, it’s just separate from the story in this post.