I don’t understand the mechanism that lets you find some clever way to check which (gauge, wavefunction) pairs are legal
I’m not really sure how gauge-fixing works in QM, so I can’t comment here. But I don’t think it matters: there’s no need to check which pairs are “legal”; you just run all possible pairs in parallel and see what observables they output. Pairs which are in fact gauge-equivalent will produce the same observables by definition, and will accrue probability to the same strings.
Perhaps you’re worried that physically-different worlds could end up contributing identical strings of observables? Possibly, but (a) I think if all possible strings of observables are the same they should be considered equivalent, so that could be one way of distinguishing them (b) this issue seems orthogonal to k-complexity vs. alt-complexity.
That gives me a somewhat clearer picture. (Thanks!) It sounds like the idea is that we have one machine that dovetails through everything and separates them into bins according to their behavior (as revealed so far), and a second machine that picks a bin.
Presumably the bins are given some sort of prefix-free code, so that when a behavior-difference is revealed within a bin (e.g. after more time has passed) it can be split into two bins, with some rule for which one is “default” (e.g., the leftmost).
I buy that something like this can probably be made to work.
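The dovetail-and-bin machine described above can be sketched in code. This is a toy of my own construction, not anything from the thread: the “programs” are stand-in bit generators rather than (gauge, wavefunction) simulations, and the names `make_program` and `dovetail_bins` are invented. For a finite set of programs, running each for the same number of steps stands in for true interleaved dovetailing.

```python
from itertools import islice

def make_program(seed):
    """Hypothetical stand-in for simulating physics under one gauge:
    emits a stream of observable bits (here, from a toy LCG)."""
    def gen():
        x = seed
        while True:
            x = (1103515245 * x + 12345) % (2 ** 31)
            yield (x >> 16) & 1
    return gen()

def dovetail_bins(seeds, steps):
    """Run every program for `steps` steps and bin programs by the
    behavior (observable prefix) they have revealed so far."""
    bins = {}
    for s in seeds:
        prefix = tuple(islice(make_program(s), steps))
        bins.setdefault(prefix, []).append(s)
    return bins

bins = dovetail_bins(range(20), steps=8)
# Programs in the same bin are indistinguishable after 8 observables.
# Running longer can only split bins further, never merge them, which
# is what makes a prefix-free "split the bin, leftmost is default"
# coding scheme possible.
```

Note the refinement property the second machine relies on: the binning at more steps is always a refinement of the binning at fewer steps.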
In the case of physics, what it suggests is the hypothesis
reality is the thing that iterates over all gauges, and applies physics to cluster #657
as competing with a cluster of longer hypotheses like
reality applies physics to gauge #25098723405890723450987098710987298743509
Where that first program beats most elements of the cluster (b/c the gauge-identifier is so dang long in general), but the cluster as a whole may still beat that first program (by at most a fixed constant factor, corresponding roughly to the length of the iterating machine).
(And my current state is something like: it’s interesting that you can think of yourself as living in one particular program, without losing more than a factor of «fixed constant» from your overall probability on what you saw. But I still don’t see why you’d want to, as opposed to realizing that you live in lots and lots of different programs.)
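Toy arithmetic for that constant-factor claim. All program lengths here are invented for illustration, and exact rationals are used because 2^-length underflows floats at these sizes:

```python
from fractions import Fraction

def prior(length_bits):
    """Mass assigned by a 2^-length prior to a program of this length."""
    return Fraction(1, 2 ** length_bits)

iterator_len = 50                # "iterate over all gauges, pick cluster #657"
base_len = 30                    # "apply physics to gauge <id>", minus the id
gauge_id_lens = range(200, 300)  # each hard-coded gauge identifier is enormous

p_iterator = prior(iterator_len)
cluster = [prior(base_len + g) for g in gauge_id_lens]
p_cluster = sum(cluster)

# The short iterating program beats every individual cluster element,
# because each element pays for its dang-long gauge identifier.
assert all(p_iterator > p for p in cluster)

# The cluster's *total* mass can differ from the iterator's mass only by
# a bounded factor; with these invented lengths the cluster is far smaller.
ratio = p_cluster / p_iterator
```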
Presumably the bins are given some sort of prefix-free code, so that when a behavior-difference is revealed within a bin (e.g. after more time has passed) it can be split into two bins, with some rule for which one is “default” (e.g., the leftmost).
I only just realized that you’re mainly thinking of the complexity of semimeasures on infinite sequences, not the complexity of finite strings. I guess that should have been obvious from the OP; the results I’ve been citing are about finite strings. My bad! For semimeasures, this paper proves that there actually is a non-constant gap between the log-total-probability and description complexity. Instead the gap is bounded by the Kolmogorov complexity of the length of the sequences. This is discussed in section 4.5.4 of Li & Vitányi.
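If I’m reading that result right (this is my paraphrase of the cited claim, with details unchecked; I’m borrowing the Li & Vitányi notation where M is the universal semimeasure, KM the corresponding description complexity, and ℓ(x) the length of x), the statement is roughly:

```latex
% Gap between description complexity and log-total-probability for
% semimeasures: not O(1), but bounded by the complexity of the length.
0 \;\le\; KM(x) - \bigl(-\log \mathbf{M}(x)\bigr) \;\le\; K(\ell(x)) + O(1)
```

The lower bound is just the definitional fact that M(x) ≥ 2^{-KM(x)}; the upper bound is the non-constant part, since K(ℓ(x)) grows (slowly and non-monotonically) with the length.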
I’m pretty confident that the set of compatible (gauge, wavefunction) pairs is computably enumerable, so I think that the coding theorem should apply.
There’s an insight that I’ve glimpsed (though I still haven’t checked the details): we can guarantee that it’s possible to name the ‘correct’ (gauge, wavefunction) cluster without necessarily having to name any single gauge (which would be prohibitively expensive). The trick is to dovetail all the (gauge, wavefunction) pairs (in some representation where you can computably detect compatibility), bin the compatible ones together, and then give a code that points to the desired cluster, where the length of that code goes according to cluster size, and will in general be much shorter than a full gauge specification.
This does depend on your ability to tell when two programs belong in the same cluster, which I’m pretty sure can be done for (gauge, wavefunction) pairs (in some suitable representation), but which of course can’t be done in general.
Cool, thanks.
Thanks for the reference!
Seems good to edit the correction into the post, so readers know that in some cases the gap isn’t constant.
How does this correctness check work?
I usually think of gauge freedom as saying “there is a family of configurations that all produce the same observables”. I don’t think that gives a way to say some configurations are correct/incorrect. Rather some pairs of configurations are equivalent and some aren’t.
That said, I do think you can probably do something like the approach described to assign a label to each equivalence class of configurations and do your evolution in that label space, which avoids having to pick a gauge.
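A minimal sketch of that label-space idea, as a toy entirely of my own construction: integer “configurations”, with equivalence modulo 7 standing in for gauge shifts, and the dynamics defined directly on the labels so no gauge representative is ever chosen.

```python
# Toy model: a "configuration" is an integer, and two configurations are
# gauge-equivalent when they differ by a multiple of 7.

def label(config):
    """Canonical label for config's equivalence class (the class itself,
    not any preferred member within it)."""
    return config % 7

def step(lbl):
    """Dynamics defined on labels. Because it never sees a raw
    configuration, it cannot depend on a choice of gauge."""
    return (3 * lbl + 1) % 7

# Well-definedness: equivalent configurations share a label, so evolving
# from any representative of the class gives the same trajectory.
assert label(5) == label(12) == label(-2)
assert step(label(5)) == step(label(12))
```

The real version would need `label` to be computable on actual gauge-field configurations, which is exactly the “computably detect compatibility” requirement from earlier in the thread.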