It looks like Theorem 1 can be improved slightly, by dropping the “only if” condition on $p_{CD}>0$. We can then code up something like Kolmogorov complexity by adding a probability $1/2$ transition from every site to our chosen UTM.
If you only want the weaker statement that there is no stationary distribution, it looks like there’s a cheaper argument: since $\Phi$ is aperiodic and irreducible, the hypothetical stationary distribution $\pi$ is unique. $\Phi$ is closed under the action of $\Delta$, and (2) implies that for any $g\in\Delta$, the map $\Gamma_g$ is an automorphism of the Markov chain. If the (infinite) transition matrix is $T$, then $\Gamma_g$ can be considered as a permutation matrix with (abusing notation) $\Gamma_g^{-1}T\Gamma_g=T$. Then $T\Gamma_g\pi=\Gamma_g\pi$, and so $\Gamma_g\pi=\pi$ by uniqueness. So $\pi$ is constant on the orbits of $\Gamma_\Delta$, which are all countably infinite. Hence $\pi$ is everywhere $0$, a contradiction.
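To spell out the middle step: $\Gamma_g^{-1}T\Gamma_g=T$ gives $T=\Gamma_g T\Gamma_g^{-1}$, so

$$T(\Gamma_g\pi)=\Gamma_g T\Gamma_g^{-1}\Gamma_g\pi=\Gamma_g(T\pi)=\Gamma_g\pi,$$

i.e. $\Gamma_g\pi$ is again stationary, and uniqueness forces $\Gamma_g\pi=\pi$.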
The above still holds if (2) is restricted to only hold for a group $G<\Delta$ such that every orbit under $\Gamma_G$ is infinite.
I think the above argument shows why (2) is too strong; we shouldn’t expect the world to look the same if you pick a “wrong” (i.e. complicated) UTM to start off with. Weakening (2) might mean asserting only something like $p_{CD}=\sum_{\Gamma}\mu(\Gamma)\,p_{\Gamma(C)\Gamma(D)}$. To do this, we might define the measures $\mu$ and $p$ together (i.e. finding a fixed point of a map from pairs $(p,\mu)$ to $(p',\mu')$). In such a model, $\mu$ constrains the transition probabilities and $p'$ is stationary; it’s not clear how one might formalise a derivation of $\mu'$ from $p'$, but it seems plausible that there is a canonical way to do it.
It may still be possible to get a unique (up to scaling) invariant measure (with infinite sum) over the UTMs by invoking something like the Krein-Rutman theorem and applying it to the transition operator. I haven’t yet verified that all the conditions hold.
This measure would then be an encoding-invariant way to compare UTMs’ “intrinsic complexity” in the sense of “number of bits needed to simulate”.
Consider the function $a(M_1,M_2)=2^{-d(M_1,M_2)-d(M_2,M_1)}$, where $d(M_1,M_2)=\min\{|x| : x\in\{0,1\}^*,\ \forall y\in\{0,1\}^*\colon M_1(xy)=M_2(y) \text{ unless neither of these halts}\}$. The reversible Markov chain with transition probabilities $p(M_1,M_2)=\frac{a(M_1,M_2)}{\sum_{M_2'}a(M_1,M_2')}$ has a bounded positive invariant measure $\mu(M)=\sum_{M'}a(M,M')$. Of course, as the post showed, the total measure is infinite. Also, because the chain is reversible and transient, the invariant measure is far from unique; indeed, for any machine $M_0$, the measure $\mu(M)=p^{(0)}(M,M_0)+2\sum_{n=1}^{\infty}p^{(n)}(M,M_0)$ will be a bounded positive invariant measure.
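(The invariance here is just the symmetry of $a$: $\mu(M_1)\,p(M_1,M_2)=a(M_1,M_2)=a(M_2,M_1)=\mu(M_2)\,p(M_2,M_1)$, i.e. detailed balance.) As a sanity check of the algebra, here is a minimal Python sketch on a toy finite chain, using a random integer matrix as a stand-in for the real (infinite, uncomputable) $d$; the matrix is purely an assumption for illustration:

import numpy as np

# Made-up finite "simulation cost" matrix standing in for d(M1, M2).
rng = np.random.default_rng(0)
n = 6
d = rng.integers(1, 10, size=(n, n))
np.fill_diagonal(d, 0)                 # d(M, M) = 0: the empty prefix works

a = 2.0 ** -(d + d.T)                  # a(M1,M2) = 2^(-d(M1,M2)-d(M2,M1)); symmetric
mu = a.sum(axis=1)                     # mu(M) = sum_{M'} a(M, M')
p = a / mu[:, None]                    # p(M1,M2) = a(M1,M2) / sum_{M2'} a(M1,M2')

assert np.allclose(mu @ p, mu)         # invariance: mu P = mu
assert np.allclose(mu[:, None] * p, (mu[:, None] * p).T)  # detailed balance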
It seems tempting (to me) to try to get a probability measure by modding out the output-permutations (which the post uses to show this isn’t possible for the full set of UTMs). To this end, consider the set of UTMs that have no output. (These will be unaffected by the output-permutations.) We can try to use the induced sub-digraph on these to build a probability measure $\mu$. The measure of each UTM should be a function of the rooted edge-labeled digraph $G_M$ rooted at that UTM.
The most natural topology on rooted edge-labeled infinite digraphs is the one generated by the sets $\{G : G' \text{ is isomorphic to an induced rooted edge-labeled subgraph of } G\}$, where $G'$ ranges over finite rooted edge-labeled digraphs; we could hope that $\mu$ is continuous in this topology. Unfortunately, this can’t work: if $\mu(M)>0$ then $\mu^{-1}((\tfrac{1}{2}\mu(M),\infty))$ must be open, and so it must contain some finite intersection of the generating sets; however, every such intersection that’s nonempty (as this one is) contains infinitely many UTMs, each of measure greater than $\tfrac{1}{2}\mu(M)$, so the total measure must be infinite as well.
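Spelled out: if $U$ is the (infinite) set of UTMs contained in that nonempty basic open set, then

$$\sum_{M'}\mu(M')\;\ge\;\sum_{M'\in U}\mu(M')\;\ge\;|U|\cdot\tfrac{1}{2}\mu(M)\;=\;\infty.$$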
These results are still a bit unsatisfying.
The first half constructs an invariant measure which is then shown to be unsatisfactory, because UTMs can rank arbitrarily high while only being good at encoding variations of themselves. This is mostly the case because the chain is transient; if it were positive recurrent then the measure would be finite, and UTMs ranking high would have to be good at encoding (and being encoded by) the average UTM rather than just a select family of UTMs.
The second half looks at whether we can get better results (i.e. a probability measure) by restricting our attention to output-free “UTMs” (though I misspoke: these are not actually UTMs, but rather universal semidecidable languages; we can call them USDLs). It concludes that we can’t if the measure is to be continuous on the given digraph. However, this is an awkward notion of continuity: a low-complexity USDL whose behavior is tweaked very slightly but in a complex way may be very close in the given topology, but should have measure much lower than the starting USDL. So I consider this question unanswered.
To understand what the measure $\mu$ constructed from $d$ will reward, here’s the sort of machine that comes close to the supremum $\sup_M\mu(M)=3$:
Let $M_0$ be an arbitrary UTM. Now consider the function $r(n)=n-2^{\lfloor\lg n\rfloor}$ (or, really, any function $r:\mathbb{N}^+\to\mathbb{N}_0$ with $r(n)<n$ that visits every nonnegative integer infinitely many times), and let $L=\{x\in\{0,1\}^* : |x|>2,\ x_{|x|-1}=x_{r(|x|-1)},\ x_{|x|-2}=x_{r(|x|-2)}\}$. (The indices here are zero-based.) Choose $x_0\in L$ such that $x_0$ has no proper prefix in $L$. Then construct the UTM $M$ that does:
repeat:
    s := ""
    while s not in L:
        # if there is no next character, halt
        s := s + readchar()
    if s == x0:
        break
M0()
This $M$ will have $\mu(M) > 3 - 2^{-|x_0|} + d(M_0,M)\,2^{-|x_0|-d(M_0,M)}$.
$M$ here is optimized for building up internal states (which are then UTMs that are efficiently encoded), while also being very easy to reset from these internal states; in other words, being easy to “encode” from the UTMs it efficiently encodes, using at most 2 bits (an average of $\frac{1+\sqrt{5}}{2}$). This is somewhat interesting, but clearly doesn’t capture the kind of computational expressivity we’re primarily interested in.
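To make the definitions above concrete, here is a short Python sketch (illustrative, not from the post) of $r$, membership in $L$, and a brute-force search for a valid $x_0$; with this $r$ it returns $x_0=000$:

from itertools import product

def r(n: int) -> int:
    # r(n) = n - 2^floor(lg n): strips the leading 1-bit of n, so every
    # nonnegative integer is visited infinitely many times as n grows.
    return n - (1 << (n.bit_length() - 1))

def in_L(x: str) -> bool:
    # Zero-based indices, as in the text: the last two characters must
    # repeat the characters at positions r(|x|-1) and r(|x|-2).
    m = len(x)
    return m > 2 and x[m - 1] == x[r(m - 1)] and x[m - 2] == x[r(m - 2)]

def candidate_x0(max_len: int = 6) -> str:
    # First word of L (shortest, then lexicographically first) none of
    # whose proper prefixes lie in L.
    for m in range(3, max_len + 1):
        for bits in product("01", repeat=m):
            x = "".join(bits)
            if in_L(x) and not any(in_L(x[:k]) for k in range(3, m)):
                return x
    raise ValueError("no candidate found up to max_len")

print(candidate_x0())  # -> 000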
This is a follow-up note to a nice paper of Markus Mueller on the possibility of a machine-invariant notion of Kolmogorov complexity, available here: http://www.sciencedirect.com/science/article/pii/S0304397509006550

Context: At this weekend’s MIRI workshop on logical uncertainty, we were talking about Markus Mueller’s paper on priors over Turing machines and bitstrings (as those seem analogous to priors over logical statements).
Mueller’s original result examined a directed graph structure over prefix computers, where an edge is given by a prefix that causes one computer to simulate another. This gave rise to a Markov chain structure, but Mueller showed in two different ways that this Markov chain was not positive recurrent, and thus that it would not give rise to a non-arbitrary prior.
However, I noted that the Markov chain structure was set up to punish computers that were difficult to simulate, but not to punish computers that were bad at simulating others. Some natural ways to modify the transition probabilities defeated Mueller’s more direct counterexample.
In this draft, however, Jacob and Janos showed that no stationary transition matrix (with nonzero coefficients) on that digraph of universal prefix computers can be positive recurrent, or even null recurrent (going beyond Mueller’s result).