Let’s see if I understand this correctly. You’re redefining a current set to be a mapping from the set of all computable, Turing-complete languages to weights in [0,1]. Except for the first step of the process, every one of those languages will actually carry a non-zero weight, so we’re working with genuinely countably infinite sets here. This might require a stronger hypercomputer to implement than WiaLLL, but I suppose that’s fine for the purely theoretical purposes of these constructions.
Weighting each length-i program by 1/2^i probably doesn’t work: each set of all programs of a given length n ∈ N then still gets total measure 1 (since there are 2^n programs in each of those sets), so the sum over all lengths diverges. But that’s a minor detail; weighting by e.g. 1/4^i should get you a workable result.
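As a quick sanity check of that arithmetic, here is a minimal numerical sketch. It assumes the weight depends only on a program’s length i; the helper name is mine, not part of your construction.

```python
# Minimal sketch: total measure of all programs up to length N under the two
# weightings. There are 2**n programs of each length n >= 1.

def total_measure(weight, max_len):
    """Sum of weight(n) over all 2**n programs of each length n <= max_len."""
    return sum(2**n * weight(n) for n in range(1, max_len + 1))

for N in (10, 20, 40):
    half = total_measure(lambda n: 2.0**-n, N)     # 1/2^i: adds 1 per length, diverges
    quarter = total_measure(lambda n: 4.0**-n, N)  # 1/4^i: adds 2^-n per length, converges
    print(f"N={N}: 1/2^i total = {half:.1f}, 1/4^i total = {quarter:.6f}")
```

The first column grows linearly with N, while the second converges (to 1 when starting at length 1), which is the difference at issue.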
I think this is still attackable, insofar as it’s possible to produce a setup that makes X win over any non-X-like. X would lose to certain other X-likes in each round, but this does not hold in the (I → ∞) limit.
It’s probably not necessary for me to write up the modified attack in full detail; the basic idea is to redefine the categories of Xn programs as follows (see the sketch after the list):
1. Programs that begin with 2n 0 bits. This prefix solely encodes that the program is of category 1; it carries no other data. The rest of the program source encodes a program in some suitable Turing-complete language (the details of which don’t matter for this attack).
2. Programs that begin with exactly j 0 bits for some j ∈ [n, 2n). All such programs are defined to implement X.
3. Programs that fit neither 1 nor 2. All such programs are defined to implement X(n+1).
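Purely as an illustration, here is how that classification could look in code. The function names are hypothetical, and the underlying Turing-complete language used in category 1 is deliberately left abstract, as in the prose above.

```python
# Illustrative sketch of the three categories of X_n programs described above.

def leading_zeros(bits: str) -> int:
    """Length of the initial run of '0' bits in a program's source."""
    count = 0
    for b in bits:
        if b != "0":
            break
        count += 1
    return count

def xn_category(bits: str, n: int) -> str:
    """Describe what an X_n program with source `bits` is defined to implement."""
    j = leading_zeros(bits)
    if j >= 2 * n:
        # Category 1: the 2n-zero prefix is a pure marker carrying no data;
        # the rest of the source encodes an arbitrary program.
        return "program encoded by the source after the 2n-bit prefix"
    if j >= n:
        # Category 2: exactly j in [n, 2n) leading zeros; defined to implement X.
        return "X"
    # Category 3: fewer than n leading zeros; defined to implement X_(n+1).
    return "X_(n+1)"
```

Note that category 1 programs pay a 2n-bit overhead before any actual content, which is what drives the fractions discussed next.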
So Xn’s support for X approaches 0 in the (n → ∞) limit, which is what allows the X-like family to keep a large weight through arbitrarily many iterations, and even in the limit of (I → ∞). However, that support approaches 0 much more slowly than Xn’s support for any non-X-like, which means X still wins over every non-X-like at each step of the simplicity computation.
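To put rough numbers on that, here is a sketch of the fraction of sufficiently long Xn programs falling into each category, under the assumption (which seems to be the intended reading) that support is simply proportional to these fractions:

```python
# Rough fractions of long X_n programs per category, assuming the categories
# are determined by leading-zero counts as defined above.

for n in (1, 2, 4, 8, 16):
    cat1 = 2.0 ** (-2 * n)        # >= 2n leading zeros: all non-X-likes combined
    cat2 = 2.0 ** (-n) - cat1     # n..2n-1 leading zeros: support for X
    cat3 = 1.0 - 2.0 ** (-n)      # < n leading zeros: support for X_(n+1)
    print(f"n={n:2d}  X: {cat2:.3e}  X_(n+1): {cat3:.6f}  non-X-likes (total): {cat1:.3e}")
```

The support for X does go to 0, but the combined support for all non-X-likes goes to 0 roughly a factor of 2^n faster, so X keeps beating each individual non-X-like while almost all of the weight stays within the X-like family.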
Also note that the choice of a prefix length linear in the X-like level is fairly arbitrary. I think it’s sufficient to get the desired effect, but if that turns out to be incorrect, we can of course use any computable function here, which lets us adjust the attack so that the fraction of non-X-likes produced falls much faster.
But perhaps we can define the “low-level index” of a language Y as the infimum of all low-level indices as measured from languages X?
I don’t understand this suggestion. If I understood your previous modifications correctly, all current sets of your process except the first assign non-zero measure to every computable, Turing-complete language, so taking infima over languages from that set wouldn’t get you any useful information. As for X-likes, if you are going to treat those separately from other languages in the simplicity determination process, you’d need to specify some precise way to recognize them. The specific X families I’ve talked about are of course only examples; there are countably infinitely many more families that exhibit similar properties. Could you elaborate on what you meant?