I think that while a sleek decoding algorithm and a massive look-up table might be mathematically equivalent, they differ markedly in what sort of process actually carries them out, at least from the POV of an observer on the same ‘metaphysical level’ as the process. In this case, the look-up table is essentially the program-that-lists-the-results, and the algorithm is the shortest description of how to get them. The equivalence holds because, in some sense, process and results imply each other. To my mind, this is a bit like the equivalence between space-like information and time-like information, or between a hologram and the surface it’s projected from.
In the end, how are we ever to prefer one kind of description over the other? I can only think that it comes down either to some arbitrary aesthetic appreciation of elegance, or to some kind of match between the form of the description and our own POV: our minds can be described in many ways, but only one corresponds directly with how we observe ourselves and reality, and we want any model to describe our minds with as little reframing as possible.
Now, could someone please tell me if what I have just said makes any kind of sense?!
In the end, how are we to ever prefer one kind of description over the other?
The minimum size of an algorithm will depend on the context in which it is represented. To meaningfully compare minimum algorithm sizes, we must choose a context that represents the essential entities and relationships of the domain under consideration.
The minimum size of an algorithm will depend on the context in which it is represented
Isn’t one of the basic results of Kolmogorov complexity/information theory that algorithms/programs can be converted from one formalism/domain to another with a constant-size prefix/penalty, and hence there will be only a constant additive penalty in the number of bits needed to distinguish the right algorithm in even the most biased formalism?
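For reference, the usual statement of that invariance result, as I understand it: for any two universal description languages $U$ and $V$ there is a constant $c_{U,V}$, depending only on the pair of languages and not on the string being described, such that

$$\lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V} \quad \text{for every string } x,$$

where the constant is essentially the length of a translator (compiler/interpreter) between the two languages.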
Isn’t one of the basic results of Kolmogorov complexity/information theory that algorithms/programs can be converted from one formalism/domain to another with a constant-size prefix/penalty...
I believe that my point holds.
This constant-size prefix becomes part of the context in which the algorithm is represented. One way to think about it is that the prefix creates an interpretation layer which translates the algorithm from its domain of implementation to the substrate domain.
To restate my point in these new terms: the prefix must be chosen so that it gives the algorithms being compared an appropriate model of the domain under consideration. It does not make sense to compare algorithms implemented under different domain models (different prefixes).
For example, if I want to compare the complexity of 3-SAT expressions, then I shouldn’t be considering algorithms in domains that support multiplication.
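To make that concrete with a toy example of my own (hypothetical, just for illustration): the odd-parity constraint on three variables is a single product in a domain with multiplication, but takes four clauses when written as a 3-CNF expression.

```python
from itertools import product

def parity_with_multiplication(x1, x2, x3):
    # In a domain with multiplication, encode True as -1 and False as +1;
    # odd parity is then a single product.
    return x1 * x2 * x3 == -1

def parity_as_3cnf(a, b, c):
    # The same constraint written as a 3-CNF (3-SAT) expression needs four clauses.
    return ((a or b or c) and (a or not b or not c)
            and (not a or b or not c) and (not a or not b or c))

# The two representations agree on every assignment, but their sizes differ,
# which is why the domain of operations has to be fixed before comparing sizes.
for a, b, c in product([False, True], repeat=3):
    signs = [-1 if v else +1 for v in (a, b, c)]
    assert parity_with_multiplication(*signs) == parity_as_3cnf(a, b, c)
print("representations agree on all 8 assignments")
```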
Another way to think of the constant-size prefix is that one can choose any computer language in which to write the program which outputs the string, and then encode a compiler for that language in the prefix.
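A minimal sketch of that idea, assuming Python as the base language (the toy language and the length accounting are made up for illustration): the “prefix” is an interpreter for a little language, written out in the base language, and the description length charged to a string is the interpreter plus the program it runs.

```python
import inspect

def run_toy(program: str) -> str:
    # Toy language: each line "R <char> <count>" appends <char> repeated <count> times.
    out = []
    for line in program.splitlines():
        op, char, count = line.split()
        assert op == "R"
        out.append(char * int(count))
    return "".join(out)

# The "prefix" is the interpreter itself, written out in the base language (Python).
prefix_length = len(inspect.getsource(run_toy))

toy_program = "R a 1000"        # 8 characters describing a 1000-character string
assert run_toy(toy_program) == "a" * 1000

# The description length charged to the string is prefix + program, not program alone.
print("program length in the toy language:", len(toy_program))
print("total description length in the base language:", prefix_length + len(toy_program))
```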
This works fine for theory: after all, K-complexity is not computable, so we really are in the domain of theory here. For practical situations (even stretching the term “practical” to include QM interpretations!), if the length of the prefix is non-negligible compared to the length of the program, then we can get misleading results. (I would love a correction or some help in supporting my intuition here.)
As a result, I think I agree that the choice of representation matters.
However, I don’t agree that there is a principled way of choosing the right representation. There is no such thing as the substrate domain. Phrases such as “the essential entities and relationships of the domain” are too subjective.
...if the length of the prefix is non-negligible compared to the length of the program, then we can get misleading results.
For the purposes of complexity comparisons the prefix should be held constant across the algorithms. You should always be comparing algorithms in the same language.
However, I don’t agree that there is a principled way of choosing the right representation.
You are correct. I only use phrases such as “the essential entities and relationships of the domain” to outline the nature of the problem.
The problem with comparing the complexity of QM interpretations is that our representation of those interpretations is arbitrary. We can only guess at the proper representation of QM. By choosing different representations we could favor one theory or the other as the simplest.
For the purposes of complexity comparisons the prefix should be held constant across the algorithms. You should always be comparing algorithms in the same language.
Oh, that seems sensible. It makes the problem of choosing the language even more acute, though, since now we can ignore the description length of the compiler itself, meaning that even crazy languages (such as the language which outputs the Encyclopedia Britannica with a single instruction) are in contention. The point of requiring the language to be encoded in the prefix, and its length added to the description length, is to prevent us from “cheating” in this way.
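To spell the cheat out with a toy sketch of my own (the stand-in text below is obviously compressible, but imagine it replaced by the real encyclopedia):

```python
# Stand-in for an encyclopedia-sized text (the real point needs an incompressible
# text, so treat this as a placeholder).
big_text = "lorem ipsum " * 100_000

def run_crazy(program: str) -> str:
    # "Crazy" language: the single instruction "E" outputs the whole text.
    return big_text if program == "E" else ""

# If the language comes for free, the description length of the text is one character...
print("program length:", len("E"))

# ...but a self-contained prefix would have to carry the text itself (an arbitrary
# encyclopedia has no shorter specification), so charging for the prefix removes the cheat.
print("charged total:", len("E") + len(big_text))
```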
I had always assumed that it was necessary to allow the prefix to vary. Clearly “abcabcabc” and “aaabbbccc” require different prefixes to express them as succinctly as possible. In principle there’s no clear distinction between a prefix which encodes an entire new language and a prefix which just sets up a function to take advantage of the regularities of the string.
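A quick illustration, using Python as the base language: each string has a short description, but the two short descriptions exploit different regularities, so the “setup” differs between them.

```python
s1 = "abcabcabc"
s2 = "aaabbbccc"

prog1 = '"abc" * 3'                          # repeat-the-block regularity
prog2 = '"".join(c * 3 for c in "abc")'      # repeat-each-character regularity

assert eval(prog1) == s1
assert eval(prog2) == s2
print(len(prog1), "vs", len(prog2))          # neither short program suits the other string
```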
In principle there’s no clear distinction between a prefix which encodes an entire new language and a prefix which just sets up a function to take advantage of the regularities of the string.
Yes, and this is important to see. The split between content and context can be made anywhere, but the meaning of the content changes depending on where the split is made.
If you allow the prefix to change then you are considering string lengths in terms of the base language. This language can bias the result in relation to the problem domain that you are actually interested in.
As I said above:
For example, if I want to compare the complexity of 3-SAT expressions, then I shouldn’t be considering algorithms in domains that support multiplication.