First, remember that Kolmogorov complexity is only well-defined up to a constant, which is determined by your model of computation. So saying that something has a Kolmogorov complexity of 500 bits only makes sense if the model of computation has been defined. In this case, it has—the Universal Turing Machine model. The reason I’m mentioning this is that a particular simulation might have a wildly differing complexity when specified on a Turing Machine as opposed to, say, x86 machine code (which is typically very complex and contains a lot of redundancies, greatly inflating bit counts).
Second, the article addresses why you can have the paradoxical situation of the Universe being low-complexity while specific things in the Universe are of high complexity (using the Earth as an example):
If you want to print out the entire universe from the beginning of time to the end, you only need to specify the laws of physics.
If you want to print out just Earth by itself, then it’s not enough to specify the laws of physics. You also have to point to just Earth within the universe.
This is why you can have computer programs, like those from the demoscene, that seemingly can’t be compressed smaller than thousands of bits, despite existing in a low-complexity Universe. To specify a program, you have to give enough information to pick it out from the sea of other allowed programs.
To specify the Universe, you only have to specify enough information to pick it out from the landscape of all possible Universes. According to string theory (which is a Universal theory in the sense that it is Turing-complete) the landscape of possible Universes is 2^500 or so, which leads to 500 bits of information. Perhaps this is where Eliezer got the figure from (though I admit that I don’t exactly know where he got it from either). Note that this would be an upper bound. It’s possible that the Universe is much simpler than what string theory suggests.
That said, that’s the complexity under the string theory model, not the Turing machine model. So the Kolmogorov complexity under the Turing machine model would be less than 500 + (the amount of information needed to specify string theory under the Turing machine model). The latter would also probably be a few hundred bits (string theory’s theoretical core is quite simple once you strip away all the maths that is needed for doing calculations).
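To spell out the arithmetic behind this (my own gloss, not something stated in the comment): picking one universe out of N equally-weighted candidates costs log2(N) bits, and a Turing machine description can always be assembled by chaining an interpreter for the theory with that index, so the bound looks roughly like:

```latex
% Indexing cost: one vacuum out of ~2^500 candidates
\log_2\!\bigl(2^{500}\bigr) = 500 \text{ bits}

% Chaining descriptions (subadditivity, up to an additive constant):
K_{\mathrm{TM}}(\text{universe}) \;\le\; K_{\mathrm{TM}}(\text{string theory}) + 500 + O(1)
```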
So Eliezer might be wrong about that particular figure, but it would surprise me if it were many orders of magnitude off the mark.
To specify the Universe, you only have to specify enough information to pick it out from the landscape of all possible Universes
Of course not. You have to specify the landscape itself; otherwise it’s like saying “page 273 of [unspecified book]”.
According to string theory (which is a Universal theory in the sense that it is Turing-complete)
As far as I can see, that is only true in the sense that string theory allows Turing machines to exist physically. That’s not the kind of Turing completeness you want. You want to know that string theory itself (or whatever is actually the ultimate physical theory) is Turing computable, not requiring hypercomputation, because K-complexity doesn’t work otherwise. And the computability of physics is far from a given:

https://en.m.wikipedia.org/wiki/Computability_in_Analysis_and_Physics

Note that the fact that a theory might consist of a small number of differential equations is quite irrelevant, because any one of those equations could be uncomputable.
First, remember that Kolmogorov complexity is only well-defined up to a constant, which is determined by your model of computation.
And that is why it makes little sense to talk about the complexity of one object without comparing it to, or using it in relation to, other objects, as the OP asked. If you implement the laws of physics in Brainfuck, on an x86 instruction set, or on a machine specifically dedicated to having the laws of physics as a single command, you can get wildly different complexities, even down to a few bits (think of a UTM that has a specific instruction that emulates the laws of physics).
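Here is a toy illustration of that point, with two made-up mini-languages (hypothetical, invented for this comment, not any real instruction set): the same output needs a long description in a generic language but a single symbol in a language that happens to have it built in.

```python
# Two toy "description languages" for producing strings.
# In L_GENERIC, a description is just a literal spelling of the output.
# In L_DEDICATED, the single opcode "P" expands to our target string,
# mimicking a machine with the "laws of physics" wired in as one instruction.

TARGET = "laws-of-physics-simulation-output" * 10  # stand-in for a long output

def run_generic(description: str) -> str:
    """L_GENERIC: the description *is* the output, so its length equals the output's."""
    return description

def run_dedicated(description: str) -> str:
    """L_DEDICATED: 'P' is a built-in that emits TARGET; any other symbol is literal."""
    return "".join(TARGET if symbol == "P" else symbol for symbol in description)

# Shortest known descriptions of TARGET in each language:
generic_description = TARGET          # no shortcut available
dedicated_description = "P"           # one symbol suffices

assert run_generic(generic_description) == TARGET
assert run_dedicated(dedicated_description) == TARGET

print(len(generic_description))    # 330 symbols
print(len(dedicated_description))  # 1 symbol
```

The invariance theorem only promises that the gap between two languages is bounded by a constant depending on that pair of languages, and here the constant is essentially the whole program.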
Does it even make sense to talk of relative Kolmogorov complexity when we’re dealing with finite strings? For any two finite strings A and B, it is always possible to find some model of computation in which A is less complex than B (and vice versa, of course).
It does make sense to talk of relative KC, actually, but it’s not trivial to see why: http://en.wikipedia.org/wiki/Kolmogorov_complexity#Invariance_theorem

In particular, “If K1 and K2 are the complexity functions relative to description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that for all strings s, −c ≤ K1(s) − K2(s) ≤ c.”
Yes, I’m aware of this. It’s still true, I believe, that for any two finite strings s1 and s2 one can find description languages L1 and L2 (with complexity functions K1 and K2) such that
K1(s1) > K1(s2)
and
K2(s1) < K2(s2).
So there is no language-independent sense in which s1 is more complex than s2 (or vice versa). To make the claim more concrete, consider the fact that for any finite string, one could construct a Universal Turing Machine that outputs that string when given the input 0 (the string is essentially hard-coded into the structure of the machine). This corresponds to a description language in which that string has minimal K-complexity.
This is all compatible with the invariance theorem. As a simple illustration, let the constant c associated with L1 and L2 be 5, let K1(s1) be 10, K1(s2) be 9, K2(s1) be 6 and K2(s2) be 8. In this example, both the inequalities I’ve given above are true, and the invariance theorem isn’t violated.
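Plugging those numbers in, just to check the arithmetic of that illustration (a throwaway script, nothing more):

```python
c = 5                      # the constant from the invariance theorem for (L1, L2)
K1 = {"s1": 10, "s2": 9}   # complexities under description language L1
K2 = {"s1": 6,  "s2": 8}   # complexities under description language L2

assert K1["s1"] > K1["s2"]   # s1 looks more complex than s2 under L1
assert K2["s1"] < K2["s2"]   # s2 looks more complex than s1 under L2

# The invariance bound |K1(s) - K2(s)| <= c holds for both strings (gaps of 4 and 1):
assert all(abs(K1[s] - K2[s]) <= c for s in ("s1", "s2"))
```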
First, remember that Kolmogorov complexity is only well-defined up to a constant, which is determined by your model of computation. [...]
Right, if you do something like (say) take away the addition instruction, then the shortest program might get longer because it has to emulate addition using subtraction and negation (or use a totally different approach, or include a translation pass that expands the additions into negate+subtract, or include a universal Turing machine that runs the original machine, or… yeah).
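For instance, on a machine that kept negation and subtraction but dropped addition, the emulation is a one-liner (a trivial sketch, not a claim about any particular instruction set):

```python
def add_without_plus(a: int, b: int) -> int:
    """Emulate a + b using only unary negation and subtraction."""
    return -((-a) - b)   # -(-a - b) == a + b

assert add_without_plus(2, 3) == 5
assert add_without_plus(-7, 4) == -3
```

Each such workaround costs extra program bits, and those extra bits are the sort of overhead the machine-dependent constant accounts for.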
Second, the article addresses why you can have the paradoxical situation of the Universe being low-complexity while specific things in the Universe are of high complexity.
In this case I care about the complexity of the laws that govern the change in state (positions of atoms or values of the wave functions) with respect to time. I will not make you pay for the presumably absurd amount of data required for the initial state of >10^(82) atoms. That would make the complexity of the laws a laughably negligible part of the total. I realize that this is, in a sense, an arbitrary distinction, but… that’s the question I want to know the answer to.
According to string theory (which is a Universal theory in the sense that it is Turing-complete) the landscape of possible Universes is 2^500 or so, which leads to 500 bits of information. Perhaps this is where Eliezer got the figure from (though I admit that I don’t exactly know where he got it from either).
Interesting guess at where the number could have come from. I just assumed he tallied up various aspects of the standard model somehow and got an answer between 450 and 510.
I will not make you pay for the presumably absurd amount of data required for the initial state of >10^(82) atoms.
You don’t have any atoms in the initial state—nor anything that can reasonably be called matter. My (very ignorant) guess is that we won’t even know what it takes to specify the initial state before we have a unified GR+QFT theory.
According to string theory (which is a Universal theory in the sense that it is Turing-complete) the landscape of possible Universes is 2^500 or so, which leads to 500 bits of information.
If one wishes to describe oneself in a particular universe, then, assuming MWI, fixing the universe is peanuts, complexity-wise, compared to fixing the appropriate Everett branch. The number of bits there is just astounding, it would seem to me.
The standard model is over here, see page 36: http://arxiv.org/pdf/hep-th/0610241v1.pdf

Uh, wow, that’s a somewhat large equation. It has something like 500 terms. Seems… inconsistent with physicists seeing beauty in physics.

You can see it in just five lines here, page 1. And an even more compact formulation would just list the symmetry groups, the various fields and how they transform under each group, and would then stipulate that the Lagrangian contains every possible renormalizable term (which is a principle in the construction of such theories, since renormalizable terms that aren’t included get generated anyway).
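For readers who haven’t seen it, the sort of compact form being gestured at is the familiar one-line schematic below (my sketch of the standard shorthand, with sums over gauge fields, fermion species, and flavour indices left implicit; it is not a transcription of the linked page):

```latex
\mathcal{L}_{\mathrm{SM}} =
    -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}                                  % gauge-field kinetic terms
    + i\,\bar{\psi}\,\gamma^{\mu} D_{\mu}\,\psi                          % fermion kinetic terms and gauge couplings
    + \bigl(\bar{\psi}_i\, y_{ij}\, \psi_j\, \phi + \mathrm{h.c.}\bigr)  % Yukawa couplings
    + |D_{\mu}\phi|^{2} - V(\phi)                                        % Higgs kinetic term and potential
```

Expanding all the gauge and flavour indices (plus gauge-fixing and ghost terms) is roughly what turns this shorthand into the hundreds of terms mentioned above.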