Question: Tegmark, in one of his multiverse papers, suggests that ordering measure by complexity seems to be an explanation for finding ourselves in a simple universe, as well as a possible answer to the question ‘how much relative existence do these structures get?’ My intuition says rather strongly that this is almost assuredly correct. Do you know of any other sane ways of assigning measure to ‘structures’ or ‘computations’ other than complexity?
Could you elaborate? It seems to me that because there are vastly more complex computations than simple ones, we should expect to find ourselves in a complex one. But that, obviously, does not seem to be the case.
If we run each universe-program with probability 2^-L, where L is the length of the program in bits, and additionally assume that no valid program is a prefix of another valid program, then the total probability sums to at most 1 (by Kraft’s inequality). In this setup the shortest programs carry most of the probability weight despite being vastly outnumbered by longer ones, since each extra bit halves an individual program’s weight. I think the same holds for most other reasonable probability distributions over programs that you can imagine.
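A minimal sketch of this idea, assuming a toy prefix-free “language” of my own invention (a complete program is a run of zero or more 0s followed by a single 1, so no program is a prefix of another): it checks Kraft’s inequality numerically and shows that the shortest programs soak up most of the 2^-L measure, both analytically and by sampling fair coin flips.

```python
import random

# Toy prefix-free language: a complete program is any run of '0's
# followed by a single '1' ("1", "01", "001", ...).  No program is a
# prefix of another, so Kraft's inequality applies.
def is_complete(bits: str) -> bool:
    return bits.endswith("1") and bits.count("1") == 1

# Weight each program by 2^-L, where L is its length in bits.
# Kraft's inequality guarantees these weights sum to at most 1.
programs = ["0" * k + "1" for k in range(30)]
weights = [2.0 ** -len(p) for p in programs]
print(f"Kraft sum over the first 30 programs: {sum(weights):.9f}")  # just under 1

# The three shortest programs alone carry 0.875 of the total measure,
# even though longer programs vastly outnumber them.
print(f"weight of the 3 shortest programs: {sum(weights[:3]):.3f}")

# Equivalent sampling view: feed independent fair coin flips to the
# machine until a complete program has been read; an L-bit program is
# then produced with probability exactly 2^-L.
def sample_program() -> str:
    bits = ""
    while not is_complete(bits):
        bits += random.choice("01")
    return bits

samples = [sample_program() for _ in range(10_000)]
print("empirical share of the shortest program:",
      samples.count("1") / len(samples))  # close to 0.5
```

The same pattern holds for any prefix-free program encoding: however many long programs there are, the 2^-L weighting forces most of the measure onto the short ones.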