Why would that be more common than “create a giant black hole at the same coordinates”? Or an array of black holes spaced 10 light years apart, or the like; you get the idea.
You need to establish that little differences would be more common than the giant differences I described.
You need to establish that little differences would be more common than the giant differences I described.
No, I don’t, because they don’t have to be more common. They just have to be common. I didn’t include black holes, etc., in the simple version because they’re not necessary to get the result. You could include them in the category of variations, and the conclusion would get stronger, not weaker. For most observers in that universe, there was always a giant black hole there, and that’s all there is to it.
The set of small variations is a multiplier on the abundance of universes which look like Lawful Universe #N. The larger the set of small variations, the bigger that multiplier gets, for everything.
No, I don’t, because they don’t have to be more common.
You’ve been trying to show that “universes where the correct generalization is the simple rules are more common than universes where the correct generalization is the complex rules”, and you’ve been arguing that this remains true even when we exclude the longer programs that are exactly equivalent to the shorter programs.
That, so far, is an entirely baseless assertion. You have only described, rather vaguely, some of the more complex programs that look like simpler programs, without demonstrating that those programs are not grossly outnumbered by the more complex programs that look very obviously complex: for example, programs encoding a universe with apparent true randomness, implemented using the extra bits.
That being said, our universe does look like it has infinite complexity (due to apparent non-determinism), and as such, infinite complexity of that kind is not improbable. For example, I can set up a short TM tape prefix that copies all subsequent bits from the program tape to the output tape. If you pick a very long program at random, it’s not very improbable that it begins with this short prefix, and thus corresponds to a random universe with no order to it whatsoever. The vast majority of long programs beginning with this prefix will not correspond to any shorter program, since random data is not compressible on average. Perhaps a point could be made that most very long programs correspond to universes with simple probabilistic laws.
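To make that counting concrete, here is a toy sketch (illustrative only: the one-bit “copy” prefix and the tiny lengths below are made-up stand-ins for a real universal TM and realistically long programs):

```python
# Toy "universal machine": programs are bit strings; a program that starts
# with the hypothetical one-bit copy prefix "1" outputs its remaining bits
# verbatim.  This stands in for the short TM prefix described above.
COPY_PREFIX = "1"

def run(program: str):
    if program.startswith(COPY_PREFIX):
        return program[len(COPY_PREFIX):]   # copy the rest to the output tape
    return None                             # other programs: not modelled here

L = 12  # total program length used for the toy count

# Fraction of length-L programs that begin with the copy prefix:
# exactly 2**(L - len(prefix)) of the 2**L programs, i.e. a fixed fraction.
sharing = 2 ** (L - len(COPY_PREFIX))
print(f"length-{L} programs starting with the copy prefix: {sharing} of {2**L}")

# Pigeonhole count behind "random data is not compressible on average":
# there are only 2**n - 1 programs shorter than n bits, so at most 2**n - 1
# of the 2**n possible n-bit outputs can have any description shorter than n.
n = L - len(COPY_PREFIX)
shorter_programs = sum(2**k for k in range(n))   # = 2**n - 1
print(f"{n}-bit outputs: {2**n}; programs shorter than {n} bits: {shorter_programs}")
```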
You have only described, rather vaguely, some of the more complex programs that look like simpler programs, without demonstrating that those programs are not grossly outnumbered by the more complex programs that look very obviously complex: for example, programs encoding a universe with apparent true randomness, implemented using the extra bits.
No, that’s not it at all. I have described how, for every complex program that looks complex, you can construct a large number of equally complex programs that look simple, and why you should therefore expect simple models to be much more common than complex ones.
You’d need to show that for every complex-looking program you can make at least n simple-looking programs which do not overlap with the simple-looking programs you construct for any other complex-looking program. (Because it won’t do if, for every complex-looking program, you construct the same, say, 100 simple-looking programs.) I don’t even see a vague sketch of an argument for that.
edit: Hell, you haven’t even defined what constitutes a complex-looking program. There’s a trivial example: all programs beginning with the shortest prefix that copies all subsequent program bits verbatim onto the output tape. These programs are complex-looking in the sense that the vast majority of them have no representation simpler than themselves. Such programs are also incredibly numerous.
edit2: Also, the whole argument completely breaks down at infinity. Observe: for every even integer n, I can construct 10 odd integers (10n+1, 10n+3, …, 10n+19). Does that mean a randomly chosen integer is ten times as likely to be odd as even? No.
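The integer example is easy to check numerically; a minimal sketch (my own illustration, assuming the construction 10n+1, 10n+3, …, 10n+19 for each even n):

```python
def odd_images(n: int) -> list[int]:
    """Ten odd integers built from an even integer n: 10n+1, 10n+3, ..., 10n+19."""
    assert n % 2 == 0
    return [10 * n + k for k in range(1, 20, 2)]

# Images for distinct even integers never collide (10n+19 < 10(n+2)+1),
# so this really is an injective 10-to-1 construction ...
evens = range(0, 1000, 2)
images = [x for n in evens for x in odd_images(n)]
assert len(images) == len(set(images))

# ... and yet the natural density of odd integers does not budge.
N = 1_000_000
odds = sum(1 for i in range(1, N + 1) if i % 2)
print(f"fraction of odd integers up to {N}: {odds / N:.3f}")   # ~0.500
```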
Because it won’t do if, for every complex-looking program, you construct the same, say, 100 simple-looking programs.
That is exactly what I’ve done, and it’s sufficient. The whole point is to justify the Kolmogorov measure for apparent universe probability, starting from the assumption that all mathematical-object universes are equally likely. Demonstrating that the number of additional copies that can be made of a simpler universe, relative to the more complex one, grows exponentially with the difference in Kolmogorov complexity is sufficient, and that is what I have done.
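For what it’s worth, the counting behind that claim can be made concrete in a toy setting (a sketch of the standard padding argument with made-up lengths, not a transcript of the argument above): every k-bit core has 2^(n-k) length-n extensions, so at any fixed length n the ratio of copies between two cores is exponential in the difference of their lengths.

```python
def extensions(core_length: int, total_length: int) -> int:
    """Number of bit strings of length total_length that begin with a fixed
    core of length core_length: one per choice of the remaining bits."""
    assert total_length >= core_length
    return 2 ** (total_length - core_length)

k_simple, k_complex = 20, 35   # hypothetical lengths of a simple and a complex core
n = 100                        # fixed total length at which copies are counted

ratio = extensions(k_simple, n) // extensions(k_complex, n)
print(ratio)                        # 2**(35 - 20) = 32768
print(2 ** (k_complex - k_simple))  # same number, independent of n
```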
You know, it’d be a lot more helpful if this were anything remotely close to “done”, rather than vaguely handwaved with some sort of fuzzy (mis)understanding of the terms under discussion at its core. What does “difference in Kolmogorov complexity” even mean when your program of length L has no equivalent of length less than L? If it has no simpler equivalent, its Kolmogorov complexity is just L.
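For reference, the standard definition in play here, with U a fixed universal machine:

```latex
% Kolmogorov complexity of a string x relative to a universal machine U:
% the length of the shortest program that makes U output x.
K_U(x) = \min\{\, |p| : U(p) = x \,\}
```

So a “difference in Kolmogorov complexity” is defined between outputs; for a program of length L whose output has no shorter description, K of that output is indeed just L, up to the usual machine-dependent constant.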
Given a program describing some “simple rules” (whatever that means, anyway), one can likewise make a large number of variations where, instead of a single photon being created somewhere obscure or under some hard-to-reach conditions, photons are created on a regular lattice over some space of conditions, with some arbitrarily chosen spacing of the lattice points. That is very noticeable, and does not locally look like any “simple rules” to much of anyone.
edit: note that most definitions of Turing machines do not have pointers, and the head moves one cell at a time, which actually makes it quite nontrivial to make highly localized, surgical changes to the data, especially in the context of a program that applies the same rules everywhere. So it is not obviously the case that a single point change to the world takes less code than something blatantly obvious to the inhabitants.
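As a toy illustration of the head-movement point (entirely my own sketch, not a construction from the thread): on a machine whose head only steps one cell per move, even “flip the bit at cell k” needs machinery whose size depends on how k is encoded; the naive version below spends one state per cell walked over, so its transition table grows linearly with k.

```python
def flip_cell_machine(k: int) -> dict:
    """Transition table of a toy one-tape machine that walks right k cells
    (one state per step, since the head moves one cell at a time), flips the
    bit it finds there, and halts.  Naive unary addressing: the target
    position is baked directly into the states."""
    table = {}
    for i in range(k):
        nxt = f"walk_{i + 1}" if i + 1 < k else "flip"
        for bit in "01":
            table[(f"walk_{i}", bit)] = (bit, "R", nxt)   # keep the bit, move right
    table[("flip", "0")] = ("1", "R", "halt")
    table[("flip", "1")] = ("0", "R", "halt")
    return table

for k in (5, 50, 500):
    print(k, "->", len(flip_cell_machine(k)), "transitions")   # grows with k
```

A binary counter written on the tape can cut the state count to roughly log k plus bookkeeping, but the address of the change still has to be paid for somewhere, which is the point being made.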
This doesn’t look remotely like a mathematical proof, though.
Who said anything about a mathematical proof? I linked a more formal exposition of the logic in a more abstract model elsewhere in this comment thread; this is an application of that principle.