You’d need to show that for every complex-looking program, you can construct at least n simple-looking programs that do not overlap with the simple-looking programs you construct for any other complex-looking program. (Because it won’t do if, for every complex-looking program, you construct the same, say, 100 simple-looking programs.) I don’t see even a vague sketch of an argument for that.
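To state that demand precisely (my notation, nothing from the original argument): write $\mathcal{C}$ for the complex-looking programs, $\mathcal{S}$ for the simple-looking ones, and $S_c \subseteq \mathcal{S}$ for the witness set constructed for $c \in \mathcal{C}$. What’s needed is

```latex
\[
|S_c| \ge n \ \text{for every } c \in \mathcal{C},
\qquad
c \ne c' \implies S_c \cap S_{c'} = \emptyset ,
\]
% Disjointness is what makes the count add up: it forces
% |S| >= n * |C| among programs of the lengths in question.
```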
edit: Hell, you haven’t even defined what constitutes a complex-looking program. There’s a trivial example: all programs beginning with the shortest prefix that copies all subsequent program bits verbatim onto the output tape. These programs are complex-looking in the sense that the vast majority of them have no representation simpler than themselves. They are also incredibly numerous.
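The “vast majority” step is the standard incompressibility count; a minimal sketch in Python (assuming a binary program alphabet, nothing machine-specific):

```python
# At most 2**(L-k) - 1 programs are shorter than L - k, and each one
# outputs at most one string, so fewer than a 2**(-k) fraction of the
# 2**L strings of length L can be compressed by k or more bits.

def compressible_fraction_bound(L: int, k: int) -> float:
    short_programs = 2 ** (L - k) - 1      # programs of length < L - k
    return short_programs / 2 ** L         # always below 2**(-k)

for k in (1, 2, 4, 8):
    print(k, compressible_fraction_bound(64, k))
```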
edit2: Also, the whole argument completely breaks down at infinity. Observe: for every even integer n, I can construct 10 odd integers (10n+1, 10n+3, …, 10n+19), with no overlap between the sets built for distinct even integers. Does that mean a randomly chosen integer is likely to be odd? No.
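A quick sanity check of the construction and of the punchline (my Python, reading the “…” as running up to 10n+19):

```python
# Each even n gets 10 odd witnesses 10n+1, 10n+3, ..., 10n+19. Distinct
# evens give disjoint witness sets (consecutive blocks sit 20 apart),
# yet the density of odd integers is still 1/2, not 10/11.

def witnesses(n: int) -> set:
    assert n % 2 == 0
    return {10 * n + j for j in range(1, 20, 2)}   # 10 odd integers

evens = list(range(0, 1000, 2))
sets = [witnesses(n) for n in evens]
assert all(len(s) == 10 for s in sets)
assert len(set().union(*sets)) == 10 * len(evens)  # pairwise disjoint

N = 10 ** 6
print(sum(i % 2 for i in range(N)) / N)            # -> 0.5
```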
Because it won’t do if, for every complex-looking program, you construct the same, say, 100 simple-looking programs.
That is exactly what I’ve done, and it’s sufficient. The whole point is to justify the Kolmogorov measure for apparent-universe probability, starting from the assumption that all mathematical-object universes are equally likely. Demonstrating that the number of additional copies that can be made of a simpler universe, relative to a more complex one, is in direct proportion to the difference in their Kolmogorov complexities, which is what I have done, is sufficient.
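Concretely, the copy count here is the usual padding argument; a sketch, under the assumption that the machine ignores any bits after the end of the active program:

```latex
\[
\#\{\, p' : p' = p\,b,\ b \in \{0,1\}^{\Delta} \,\} \;=\; 2^{\Delta}
\]
% A length-L program p has 2^Delta inert-padding copies at each length
% L + Delta; summing uniform weight over these copies is what recovers
% the 2^{-K} weighting of the underlying universe.
```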
You know, it’d be a lot more helpful if it were anything remotely close to “done”, rather than vaguely handwaved with some sort of fuzzy (mis)understanding of the terms under discussion at its core. What does “difference in Kolmogorov complexity” even mean when your program of length L has no equivalent of length < L? If it has no simpler equivalent, its Kolmogorov complexity is simply L.
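(For reference, the standard definition, relative to a fixed universal machine U:)

```latex
\[
K_U(x) \;=\; \min\{\, |p| : U(p) = x \,\}
\]
% If some length-L program produces x and no shorter one does, the
% minimum is attained at L, i.e. K_U(x) = L.
```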
Given a program describing some “simple rules” (whatever that means, anyway), one can make a likewise large number of variations where, instead of a single photon being created somewhere obscure or under some hard-to-reach conditions, photons are created on a regular lattice with some randomly chosen spacing, over some space of conditions. That is very noticeable, and it does not locally look like any “simple rules” to much of anyone.
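The count behind “a likewise large number”, roughly (my gloss, with b a hypothetical bit-length for the spacing parameter):

```latex
\[
\#\{\text{lattice variants}\} = 2^{\,b}
\quad \text{at description length} \approx L + b
\]
% One globally noticeable variant per b-bit spacing value: as many
% variants, at the same length overhead, as obscure single-photon
% tweaks parameterized the same way.
```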
edit: Note that most definitions of a Turing machine do not have pointers, and the heads move one step at a time, which actually makes it very nontrivial to make highly localized, surgical changes to the data, especially in the context of a program that applies the same rules everywhere. So it is not obviously the case that a single point change to the world would take less code than something blatantly obvious to the inhabitants.
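A toy illustration (Python standing in for the machine; the names and the update rule are mine): even with random access, the surgical exception costs an address constant plus compare-and-branch logic on top of the uniform rule, and on a head-moving, one-step-at-a-time machine you’d further need machinery to count your way out to that address.

```python
# Uniform local rule vs. the same rule with one surgical point change.
# The exception carries an absolute address (~log2(address) bits) and a
# branch that the uniform rule simply doesn't need.

def step_uniform(tape: list) -> list:
    # every cell gets the same local update (XOR of its two neighbours)
    n = len(tape)
    return [tape[i - 1] ^ tape[(i + 1) % n] for i in range(n)]

SPECIAL_CELL = 12345   # hypothetical address of the "single point change"

def step_with_exception(tape: list) -> list:
    new = step_uniform(tape)
    if SPECIAL_CELL < len(new):
        new[SPECIAL_CELL] ^= 1   # flip exactly one cell each step
    return new
```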