As near as I can figure, the corresponding state of affairs to a complexity+leverage prior improbability would be a Tegmark Level IV multiverse in which each reality got an amount of magical-reality-fluid corresponding to the complexity of its program (1/2 to the power of its Kolmogorov complexity) and then this magical-reality-fluid had to be divided among all the causal elements within that universe—if you contain 3↑↑↑3 causal nodes, then each node can only get 1/3↑↑↑3 of the total realness of that universe.
This reminds me a lot of Levin’s universal search algorithm, and the associated Levin complexity.
To formalize, I think you will want to assign each program p, of length #p, a prior weight 2^-#p (as in usual Solomonoff induction), and then divide that weight among the execution steps of the program (each execution step corresponding to some sort of causal node). So if program p executes for t steps before stopping, then each individual step gets a prior weight (2^-#p)/t. The connection to universal search is as follows: imagine dovetailing all possible programs on one big computer, giving each program p a share 2^-#p of all the execution steps. (If a program stops, start it again, so that the computer doesn’t have idle steps.) In the limit, the computer will spend a proportion (2^-#p)/t of its resources executing each particular step of p, so this gives an intuitive sense of the step’s prior “weight”.
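Here is a minimal sketch of that per-step prior in Python, using hypothetical toy “programs” (the names, bit-lengths, and step counts are invented for illustration; they are stand-ins, not real Turing machine programs):

```python
# Toy illustration of the per-step prior weight (2^-#p)/t.
# Each entry is a hypothetical (name, length in bits, steps until halting).

toy_programs = [
    ("p1", 3, 10),    # 3-bit program that halts after 10 steps
    ("p2", 5, 4),     # 5-bit program that halts after 4 steps
    ("p3", 5, 1000),  # 5-bit program that halts after 1000 steps
]

for name, length, t in toy_programs:
    program_weight = 2 ** -length      # Solomonoff-style prior on the whole program
    step_weight = program_weight / t   # that weight shared equally among its t steps
    print(f"{name}: program weight {program_weight:.4f}, per-step weight {step_weight:.6f}")
```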
You’ll then want to condition on your evidence to get a posterior distribution. Most steps of most programs won’t in any sense correspond to an intelligent observer (or AI program) having your evidence, E, but some of them will. Let nE(p) be the number of steps of program p which so correspond (for most programs nE(p) will be zero), and then program p will get posterior weight proportional to 2^-#p x (nE(p)/t). Normalize, and that gives you the posterior probability that you are in the universe executed by program p.
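And a sketch of the corresponding posterior, again with invented nE(p) counts purely for illustration:

```python
# Posterior over toy programs, proportional to (2^-#p) * nE(p)/t.
# The nE(p) values ("steps corresponding to an observer with evidence E") are hypothetical.

toy_programs = [
    # (name, length in bits, total steps t, steps matching evidence E)
    ("p1", 3, 10, 0),
    ("p2", 5, 4, 1),
    ("p3", 5, 1000, 40),
]

unnormalized = {
    name: (2 ** -length) * (n_e / t)
    for name, length, t, n_e in toy_programs
}
total = sum(unnormalized.values())

posterior = {name: w / total for name, w in unnormalized.items()}
for name, p in posterior.items():
    print(f"P(you are in the universe run by {name} | E) = {p:.4f}")
```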
You asked if there are any anthropic problems with this measure. I can think of a few:
Should “giant” observers (corresponding to lots of execution steps) count for more weight than “midget” observers (corresponding to fewer steps)? They do in this measure, which seems a bit counter-intuitive.
The posterior will tend to focus weight on programs which have a high proportion (nE(p)/t) of their execution steps corresponding to observers like you. If you take your observations at face value (i.e. you are not in a simulation), then this leads to the same sort of “Great Filter” issues that Katja Grace noticed with the SIA. There is a shift towards universes which have a high density of habitable planets, occupied by observers like us, but where very few or none of those observers ever expand off their home worlds to become super-advanced civilizations, since if they did they would take the execution steps away from observers like us.
There also seems to be a good reason in this measure NOT to take your observations at face value. The term nE(p) / t will tend to be maximized in universes very unlike ours: ones which are built of dense “computronium” running lots of different observer simulations, and you’re one of them. Our own universe is very “sparse” in comparison (very few execution steps corresponding to observers).
Even if you deal with simulations, there appears to be a “cyclic history” problem. The density nE(p)/t will tend to be maximized if civilizations last for a long time (large number of observers), but go through periodic “resets”, wiping out all traces of the prior cycles (so leading to lots of observers in a state like us). Maybe there is some sort of AI guardian in the universe which interrupts civilizations before they create their own (rival) AIs, but is not so unfriendly as to wipe them out altogether. So it just knocks them back to the stone age from time to time. That seems highly unlikely a priori, but it does get magnified a lot in posterior probability.
On the plus side, note that there is no particular reason in this measure to expect you are in a very big universe or multiverse, so this defuses the “presumptuous philosopher” objection (as well as some technical problems if the weight is dominated by infinite universes). Large universes will tend to correspond to many copies of you (high nE(p)) but also to a large number of execution steps t. What matters is the density of observers (hence the computronium problem) rather than the total size.
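A quick hypothetical calculation shows why total size drops out: scale a universe up so it has 1000 times as many copies of you and 1000 times as many execution steps, and the factor nE(p)/t is unchanged.

```python
# Hypothetical numbers: a "small" universe with 10 observer-steps out of 10^6
# total steps, versus a universe 1000x larger with everything scaled up.
small_density = 10 / 1e6
large_density = (10 * 1000) / (1e6 * 1000)

print(small_density == large_density)  # True: nE(p)/t is identical, so (at fixed
# program length #p) the bigger universe gets no extra posterior weight.
```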