I’m trying to wrap my head around this. Would the following be an accurate restatement of the argument?
1. Start with the Dr. Evil thought experiment, which shows that it’s possible to be coerced into doing something by an agent who has no physical access to you beyond communication.
2. We can extend this to the case where the agents are in two separate universes, if we suppose (a) that the communication can be replaced with an acausal negotiation, with each agent deducing the existence and motives of the other; and (b) that the Earthlings (the ones coercing Dr. Evil) care about what goes on in Dr. Evil’s universe.
   - Argument for (a): With sufficient computing power, one can run simulations of another universe to figure out what agents live within it (a toy version is sketched after this list).
   - Argument for (b): For example, the Earthlings might want Dr. Evil to instantiate embodied replicas of them in his own universe, thus increasing the measure of their own consciousness. This is not different in kind from wanting to increase the probability of your own survival: in both cases, the goal is to increase the measure of worlds in which you live.
3. To promote their goal, when the Earthlings run their simulation of Dr. Evil, they will intervene in it to punish or reward the simulated Dr. Evil depending on whether he does what they (the Earthlings) want.
4. For his own part, if Dr. Evil is using the Solomonoff prior to predict what happens next in his universe, he must assign some probability to the hypothesis that being in such a simulation is what actually explains all of his experiences up to that point (rather than his being a ground-level being). If that hypothesis is true, then Dr. Evil should expect to be rewarded or punished according to whether he carries out the wishes of the Earthlings, so he will modify his actions accordingly (see the decision sketch after this list).
5. The probability of the simulation hypothesis may be non-negligible, because the Solomonoff prior considers only the complexity of the hypothesis, not that of the computation unfolding from it (written out in the formula below the list). Indeed, the hypothesis “There is a universe with laws A+B+C, which produces Earthlings who run a simulation with laws X+Y+Z which produces Dr. Evil, but then intervene in the simulation as described in #3” may be simpler (and thus more probable) than “There is a universe with laws X+Y+Z which produces Dr. Evil, and those laws hold forever”.
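To make 2(a) concrete, here is a deliberately crude sketch: it treats elementary cellular automata as stand-in “universes”, simulates each one, and scans for persistent structure as a proxy for “agents”. Everything here (the choice of automata, the persistence heuristic, the thresholds) is my own illustrative assumption, not part of the argument itself.

```python
# Toy stand-in for 2(a): exhaustively simulate candidate "universes"
# (elementary cellular automata) and flag the ones whose dynamics
# neither die out nor settle into a short cycle -- a crude proxy for
# "this universe contains ongoing structure, possibly agents".

def step(cells, rule):
    """One update of an elementary CA (Wolfram rule number), periodic boundary."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def has_persistent_structure(rule, width=64, steps=200):
    """Heuristic: the universe 'stays interesting' if it never revisits a
    state, or only revisits one via a long cycle (threshold is arbitrary)."""
    cells = [0] * width
    cells[width // 2] = 1          # single-seed initial condition
    seen = {}
    for t in range(steps):
        key = tuple(cells)
        if key in seen:
            return (t - seen[key]) > 16   # long cycle ~ ongoing structure
        seen[key] = t
        cells = step(cells, rule)
    return True                     # never repeated within the horizon

interesting = [r for r in range(256) if has_persistent_structure(r)]
print(f"{len(interesting)} of 256 toy universes show persistent structure")
```

Obviously detecting actual agents (let alone their motives) is far harder than this, but the brute-force shape of the procedure is the same: enumerate, simulate, inspect.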
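And a minimal sketch of the decision in steps 3–4: Dr. Evil weighs compliance against defiance under uncertainty about whether he is inside the intervened simulation. The utility numbers and the prior values below are made up purely for illustration.

```python
# Toy decision model for steps 3-4. All payoffs and priors are invented
# illustrative numbers, not anything from the original argument.

def expected_utility(comply: bool, p_sim: float) -> float:
    """Expected utility of complying/defying, given credence p_sim that
    the intervened-simulation hypothesis is true."""
    # Payoffs if the simulation hypothesis is true:
    # the Earthlings reward compliance and punish defiance.
    reward_if_comply = 10.0
    punishment_if_defy = -100.0
    # Payoff if Dr. Evil is a ground-level being:
    # complying merely costs him some effort; defying costs nothing.
    cost_of_complying = -1.0

    if comply:
        return p_sim * reward_if_comply + (1 - p_sim) * cost_of_complying
    return p_sim * punishment_if_defy

# Even a small credence in the simulation hypothesis can flip the
# decision when the threatened punishment is large enough.
for p_sim in (0.001, 0.01, 0.1):
    best = max((True, False), key=lambda c: expected_utility(c, p_sim))
    print(f"p_sim={p_sim}: comply={best}")
```

With these numbers the flip happens around p_sim ≈ 0.01: below that, defiance wins; above it, the threatened punishment dominates and compliance wins.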
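Finally, step 5’s appeal to the Solomonoff prior, written out. The prior weight of a hypothesis (a program p for a universal prefix machine U) depends only on its description length |p|, not on how long U(p) runs before producing the observations:

```latex
% Solomonoff prior over observation sequences x:
\[
  M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x\ast} 2^{-|p|}
\]
% Step 5's claim, in these terms: the nested hypothesis dominates whenever
% its description is shorter, i.e. whenever
\[
  K\bigl(\text{laws A+B+C} + \text{run sim X+Y+Z} + \text{intervene}\bigr)
  \;<\;
  K\bigl(\text{laws X+Y+Z hold forever}\bigr).
\]
```

The sum penalizes only |p|; the enormous computation from the A+B+C laws down to Dr. Evil’s observations costs nothing, which is why the nested hypothesis can come out simpler.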