Consequentialists can reason about situations in which other beings make important decisions using the Solomonoff prior. If multiple such beings are being simulated, they can decide randomly (because having e.g. 1⁄100 of the resources in expectation is better than none, which is the expected outcome of “blind mischievousness”).
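A toy Monte Carlo sketch of that expected-value claim, under the assumption that the N beings can all run the same randomized policy (picking one of themselves uniformly to act) rather than all deterministically grabbing the output at once:

```python
import random

# Toy model (assumed, for illustration only): N = 100 simulated
# consequentialists share a randomized policy that selects exactly one
# of them, uniformly, to control the output. Each being's expected
# share of the resources is then 1/N > 0, whereas everyone fighting
# deterministically over the output yields nothing for anyone.

N = 100
TRIALS = 100_000

# Empirical fraction of trials in which being 0 is the one selected.
share = sum(random.randrange(N) == 0 for _ in range(TRIALS)) / TRIALS
print(abs(share - 1 / N) < 0.005)  # share is close to 1/100
```

The point is only that a guaranteed lottery over control beats a guaranteed collision: 1⁄100 in expectation is strictly better than zero.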
An example of this sort of reasoning is Newcomb’s problem with the knowledge that Omega is simulating you. You get to “control” the result of your simulation by controlling how you act, so you can influence whether Omega expects you to one-box or two-box, and thereby whether there is $1,000,000 in one of the boxes.
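The simulation version of Newcomb’s problem can be sketched in a few lines. All names here are illustrative; the key assumption is that the agent is a deterministic policy, so Omega’s simulation of it and the agent’s real decision necessarily agree:

```python
# Minimal sketch of Newcomb's problem where Omega predicts by simulation.
# Because the agent is a deterministic function, choosing your policy
# "controls" what the simulation outputs, and hence what Omega puts in
# the opaque box.

def omega_fill_boxes(policy):
    # Omega simulates the agent: the opaque box contains $1,000,000
    # iff the simulated agent one-boxes; the transparent box always
    # contains $1,000.
    predicted = policy()
    opaque = 1_000_000 if predicted == "one-box" else 0
    return opaque, 1_000

def play(policy):
    opaque, transparent = omega_fill_boxes(policy)
    choice = policy()  # the real agent runs the same policy
    return opaque if choice == "one-box" else opaque + transparent

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

Since the simulated and real runs of the policy can’t diverge, the one-boxer walks away with $1,000,000 and the two-boxer with only $1,000.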
Okay, deciding randomly to exploit one possible simulator makes sense.
As for choosing exactly what to set the output cells of the simulation to… I’m still wrapping my head around it. Is recursive simulation the only way to exploit these simulations from within?