Great post. I encountered many new ideas here.
One point confuses me. Maybe I'm missing something. Once the consequentialists in a simulation are contemplating the possibility that they are being simulated, how would they arrive at any useful strategy? They can manipulate the locations that are likely to be the output/measurement of the simulation, but manipulate them to what values? They know basically nothing about how the output will be interpreted, what question the simulator is asking, or what universe is running the simulation. Since their universe is very simple, presumably many simulators are running identical copies of them, with a different manipulation strategy being appropriate for each. As I understand it, this sounds less malign than blindly mischievous.
TL;DR: How do the consequentialists guess which direction to bias the output towards?
The consequentialists can reason about the situations in which other beings make important decisions using the Solomonoff prior. If multiple beings are simulating them, they can pick one at random to target (because having, e.g., 1/100 of the resources is better than none, which is the expected payoff of “blind mischievousness”).
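To make that concrete, here is a toy expected-value calculation; the simulator count and payoff are made-up numbers for illustration, not anything from the post:

```python
# Toy comparison of two strategies for consequentialists who suspect they
# are being simulated (all numbers are illustrative assumptions).

n_simulators = 100        # hypothetical number of simulators running identical copies
payoff_if_matched = 1.0   # stipulated value of successfully influencing one simulator

# Strategy A: pick one possible simulator uniformly at random and bias the
# output toward the values that simulator would act on.
ev_random_targeting = (1 / n_simulators) * payoff_if_matched  # = 0.01

# Strategy B: "blind mischievousness" -- bias the output with no model of any
# simulator. With nothing tying the bias to any simulator's question, the
# expected payoff is ~0.
ev_blind = 0.0

assert ev_random_targeting > ev_blind  # 1/100 of the resources beats none
print(ev_random_targeting, ev_blind)
```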
An example of this sort of reasoning is Newcomb's problem with the knowledge that Omega is simulating you. You get to "control" the result of your simulation by controlling how you act, so you can influence whether Omega expects you to one-box or two-box, which controls whether there is $1,000,000 in one of the boxes.
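Here is a minimal sketch of that dynamic; the agents, payoffs, and the assumption that Omega predicts perfectly by running your policy are all stipulations for illustration:

```python
# Newcomb's problem where Omega predicts by simulating the agent's policy:
# the simulated copy's choice determines the box contents, so a policy
# effectively "controls" its own prediction.

def omega_fills_opaque_box(policy):
    """Omega simulates the agent; the opaque box gets $1,000,000 only if
    the simulated agent one-boxes."""
    return 1_000_000 if policy() == "one-box" else 0

def payoff(policy):
    opaque = omega_fills_opaque_box(policy)  # prediction via simulation
    transparent = 1_000                      # always contains $1,000
    if policy() == "one-box":
        return opaque
    return opaque + transparent

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000 -- the simulated choice filled the box
print(payoff(two_boxer))  # 1000    -- the simulation two-boxed, so the box is empty
```

The point is that `payoff` depends on the policy twice, once inside Omega's simulation and once at the actual choice, which is exactly the sense in which acting a certain way lets you influence what Omega expects.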
Okay, randomly picking one possible simulator to exploit makes sense.
As for choosing exactly what to set the output cells of the simulation to… I'm still wrapping my head around that. Is recursive simulation the only way to exploit these simulations from within?