To REALLY evaluate that, we technically need to know how long Omega runs the simulation for.
Now, we have two options: one, assume Omega keeps running the simulation indefinitely; two, assume Omega shuts the simulation down once he has the info he's looking for (and before he has to worry about debugging the simulation).
In #1, what we're left with is p(S) = 1/3, p(H) = 1/3, p(T) = 1/3, which means we're moving $200/3 from one part of our possibility cloud to gain $10,000/3 in another part. In #2, we're moving a total of $100/2 to gain $10,000/2. The $100 in the simulation is quantum-virtual.
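A minimal sketch of that expected-value bookkeeping, just to make the arithmetic explicit. The branch structure (a $200 total cost spread over three equally likely branches vs. $100 over two) is taken from the comment as given, not derived fresh here; the function name is my own label.

```python
# Hedged sketch: net expected value of paying up, under each assumption
# about how long Omega keeps the simulation running. Branches are
# treated as equally likely, matching the p = 1/3 and p = 1/2 figures above.

def net_expected_value(total_cost, total_gain, n_branches):
    """Dollars gained minus dollars moved, averaged over equally likely branches."""
    return (total_gain - total_cost) / n_branches

# Assumption 1: Omega runs the simulation indefinitely -> three branches (S, H, T).
ev_indefinite = net_expected_value(200, 10_000, 3)

# Assumption 2: Omega halts the simulation once he has his answer -> two branches.
ev_halted = net_expected_value(100, 10_000, 2)

print(ev_indefinite, ev_halted)
```

Either way the net is solidly positive, which is why the conclusion below doesn't hinge on which assumption you pick.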
So, unless you have reason to suspect Omega is running a LOT of simulations of you, AND not terminating them after a minute or so... (i.e., is not inadvertently simulation-mugging you)...
You can generally treat Omega's simulation capacity as a dashed causality arrow from one universe to another, sort of like the shadow produced by the simulation...