SIMULATION COMPLETE.
RESULT: BOTH BOXES TAKEN.
OMEGA-COMMITMENT CONSISTENT ACTION: OPAQUE BOX EMPTY
Irene promptly walks up to the opaque box and opens it, revealing nothing. She stares in shock.
Beautifully said.
Although at that point she could respond out of spite by refusing to open the transparent box. Sure, it leaves $1000 to burn, but maybe that's worth it to her just to spit on Omega's grave by proving it wrong.
…which would force this scenario to also be a simulation.
Meaning that Omega cannot fulfill its directive when predicting spiteful Irene’s actions. She’ll one-box iff Omega predicts she’ll two-box.
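That inconsistency can be made concrete with a tiny fixed-point check (a sketch; the policy function and labels are my own framing, not anything from the scenario itself):

```python
# Model spiteful Irene's policy as a function of Omega's prediction and
# check whether any prediction is self-consistent.
# "one_box" = take only the opaque box; "two_box" = take both.

def spiteful_irene(prediction: str) -> str:
    # She acts to prove Omega wrong: one-boxes iff Omega predicts two-boxing.
    return "one_box" if prediction == "two_box" else "two_box"

# A prediction is consistent only if her resulting action matches it.
consistent = [p for p in ("one_box", "two_box") if spiteful_irene(p) == p]
print(consistent)  # -> [] : no prediction Omega can make comes out correct
```

The policy has no fixed point, which is exactly why Omega "cannot fulfill its directive" against a spiteful agent.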
Oh dear.
I don’t find the simulation argument very compelling. I can conceive of many ways for Omega to arrive at a prediction with high probability of being correct that don’t involve a full, particle-by-particle simulation of the actors.
The underlying question remains how accurate the prediction is, and which sequences of events (if any) can include Omega being incorrect.
In the “strong Omega” scenarios, the opaque box is empty in every universe where Irene opens the transparent box (including after Omega’s death). Yoav’s description seems right to me: Irene opens the opaque box and is SHOCKED to find it empty, since she only planned to open the one box. But the incorrect prediction was her own prediction of her behavior, not Omega’s.
In “weak Omega” scenarios, who knows what the specifics are? Maybe Omega’s wrong in this case.
In the traditional problem, you have to decide to discard the transparent box before opening the opaque box (single decision step). Here, you’re making sequential choices, so there is a policy that makes “strong Omega” inconsistent (namely, discarding B just when you see that A is empty).