I think the idea is that the 4th scenario is the case, and you can’t discern whether you’re the real you or the simulated version, as the simulation is (near-)perfect. In that scenario, you should act the same way you’d want the simulated version to act. Either (1) you’re a simulation, and the real you just won $1,000,000; or (2) you’re the real you, and the simulated version of you reasoned the same way you did and one-boxed (meaning you get $1,000,000 if you one-box).
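To make the payoff logic concrete, here is a minimal sketch in Python of the expected-value argument, assuming the simulated copy makes the same choice you do with some high probability. The 99% accuracy figure is an illustrative assumption, not something from the discussion:

```python
# Minimal sketch of the one-boxing payoff argument.
# Assumption (not from the discussion): the simulation matches
# your choice with probability sim_accuracy.

def expected_payoff(one_box: bool, sim_accuracy: float = 0.99) -> float:
    """Expected dollars, given that the simulated copy made the same
    choice you make with probability sim_accuracy."""
    # Box B holds $1,000,000 only if the simulated you one-boxed.
    p_sim_one_boxed = sim_accuracy if one_box else 1 - sim_accuracy
    box_b = 1_000_000 * p_sim_one_boxed
    # Two-boxers also take the guaranteed $1,000 in box A.
    box_a = 0 if one_box else 1_000
    return box_b + box_a

print(expected_payoff(one_box=True))   # 990000.0
print(expected_payoff(one_box=False))  # 11000.0
```

Under these assumptions one-boxing dominates, and the result holds whichever copy you turn out to be, since both copies run the same reasoning.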
I agree with you; I was just trying to emphasize that if you’re the real you, your decision doesn’t change anything. At most, if the simulation is extremely accurate, your decision can reveal what was already chosen, since you know you will make the same decision you previously made in the simulation. The big difference between me and timeless decision theory is that I contend the only reason to choose just box B is that you might be in the simulation. This completely gets rid of ridiculous problems like Roko’s basilisk: since we are not currently simulating an AI, a future AI cannot affect us. If the AI suspected it was in a simulation, it might have an incentive to torture people, but given that it has no reason to think that, torture is a waste of time and effort.