I agree with you; I was just trying to emphasize that if you're the real you, your decision doesn't change anything. At most, if the simulation is extremely accurate, it can reveal what was already chosen, since you know you will make the same decision you previously made in the simulation. The big difference between my view and timeless decision theory is that I contend the only reason to choose just box B is that you might be in the simulation. This completely gets rid of ridiculous problems like Roko's basilisk: since we are not currently simulating an AI, a future AI cannot affect us. If the AI suspected that it was itself in a simulation, it might have an incentive to torture people, but given that it has no reason to think that, the torture is a waste of time and effort.
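To put rough numbers on the Newcomb part of that, here's a toy expected-value sketch. Everything in it is my own assumption rather than anything fixed by the thought experiment: standard Newcomb payoffs ($1,000 always in box A, $1,000,000 in box B only if the predictor's simulation of you took just box B), a credence p_sim that you are the simulated copy, and the simulated copy counting the real person's winnings as its own.

```python
# Toy sketch only; payoffs, p_sim, and the causal framing are assumptions.
BOX_A = 1_000      # always in box A
BOX_B = 1_000_000  # in box B iff the simulated copy one-boxed

def one_box_advantage(p_sim: float) -> float:
    """Expected gain from one-boxing instead of two-boxing, treating the
    decision causally, given credence p_sim that you are the simulation.

    If you ARE the simulation, your choice decides whether box B gets
    filled for the real run: one-boxing is worth BOX_B, two-boxing only
    BOX_A, a gain of BOX_B - BOX_A.

    If you are the REAL chooser, box B's contents are already fixed no
    matter what you pick, so one-boxing just forfeits box A: a loss of
    BOX_A.
    """
    return p_sim * (BOX_B - BOX_A) - (1 - p_sim) * BOX_A

for p in (0.0, 0.0005, 0.001, 0.01, 0.5):
    print(f"p_sim={p:<7} advantage of one-boxing: {one_box_advantage(p):>12,.1f}")
```

On that toy model, if p_sim is zero your choice can't affect box B at all and two-boxing is better by exactly the $1,000 in box A; one-boxing only starts to pay once p_sim exceeds about 1,000/1,000,000 = 0.001, i.e. the entire case for one-boxing comes from the chance that you're the simulation.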