It seems that most of the discussion here is caught up on the idea that Omega being able to "predict" your decision would require reverse-time causality, which some models of reality do not allow.
Assuming that Omega is a "sufficiently advanced" powerful being, the boxes could behave exactly as the "reverse time" model stipulates without any such bending of causality. Omega only needs technology that can destroy the contents of a box faster than human perception, or the classical many-worlds trick of ending the universe wherever things don't work out the way it wants. (The universe doesn't even need to end; something like a quantum vacuum collapse would equally stop any information leaking out of non-conforming universes.)
This means the not-quite-a-rationalist argument, "the boxes already are what they are, so my decision doesn't matter; I'll take both," no longer holds.
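To see why that argument loses on expectation, here is a minimal sketch comparing the two strategies, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor of accuracy p; the function name and payoff parameterization are my own, not from the discussion above.

```python
def expected_value(one_box: bool, p: float) -> float:
    """Expected payoff in Newcomb's problem given predictor accuracy p."""
    big, small = 1_000_000, 1_000
    if one_box:
        # The opaque box is full only if Omega correctly foresaw
        # one-boxing, which happens with probability p.
        return p * big
    # Two-boxers always get the small box; the big box is full only
    # when Omega erred, which happens with probability 1 - p.
    return small + (1 - p) * big

# Even a modestly accurate predictor makes one-boxing win on expectation.
p = 0.9
assert expected_value(one_box=True, p=p) > expected_value(one_box=False, p=p)
```

The point is that once the setup guarantees the correlation between your choice and the box's contents (by whatever mechanism), "the contents are already fixed" stops being a usable premise.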
Your assumptions mean that the more likely answer is “Omega is sufficiently powerful to mess with me any way it likes; why am I playing this game?”
That is, problems containing Omega are more contrived and less relevant to anything resembling real life the more one looks at them.
Note that thinking too much about Omega can lead to losing in real life: one forgets that Omega is hypothetical and cannot actually exist, and goes so far as to attribute the qualities of Omega to what is in fact a manipulative human. I once saw an example of this that I found quite jaw-dropping. It could quite fairly be described as reasoning oneself into being less effective. People who act like that are a reason to get out of the situation, not to invoke TDT.