It makes sense to one-box ONLY if you calculate EV in a way that assigns a significant probability to causality violation.
It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern. That is, that you can “just do it” without it being possible for Omega to have predicted that you will “just do it” any better than chance. Unfortunately this violates the conditions of the scenario (and everyday reality).
It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern.
Right. That’s why CDT is broken. I suspect from the “disagree” score that people didn’t realize that I do, in fact, assert that causality is upstream of agent decisions (including Omega, for that matter) and that “free will” is an illusion.