No. It reveals a (potential) flaw in the causal modeling of CDT (Causal Decision Theory), not a flaw in the general expectation-maximizing framework shared by many (most? all serious?) decision theories. And it’s not necessarily a flaw at all if Omega is impossible (if that level of prediction can’t be done on the agent in question).
In fact, CDT and those who advocate two-boxing as the money-maximizing choice are just noting the CONTRADICTION between the problem setup and their beliefs about decisions, and choosing to believe that their choice is a root cause rather than the result of previous states. In other words, they’re denying that Omega’s prediction applies to this decision.
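To make the disagreement concrete, here is a minimal sketch of the two expectation calculations, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor accuracy parameter I've introduced for illustration. CDT holds the box contents fixed regardless of the choice, so two-boxing dominates by exactly $1,000; the evidential calculation treats the choice as evidence about the prediction, so one-boxing wins for any reasonably accurate Omega.

```python
# Sketch of the two expected-value calculations in Newcomb's problem.
# Assumed standard payoffs: $1,000,000 (opaque box), $1,000 (transparent box).

BIG, SMALL = 1_000_000, 1_000

def cdt_expected_values(p_box_is_full: float) -> dict:
    """CDT: the opaque box's contents are causally fixed before the choice,
    so the credence that it is full does not depend on what the agent picks."""
    return {
        "one-box": p_box_is_full * BIG,
        "two-box": p_box_is_full * BIG + SMALL,  # dominates by exactly $1,000
    }

def evidential_expected_values(predictor_accuracy: float) -> dict:
    """Evidential calculation: the choice is evidence about the prediction,
    so the expectation conditions the box's contents on the act itself."""
    a = predictor_accuracy
    return {
        "one-box": a * BIG,                # predicted correctly -> box is full
        "two-box": (1 - a) * BIG + SMALL,  # only a mispredicting Omega filled it
    }

if __name__ == "__main__":
    print(cdt_expected_values(p_box_is_full=0.5))
    # {'one-box': 500000.0, 'two-box': 501000.0}  -> two-boxing dominates
    print(evidential_expected_values(predictor_accuracy=0.99))
    # {'one-box': 990000.0, 'two-box': 11000.0}   -> one-boxing wins
```

The divergence is entirely in whether the box's contents are allowed to correlate with the choice, which is exactly the causal-modeling question at issue.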
Whether that’s a “deep flaw” or not is unclear. I personally suspect that free will is mostly illusory, and if someone builds a powerful enough Omega, it will prove that my choice isn’t free and the question “what would you do” is meaningless. If no one CAN build a powerful enough Omega, the question remains open.