This exposes a hole in the problem statement: what does Omega’s prize measure? We determined that U0 is the utility of the counterfactual where Omega kills you and U1 the utility of the counterfactual where it does nothing, but what is U2 = U1 + 3*(U1 − U0)? It seems to be the expected utility of the event in which you draw the lucky card, and that event contains, in particular, your future decisions to continue drawing cards. But if so, this places a limit on how much your utility can improve in the later rounds: if your utility keeps increasing, that contradicts the first-round statement that your utility is going to be exactly U2, and no more. Utility can’t change, since each utility is a valuation of a specific event in the sample space.
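To make the contradiction concrete, a quick arithmetic illustration (the numbers U0 = 0 and U1 = 1 are made up for the example, not part of the problem statement): the first round then promises U2 = 1 + 3*(1 − 0) = 4 for the lucky draw. If a later lucky draw is supposed to raise your utility above 4, that contradicts the first round’s claim that the lucky-draw event is worth exactly 4; the valuation of one fixed event can’t be revised upward afterwards.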
So, the alternative formulation that removes this contradiction is for Omega to assert only that the expected utility, given that you receive a lucky card, is no less than U2. In this case the right strategy seems to be to continue drawing cards indefinitely, since the utility you receive could lie in something other than your own life, which is now spent drawing cards only.
This, however, seems to sidestep the issue. What if the only utility you see is in your future actions, which don’t include picking cards, and you can’t interleave card-drawing with other actions, that is, you must allot a given amount of time to picking cards?
You can recast the problem of choosing each of an infinite number of decisions (or of choosing one among all of the, in some sense, infinite available sequences of decisions) as the problem of choosing a finite “seed” strategy for making decisions. Say only a finite number of strategies is available, for example only what fits in the memory of the computer that starts the enterprise; that memory could be expanded after the start of the experiment, but the first version has a specified limit. In this case the right program is as close to a Busy Beaver as you can get: you draw cards as long as possible, but only finitely long, and after that you stop and go on to enjoy actual life. A minimal sketch of such a seed strategy follows.
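Purely as an illustration of the idea (the function names and the 64-bit counter are my own assumptions, not part of the original setup), a finite seed strategy in the Busy Beaver spirit might look like this: a fixed-size program that commits to an astronomically large but finite number of draws, then stops.

```python
# Illustrative sketch only: a finite "seed" strategy in the Busy Beaver spirit.
# The bound below stands in for "whatever fits in the initial memory limit".

def seed_strategy(memory_limit_bits: int = 64):
    """Return a decision function that draws cards for as many rounds as a
    counter of `memory_limit_bits` bits can represent, then stops forever."""
    max_rounds = 2 ** memory_limit_bits - 1  # largest count the fixed memory can hold
    rounds_drawn = 0

    def decide() -> str:
        nonlocal rounds_drawn
        if rounds_drawn < max_rounds:
            rounds_drawn += 1
            return "draw"  # keep drawing while the finite budget lasts
        return "stop"      # after finitely many rounds, go enjoy actual life

    return decide


decide = seed_strategy()
print(decide())  # "draw" on every round until the finite (but enormous) bound is hit
```

The point of the sketch is only that the whole infinite-horizon policy is fixed by a finite object chosen up front; a better seed, like a better Busy Beaver candidate, packs a longer finite run of draws into the same memory limit.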