You’re free to choose to play again if you wish, and the logic for playing is the same as the first time around.
This, again, depends on what you mean by “utility”. Here’s a way of framing the problem such that the logic can change.
Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued “valutilons”, and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.
Then Omega turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. Drawing a star doubles your value: V(star) = 2c, where c is the value of whatever history is currently slated to play out (so c = q when the deal is first offered, but could be higher if you’ve already played and won). Drawing a skull means death: V(skull) = d, with d < q.
If our choices obey the vNM axioms, there will be some function f(x), such that our choices correspond to maximising E[f(x)]. It seems reasonable to assume that f(x) must be (weakly) increasing in V(x). A few questions present themselves:
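To make the comparison concrete, here is a minimal sketch in Python (the name should_take and the hard-coded 90/10 split are my own framing, not part of the original problem statement). It simply checks whether the expected f-value of drawing, 0.9·f(2c) + 0.1·f(d), exceeds the f-value of standing pat, f(c).

```python
from typing import Callable

def should_take(f: Callable[[float], float], c: float, d: float) -> bool:
    """Accept Omega's offer iff the expected vNM utility of drawing beats declining.

    f: maps valutilons V(x) to vNM utilities f(x) (weakly increasing in V(x))
    c: valutilons of the history currently slated to play out
    d: valutilons of the death outcome (d < c assumed)
    """
    expected_draw = 0.9 * f(2 * c) + 0.1 * f(d)  # 90% star, 10% skull
    return expected_draw > f(c)
```

The three questions below just ask how this comparison comes out for different choices of f, q, and d.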
Is there a function, f(x), such that, for some values of q and d, we should take cards every time this bet is offered?
Yes. f(x)=V(x) gives this result for all d<q: drawing has expected value 0.9(2c) + 0.1d = 1.8c + 0.1d, which exceeds c whenever c > 0 and d ≥ 0.
Is there a function, f(x), such that, for some values of q and d, we should never take the bet?
Yes. Set d=0, q=1000, and f(x) = ln(V(x)+1). Taking the offer gives expected vNM utility of 0.9ln(2001) + 0.1ln(1) = 0.9ln(2001)~6.8, which is less than the ln(1001)~6.9 you get by declining.
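A quick numerical check of this example (just a sketch, using the constants above):

```python
import math

f = lambda v: math.log(v + 1)   # f(x) = ln(V(x) + 1)
q, d = 1000, 0

take = 0.9 * f(2 * q) + 0.1 * f(d)   # 0.9*ln(2001) + 0.1*ln(1)
decline = f(q)                        # ln(1001)
print(take, decline, take > decline)  # ~6.84 vs ~6.91 -> False, so never draw
```

Since declining leaves c = q unchanged, the same comparison holds every time the offer is made, so the bet is never taken.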
Is there a function, f(x), such that, for some values of q and d, we should take cards for some finite number of offers, and then stop?
Yes. Set d=0, q=1, and f(x) = ln(V(x)+1). The first time you get the offer, its vNM utility is 0.9ln(3)~0.99, which is greater than ln(2)~0.69. But by the 10th offer (assuming you’re still alive), c = 2^9 = 512, and the vNM utility of drawing is now 0.9ln(1025)~6.239, which is less than ln(513)~6.240. So you accept the first nine offers and decline the tenth.
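To see where the switch happens, here is a small sketch (constants and variable names are mine) that walks through successive offers, doubling c after each winning draw, until the log-utility agent first prefers to decline:

```python
import math

f = lambda v: math.log(v + 1)   # f(x) = ln(V(x) + 1)
c, d = 1.0, 0.0                  # q = 1, death valued at 0

offer = 1
while 0.9 * f(2 * c) + 0.1 * f(d) > f(c):  # expected utility of drawing vs declining
    c *= 2        # suppose the accepted draw comes up a star
    offer += 1

print(offer, c)  # -> 10 512.0: the 10th offer (c = 512) is the first one declined
```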
* This is just to ensure that doubling your valutilons cannot make you worse off, as would happen if they were negative. It should be possible to reframe the problem to avoid this, but let’s stick with this for now.