I think this might just be a rephrasing of what several other commenters have said, but I found this framing somewhat helpful.
Based on intuitive modeling of this scenario and several others like it, I found that I ran into the expected “paradox” in the original statement of the problem, but not in the statement where you roll one die to determine the 1⁄3 chance of being offered the wager, followed by the original wager. I suspect the reason is something like this:
Losing 1B is a uniquely bad outcome, worse than its monetary utility would imply, because it means that I blame myself for not getting the $24k on top of receiving $0. (It seems fairly accepted that the chance of getting money in a counterfactual scenario may have a higher expected utility than getting $0, but the actual outcome of getting $0 in this scenario is slightly utility-negative.)
Now, it may appear that this same logic should apply to the 1% chance of losing 2B in a scenario where the counterfactual me in 2A receives $24,000. However, based on self-examination, I think this is the fundamental root of the seeming paradox: not an issue of the value of certainty, but an issue of confusing counterfactual scenarios with future ones. In the situation where I lose 1B, switching is guaranteed to prevent that utility loss in either a counterfactual or a future scenario. In the case of 2B, switching is only guaranteed to prevent the loss in the counterfactual; in a future scenario, it probably wouldn’t change the outcome at all, which suggests an implicit substitution of the future for the counterfactual. I think this phenomenon is also behind other commenters’ preference changes between the iterated and one-shot versions of the game: making the game iterated lets you implicitly convert back to counterfactual comparisons through law-of-large-numbers effects.
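To make that asymmetry concrete, here’s a minimal Monte Carlo sketch. The payoff numbers (1A: $24k for sure; 1B: $27k with probability 33⁄34; 2A: $24k with probability 34%; 2B: $27k with probability 33%) are my assumption of the standard statement of the problem, and modeling the counterfactual as “same random draw” versus the future as “fresh draw” is likewise my own formalization, not anything established above:

```python
import random

# Assumed payoffs (standard Allais statement; the comment only names $24k):
# 1A: $24k for sure          1B: $27k with prob 33/34, else $0
# 2A: $24k with prob 0.34    2B: $27k with prob 0.33, else $0
def pay_1A(u): return 24_000
def pay_1B(u): return 27_000 if u < 33 / 34 else 0
def pay_2A(u): return 24_000 if u < 0.34 else 0
def pay_2B(u): return 27_000 if u < 0.33 else 0

def rescue_rates(risky, safe, trials=200_000):
    """Condition on the regret event: the risky choice paid $0 while the
    safe choice would have paid on the same draw. Report how often
    switching avoids the $0 (a) on that same draw (counterfactual
    framing) and (b) on a fresh draw (future framing)."""
    regret = cf_rescued = fut_rescued = 0
    for _ in range(trials):
        u = random.random()
        if risky(u) == 0 and safe(u) > 0:              # the regret event
            regret += 1
            cf_rescued += 1                            # same draw: rescued by construction
            fut_rescued += safe(random.random()) > 0   # fresh draw: maybe not
    return cf_rescued / regret, fut_rescued / regret

for label, risky, safe in [("1B vs 1A", pay_1B, pay_1A),
                           ("2B vs 2A", pay_2B, pay_2A)]:
    cf, fut = rescue_rates(risky, safe)
    print(f"{label}: counterfactual rescue {cf:.0%}, future rescue {fut:.0%}")
# 1B vs 1A: switching rescues you 100% of the time in either framing.
# 2B vs 2A: 100% counterfactually, but only ~34% in the future framing.
```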
I only have anecdotal evidence that this substitution exists, but I think the inner shame and visceral “that’s silly” reaction I feel when I wish I had made a different strategic choice after seeing how the randomness fell in a boardgame is likely the same thought process at work.
I think that this lets you dodge a lot of the utility issues around this problem, because it provides a reason to attach greater negative utility to losing 1B than to losing 2B without having to do silly things like attaching utility to the certainty of outcomes: if you view how much you regret not switching through a future paradigm, switching in 1B is literally certain to prevent your negative utility, whereas switching in 2B probably won’t do anything. Note that this technically makes the money pump rational behavior if you incorporate regret into your utility function: after 12:00, you’d like to maximize money and face a relatively low regret cost, but after 12:05, the risk of regret is far higher, so you should take 1A.
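Here is one way to formalize that money-pump claim. The regret penalty R, and the idea of discounting it by how often switching would actually have helped in the future framing (100% for 1A, 34% for 2A), are my assumptions chosen to illustrate the mechanism, not anything fixed by the problem:

```python
# A sketch of regret-adjusted expected utilities, under assumed payoffs:
# 1A: $24k sure; 1B: $27k w.p. 33/34; 2A: $24k w.p. 0.34; 2B: $27k w.p. 0.33.
R = 100_000  # hypothetical dollar-equivalent disutility of pure regret

# After 12:05 (the 1A/1B choice): losing 1B means 1A would certainly
# have paid, so the regret is felt at full strength.
eu_1A = 24_000
eu_1B = (33 / 34) * 27_000 - (1 / 34) * 1.0 * R

# After 12:00 (the 2A/2B choice): the regret event has only a 1% chance,
# and in the future framing switching to 2A would only have helped 34%
# of the time, so the penalty is discounted accordingly.
eu_2A = 0.34 * 24_000
eu_2B = 0.33 * 27_000 - 0.01 * 0.34 * R

print(f"12:05 -> take {'1A' if eu_1A > eu_1B else '1B'}"
      f"  (EU 1A={eu_1A:.0f}, 1B={eu_1B:.0f})")
print(f"12:00 -> take {'2B' if eu_2B > eu_2A else '2A'}"
      f"  (EU 2A={eu_2A:.0f}, 2B={eu_2B:.0f})")
# With R = 100_000: EU(1B) ≈ 23_265 < 24_000, so take 1A at 12:05,
# while EU(2B) = 8_570 > 8_160, so take 2B at 12:00 — exactly the
# money-pump pattern, made locally rational by the discounted regret term.
```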
I’d be really interested to see whether this experiment would play out differently if you were allowed to see the number on the die, or if everything but the final outcome were hidden.