[LINK] Counterfactual Strategies
Link:
Counterintuitive Counterfactual Strategies
Overview:
Over the weekend, I was thinking about the variant of Newcomb’s Paradox where both boxes are transparent: the one where, unless you precommit to taking a visibly empty box instead of both boxes, Omega can self-consistently give you less money.
I was wondering if I could make this kind of “sacrifice yourself for yourself” situation happen without involving a predictor guessing your choice before you made it. Turns out you can.
My game theory textbook had a simple explanation of this in terms of poker. If you play aggressively on strong hands, but don’t bluff on weak hands, then everyone will fold whenever you try to play aggressively, and you never win any money. The Nash equilibrium recommends that you bluff a lot, so your behavior on strong and weak hands is indistinguishable.
Yes, the advantage comes from being hard to predict. I just wanted to find a game where the information denial benefits were counterfactual (unlike poker).
(Note that the goal is not perfect indistinguishability. If it were, then you could play optimally by just flipping a coin when deciding whether to bet or call.)
If I recall correctly, the recommendation was to fold on average hands, and play aggressively on strong and weak hands. You don’t need to flip a coin, because your cards can already be viewed as a kind of coin that your opponent can’t see.
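The “cards as a coin” idea can be sketched with a toy model (hypothetical numbers and function names, not from the textbook): bet on the strongest hands for value and the weakest hands as bluffs, and fold the middle. Since your hand is hidden, the opponent observes only your overall betting frequency, not which kind of bet it was.

```python
import random

def bet_with_card(card_strength, value_cutoff=0.7, bluff_cutoff=0.1):
    """Bet on the top 30% of hands (value) and the bottom 10% (bluffs).

    card_strength is uniform on [0, 1) and hidden from the opponent, so it
    doubles as a private randomizer: no external coin flip is needed.
    """
    return card_strength >= value_cutoff or card_strength < bluff_cutoff

# The opponent sees a bet about 40% of the time, and cannot tell
# which bets are value bets and which are bluffs.
frequency = sum(bet_with_card(random.random()) for _ in range(100_000)) / 100_000
print(frequency)  # roughly 0.4
```

The cutoffs here are illustrative, not equilibrium values; the point is only that a hidden uniform variable can implement any mixing probability.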
Could you clarify the description of the Newcomb variant, please?
What does Omega do in the case when my strategy is “Take one box if the second box is empty, take both boxes if the second box is full”? Omega is then unable to set up the boxes in accordance with what I do.
The variant with the clear boxes goes like so:
You are going to walk into a room with two boxes, A and B, both transparent. Once inside, with their contents visible, you can either take both boxes or just box A.
Omega, the superintelligence from another galaxy that is never wrong, has predicted whether you will take one box or two boxes. If it predicted you were going to take just box A, then box A will contain a million dollars and box B will contain a thousand dollars. If it predicted you were going to take both, then box A will be empty and box B will contain a thousand dollars.
If Omega predicts that you will purposefully contradict its prediction no matter what, the room will contain hornets. Lots and lots of hornets.
Case 1: You walk into the room. You see a million dollars in box A. Do you take both, or just A?
Case 2: You walk into the room. You see no dollars in box A. Do you take both, or just A?
If Omega is making its predictions by simulating what you would do in each case and picking a self-consistent prediction, then you can eliminate case 2 by leaving the thousand dollars behind.
Edit: fixed box B not having a thousand dollars in both cases.
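If Omega searches for a self-consistent prediction by simulating the agent, the procedure can be sketched like this (a toy model with hypothetical names, not from the original post; an agent is a function from the visible contents of box A to an action):

```python
ONE_BOX, TWO_BOX = "one-box", "two-box"

def omega_setup(agent):
    """Return the self-consistent filling of box A, or 'hornets' if none.

    Box B always holds a thousand dollars, so only box A varies.
    """
    for prediction, box_a in [(ONE_BOX, 1_000_000), (TWO_BOX, 0)]:
        if agent(box_a) == prediction:   # simulate the agent in that world
            return box_a
    return "hornets"                     # agent contradicts every prediction

# An agent that one-boxes even when box A is visibly empty eliminates the
# empty-box world: the only self-consistent prediction is "one-box".
always_one_box = lambda box_a: ONE_BOX
print(omega_setup(always_one_box))       # 1000000

# The "contrarian" strategy (two-box when full, one-box when empty)
# defeats both predictions, which is what the hornets clause handles.
contrarian = lambda box_a: TWO_BOX if box_a == 1_000_000 else ONE_BOX
print(omega_setup(contrarian))           # hornets
```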
In Gary’s original version of this problem, Omega tries to predict what the agent would do if box A was filled. Also, I think box B is supposed to be always filled.
Whoops, box B was supposed to have a thousand in both cases.
I did have in mind the variant where Omega picks the self-consistent case, instead of using only the box A prediction, though.
The game tree doesn’t adequately describe the game.
The attacker has a 50⁄50 chance of seeing a calm sea or a stormy sea, and then must choose between attacking by land or by sea.
The defender has the choice to fight or to run. He doesn’t know if the seas were stormy or not when it’s time to make the decision, but he does know if the attack was by sea or by land.
If the attack was by land, it’s always better for the defender to fight. This results in a draw (0 points).
If the attack was by sea and the defender chooses to fight, the defender wins the fight if the sea was stormy (−20 points for the attacker) and loses both the fight and the city if the sea was calm (+30 points for the attacker). If the defender runs away, the defender loses the city (+20 points for the attacker).
As calculated in the link, the optimal strategy for the attacker is to always attack by sea when the sea is calm and attack by sea 1⁄3 of the time when the sea is stormy.
If you want to know the significance of that fact, follow the link yourself. ;)
Did you mean to have the result of fighting on stormy seas to be −10 points for the attacker? As it stands, I don’t believe the math works out exactly.
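The arithmetic is easy to check (a standard equilibrium computation, not taken from the linked post). At equilibrium the defender, on seeing a sea attack, must be indifferent between fighting and running. With q the attacker’s probability of attacking by sea on a stormy day, Bayes gives P(calm | sea) = 1/(1+q), and the indifference condition solves to q = 10/(20 − s), where s is the attacker’s payoff for a sea fight in a storm:

```python
from fractions import Fraction

def sea_prob_when_stormy(stormy_fight_payoff):
    """Solve the defender's indifference condition for q.

    Given a sea attack: P(calm | sea) = 1/(1+q), P(stormy | sea) = q/(1+q).
    Indifference between fighting and running:
        30 * P(calm | sea) + s * P(stormy | sea) == 20
    simplifies to 30 + s*q == 20*(1 + q), i.e. q = 10 / (20 - s).
    """
    s = Fraction(stormy_fight_payoff)
    return Fraction(10) / (20 - s)

print(sea_prob_when_stormy(-20))  # 1/4, with the payoffs as stated
print(sea_prob_when_stormy(-10))  # 1/3, matching the quoted strategy
```

So with the −20 payoff as stated, the stormy-day sea probability comes out to 1⁄4 rather than 1⁄3; changing it to −10 gives exactly 1⁄3.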
Thanks for the clarification. I removed the game tree image from the overview because it was misleading readers into thinking it was the entirety of the content.
I don’t understand what you’re saying here.
It might help if you gave the utility for the defender, so we can see what’s a good strategy for them.
You followed the link? The game tree image is a decent reference, but a bad introduction.
The answer to your question is that it’s a zero sum game. The defender wants to minimize the score. The attacker wants to maximize it.
I hadn’t followed it.
I think you should either have the entire thing, or none of it (maybe just the conclusion). If I can’t understand what’s going on from your overview, I don’t see the point of it being there.
In this example, it happens because the predictor guesses your strategy. The guess might not literally come before you choose the strategy, but since the predictor can’t take advantage of your choosing first by looking at your choice, it’s functionally the same as them guessing your strategy in advance.
What’s the point of the −11 for asking fate? You can’t choose not to. Having eleven fewer utils no matter what doesn’t change anything.
Alright, I removed the game tree from the summary.
The −11 was chosen to give a small but nonempty region of positive returns in the strategy space. You’re right that it doesn’t affect which strategies are optimal, but in my mind it affects whether finding an optimal strategy is fun/satisfying.