I remain confused about Newcomb-style problems in general, and counterfactual mugging in particular. They seem to typically be fairly trivially self-contradictory when you look at them from the lens of computability theory.
Omega, a perfect predictor, flips a coin. If it comes up tails Omega asks you for $100. If it comes up heads, Omega pays you $10,000 if it predicts that you would have paid if it had come up tails.
Either the agent must be running a decidable algorithm, or there is no such Omega satisfying the requirements[1]. In the former case, there is no contradiction[2]. In the latter case, the problem is a moot point.
Suppose you had an algorithm[3] for Omega. Then I feed your Omega an agent that does this:
1. Run Omega on myself[4].
2. If the prediction from 1 is that I would pay if the coinflip comes up tails:
    If the coinflip came up tails: Don’t pay.
3. Otherwise (that is, if the prediction from 1 is that I would not pay if the coinflip comes up tails):
    If the coinflip came up tails: Pay.
Unfortunately, it’s fairly simple to show that Omega cannot avoid a contradiction here. (About the best that Omega can do is loop forever / fail to halt… which still doesn’t fit the problem.)
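For concreteness, here is a minimal runnable sketch of that diagonalizing agent in Python. Everything in it is illustrative: `naive_omega` stands in for whatever decidable implementation of Omega is claimed (this one just always predicts “would pay on tails”), and the coinflip result is passed around explicitly, as noted in the footnotes.

```python
def naive_omega(agent, coin):
    # Placeholder for a claimed decidable Omega: predict whether `agent`
    # pays when the coin comes up tails. This one always answers True;
    # any other halting strategy fails against the agent below in the
    # same way.
    return True

def agent(coin):
    """Return True iff this agent pays the $100, given the coin result."""
    # Step 1: run Omega's predictor on this very agent (the quine-like step).
    prediction = naive_omega(agent, "tails")
    if coin == "tails":
        # Do the opposite of whatever Omega predicted about the tails case.
        return not prediction
    # Heads: there is nothing to pay either way.
    return False

# Omega predicts "pays on tails", but on tails the agent does not pay:
print(naive_omega(agent, "tails"), agent("tails"))  # -> True False
```

Whatever fixed answer a halting predictor gives for the tails case, the agent’s actual behaviour on tails is the opposite, which is exactly the contradiction described above; the only way out is for the predictor not to halt.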
[1] This is a contradiction, so Omega cannot exist. Or Omega is super-Turing, in which case all bets are off.
[2] Jensen’s inequality as used in this context only applies if the agent can do unbounded computation, for one. (Rough sketch of an example with finite computation: suppose I have an agent that does at most C operations. If I ask it for the best outcome over the one-element list [1] (that is, one outcome with a value of 1), the answer is 1, fairly straightforwardly. If I instead ask it for the best outcome over C outcomes each with a value of ϵ and one outcome with a value of 1, the agent has an expected value of no more than C/(C+1), as it doesn’t have enough operations to scan all possible outcomes looking for the good one.) A rough numerical sketch of this point follows these footnotes.
[3] Strictly speaking, Omega is non-deterministic as stated, but you can recover determinism by passing the coinflip result into Omega as an argument.
[4] This is non-trivial, but doable. It requires a vaguely quine-like construct.
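As a rough numerical illustration of the bounded-computation point in footnote [2]: the numbers, the blind-scan strategy, and the names below are purely illustrative assumptions, not part of the original argument.

```python
import random

# An agent limited to C inspections cannot reliably find the single good
# outcome hidden among C + 1 candidates, so its expected best-found value
# stays near C/(C+1) rather than reaching 1.
C = 1_000
eps = 0.001
outcomes = [eps] * C + [1.0]

def best_found(options, budget):
    # Inspect at most `budget` outcomes, chosen blindly (no prior knowledge
    # of where the good outcome sits), and keep the best value seen.
    inspected = random.sample(options, min(budget, len(options)))
    return max(inspected)

trials = 10_000
average = sum(best_found(outcomes, C) for _ in range(trials)) / trials
print(round(average, 3))  # roughly C/(C+1), i.e. about 0.999, not 1
```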
The usual formulations have Omega being “bigger” than your agent, such that Omega can predict your agent but your agent cannot predict Omega. This is generally taken to be unproblematic because physical agents are always bounded by time and memory constraints, which makes them analogous to finite-state machines rather than full Turing machines. (If unbounded computation were permitted, on the other hand, then you would indeed need a super-Turing version of Omega to pull off the trick.)
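(As an aside, here is a crude sketch of that resource asymmetry, with entirely made-up budget numbers and a toy cost model: the point is only that a predictor allowed far more computation than the agent can simulate the agent outright, while the agent cannot afford to run the predictor inside itself.)

```python
AGENT_BUDGET = 10_000   # illustrative cap on the agent's total steps
OMEGA_BUDGET = 10**9    # Omega's much larger simulation allowance

def bounded_agent(coin, steps_left):
    # The diagonalizing agent from above, but honest about its budget:
    # simulating Omega would cost more steps than it has, so the quine-like
    # step never runs and it falls back to a default Omega can foresee.
    if steps_left >= OMEGA_BUDGET:        # can it afford to simulate Omega?
        prediction = omega(bounded_agent) # never reached under the cap
        return coin == "tails" and not prediction
    return coin == "tails"                # default: pay on tails

def omega(agent):
    # Omega predicts by simulating the agent on tails with its real budget.
    return agent("tails", AGENT_BUDGET)

# Omega's prediction matches the agent's actual behaviour on tails:
print(omega(bounded_agent), bounded_agent("tails", AGENT_BUDGET))  # True True
```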
...in which case much of game theory does not apply and the paradoxes, by and large, aren’t actually paradoxical, no?
Paradoxical decision problems are paradoxical in the colloquial sense (such as Hilbert’s hotel or Bertrand’s paradox), not the literal sense (such as “this sentence is false”). Paradoxicality is in the eye of the beholder. Some people think Newcomb’s problem is paradoxical, some don’t. I agree with you and don’t find it paradoxical.
In that case why are people spending so much effort on it[1]?
Ditto, why is there so much argumentation based around applying game theory to Newcomb’s problem[2] (or variants) when much of game theory does not apply to it?
*****
“Paradoxical decision problems are paradoxical in the colloquial sense (such as Hilbert’s hotel or Bertrand’s paradox), not the literal sense (such as ‘this sentence is false’).”
I think that there is a lot of effort being wasted due to not clearly distinguishing counterintuitive results from self-contradictory results in general, and this is a prime example.
[1] The LessWrong “Newcomb’s Problem” tag has 41 entries with a combined total of over 1,000 comments, for instance.
[2] See also any argument[3] that says that X is invalid because it provides ‘the wrong solution’ to Newcomb’s Problem.
[3] Such as “Evidential decision theories are inadequate rational decision theories. For either they provide wrong solutions to Newcomb’s problem [...]”, which is a direct quote from the overview of Newcomb’s problem linked in the description of the tag ‘Newcomb’s Problem’ on this site.