You cannot do that without breaking Rice’s theorem. If you assume you can find out the answer from someone else’s source code → instant contradiction.
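For concreteness, here is a minimal sketch of that contradiction, with purely hypothetical names; the agent is passed as a Python object standing in for its source code:

```python
def contrarian(predict):
    # Ask the claimed universal predictor what we will do, then do the opposite.
    predicted = predict(contrarian)
    return "two-box" if predicted == "one-box" else "one-box"

def some_predictor(agent):
    # Stand-in for any predictor that claims to work from the agent's code alone.
    return "one-box"

# Whatever some_predictor says about contrarian, contrarian does the other thing,
# so no predictor can be right about every agent that can consult its verdict.
assert contrarian(some_predictor) != some_predictor(contrarian)
```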
You cannot work around Rice’s theorem, or around causality, by specifying 50.5% accuracy independently of the modeled system: any accuracy above 50% + epsilon can be amplified into arbitrarily good accuracy by predicting repeatedly (a standard cryptographic result), and 50% + epsilon on its own doesn’t cause the paradox.
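To illustrate the amplification claim, here is a small simulation sketch, assuming the errors of the repeated predictions are independent (which is what the cryptographic amplification argument needs); all names are made up:

```python
import random

def noisy_prediction(truth, accuracy=0.505):
    # A single prediction that matches the truth with probability `accuracy`.
    return truth if random.random() < accuracy else not truth

def amplified_prediction(truth, accuracy=0.505, repeats=100_001):
    # Majority vote over many independent noisy predictions of the same bit.
    votes = sum(noisy_prediction(truth, accuracy) for _ in range(repeats))
    return votes > repeats // 2

truth = True
correct = sum(amplified_prediction(truth) == truth for _ in range(20))
print(f"{correct}/20 amplified predictions correct")  # almost always 20/20
```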
Give me one serious mathematical model of a Newcomb-like problem in which the paradox emerges while preserving causality. Here are some examples; however you model it, you get either a trivial one-box solution, a causality break, or Omega losing.
Model 1: You decide first what you would do in every situation, Omega decides second, and you then implement your initial decision table with no option to switch. Game theory says you should implement one-boxing (see the toy payoff check below).
Model 2: Same setup, but you are allowed to switch. Game theory says you should precommit to one-boxing and then implement two-boxing; Omega loses.
Model 3: Same setup with switching allowed, but Omega always decides correctly. Then Omega bases his decision on your switch, which either turns this into Model 1 (you cannot really switch; the precommitment is binding) or breaks causality.
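A toy payoff check of the three models, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff Omega predicts one-boxing, $1,000 always in the transparent box):

```python
def payoff(prediction, action):
    # $1,000,000 in the opaque box iff Omega predicted one-boxing;
    # $1,000 always in the transparent box.
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if action == "one-box" else opaque + 1_000

# Model 1: the precommitment is binding and Omega predicts it correctly.
for commit in ("one-box", "two-box"):
    print("Model 1,", commit, "->", payoff(prediction=commit, action=commit))
# Precommitting to one-box yields 1,000,000 vs. 1,000, so one-box.

# Model 2: Omega predicts the precommitment, but switching is allowed afterwards.
print("Model 2 ->", payoff(prediction="one-box", action="two-box"))
# 1,001,000: precommit to one-box, then take both boxes; Omega loses.

# Model 3: Omega's prediction tracks the final action itself, which is just
# Model 1 again unless the switch is genuinely free, in which case the
# prediction would have to depend on a later event.
```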
Rice’s theorem says you can’t predict every possible algorithm in general, but plenty of particular algorithms are predictable. If you’re running on a classical computer and Omega has a copy of you, you are perfectly predictable.
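A minimal sketch of that point, where the agent is an ordinary deterministic function and Omega’s "copy" is literally the same function (names hypothetical):

```python
def my_decision(situation):
    # Some fixed, deterministic decision procedure (purely hypothetical).
    return "one-box" if situation == "newcomb" else "two-box"

def omega_predict(copy_of_agent, situation):
    # No undecidable problem is solved here: Omega simply runs its copy
    # on the same situation the real agent will face.
    return copy_of_agent(situation)

assert omega_predict(my_decision, "newcomb") == my_decision("newcomb")
```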
And all of your choices are just as real as they ever were, see the OB sequence on free will (I think someone referred to it already).
And the argument that Omega only needs predictive power of 50.5% to cause the paradox works only if that power holds against ANY arbitrary algorithm. Having that power against any arbitrary algorithm breaks Rice’s theorem; having it (or even 100% accuracy) against just a limited subset of algorithms doesn’t cause the paradox.
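The distinction can be made concrete: a predictor that is perfect on a limited class of agents is trivially possible, and the self-referential agent sketched earlier simply isn’t in its domain, so no contradiction arises (hypothetical names again):

```python
SIMPLE_AGENTS = {
    "always_one_box": lambda: "one-box",
    "always_two_box": lambda: "two-box",
}

def restricted_predict(agent_name):
    # 100% accurate, but only defined on the two agents above; it claims
    # nothing about arbitrary algorithms, so Rice's theorem is untouched.
    return SIMPLE_AGENTS[agent_name]()

assert all(restricted_predict(name) == agent()
           for name, agent in SIMPLE_AGENTS.items())
```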
If you take the strict decision-tree precommitment interpretation, then you fix causality: you decide first, Omega decides second, game theory says one-box, problem solved.
Decision-tree precommitment is never a problem in game theory, because precommitting to the entire tree commutes with the decisions of other agents:
A decides what f(X) and f(Y) to do if B does X or Y. B does X. A does f(X).
B does X. A decides what f(X) and f(Y) to do if B does X or Y. A does f(X).
These two orderings are identical, because B cannot make his move depend on f. So the problem of changing your mind never occurs.
With Omega:
A decides what f(X) and f(Y) to do if B does X or Y. B does X. A does f(X). Here B can make his answer depend on f.
B does X. A decides what f(X) and f(Y) to do if B does X or Y. A does f(X). This ordering is somehow not allowed any more.
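The asymmetry is easy to see in a toy sketch: with an ordinary opponent the order of precommitment and move doesn’t matter, while Omega’s move is computed from f itself, so the second ordering describes a different game (all names hypothetical):

```python
def play_ordinary(f, b_move):
    # Whether A fixes the table f before or after B moves is irrelevant here,
    # because b_move was chosen without looking at f.
    return b_move, f(b_move)

def play_against_omega(f):
    # Omega's "move" is computed from f, so it cannot exist before f does,
    # and "B moves first, A picks f afterwards" is no longer the same game.
    b_move = "X" if f("X") == "one-box" else "Y"
    return b_move, f(b_move)

one_boxer = lambda b_move: "one-box"
print(play_ordinary(one_boxer, "X"))      # same outcome under either ordering
print(play_against_omega(one_boxer))      # B's move already depends on f
```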
I don’t think the paradox exists in any plausible mathematization of the problem. It looks to me like another of those philosophical problems that exist because of the sloppiness of natural language and very little more, and I’m just surprised that the OB/LW crowd cares about this one and not about others. OK, I admit I really enjoyed it the first time I saw it, but just as something fun, nothing more than that.
I don’t know why nobody mentioned this at the time, but that’s hardly an unpopular view around here (as I’m sure you’ve noticed by now).
The interesting thing about Newcomb had nothing to do with thinking it was a genuine paradox—just counterintuitive for some.