And the argument that omega just needs predictive power of 50.5% to cause the paradox only works if that power holds against ANY arbitrary algorithm. Having that power against any arbitrary algorithm breaks Rice’s Theorem; having it (or even 100%) against just a limited subset of algorithms doesn’t cause the paradox.
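A minimal sketch of the diagonalization this points at, with names (predict, contrarian_agent) I'm making up purely for illustration, not anything from the problem statement:

```python
# Hypothetical universal predictor; assumed to exist only for the sake of the argument.
def predict(agent):
    """Supposedly returns 'one-box' or 'two-box' for ANY agent."""
    raise NotImplementedError  # placeholder: no such total predictor can exist

def contrarian_agent():
    """Consults the predictor about itself and does the opposite."""
    guess = predict(contrarian_agent)
    return "two-box" if guess == "one-box" else "one-box"

# Whatever predict() says about contrarian_agent, the agent does the other
# thing, so the predictor's accuracy on this agent is 0%, let alone 50.5%.
# Restricting omega to a limited subset of agents (ones that don't invert it)
# avoids this, which is the point above.
```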
If you take the strict decision tree precommitment interpretation, then you fix causality: you decide first, omega decides second, game theory says one-box, problem solved.
Decision tree precommitment is never a problem in game theory, as precommitment of the entire tree commutes with decisions by other agents:
A decides what f(X), f(Y) to do if B does X or Y. B does X. A does f(X)
B does X. A decides what f(X), f(Y) to do if B does X or Y. A does f(X)
are identical, as B cannot decide based on f. So the changing-your-mind problem never occurs.
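A toy sketch of that commutation, with strategies and labels I've made up for illustration:

```python
def b_strategy():
    # B cannot observe A's policy f, so B's move takes no argument.
    return "X"

def precommit_then_b_moves():
    f = {"X": "one-box", "Y": "two-box"}  # A fixes the whole tree first
    b = b_strategy()                      # then B moves
    return f[b]                           # then A plays f(B's move)

def b_moves_then_precommit():
    b = b_strategy()                      # B moves first
    f = {"X": "one-box", "Y": "two-box"}  # A fixes the same tree afterwards
    return f[b]                           # and plays f(B's move)

assert precommit_then_b_moves() == b_moves_then_precommit()
# Identical outcomes in both orderings: changing your mind never comes up.
```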
With omega:
A decides what f(X), f(Y) to do if B does X or Y. B does X. A does f(X) - B can answer depending on f
B does X. A decides what f(X), f(Y) to do if B does X or Y. A does f(X) - somehow not allowed any more
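Same made-up toy setup, but now letting B (omega) answer depending on f, which is where the two orderings stop agreeing:

```python
def omega_strategy(f=None):
    if f is not None:
        # Omega conditions its move on A's policy.
        return "X" if f["X"] == "one-box" else "Y"
    return "X"  # without access to f, fall back to a fixed move

def precommit_then_omega_moves(f):
    return f[omega_strategy(f)]   # omega sees f before moving

def omega_moves_then_precommit(f):
    return f[omega_strategy()]    # omega moved before f existed

f = {"X": "two-box", "Y": "one-box"}
print(precommit_then_omega_moves(f))   # 'one-box'
print(omega_moves_then_precommit(f))   # 'two-box'
# The orderings now give different outcomes, which is exactly the asymmetry
# the two lines above point at.
```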
I don’t think the paradox exists in any plausible mathematization of the problem. It looks to me like another of those philosophical problems that exist because of the sloppiness of natural language and very little more; I’m just surprised that the OB/LW crowd cares about this one and not about others. OK, I admit I really enjoyed it the first time I saw it, but just as something fun, nothing more than that.
I don’t know why nobody mentioned this at the time, but that’s hardly an unpopular view around here (as I’m sure you’ve noticed by now).
The interesting thing about Newcomb had nothing to do with thinking it was a genuine paradox—just counterintuitive for some.