In fact, Newcomb-like problems fall naturally out of any ability to simulate and predict the actions of other agents. Omega as described is essentially the limit as predictive power goes to infinity.
This gives me the intuition that trying to decide whether to one-box or two-box on Newcomb's problem is like trying to decide what 0^0 is: you get your intuition by following a limit process, but that limit process produces different results depending on the path you take.
It would be interesting to look at finitely good predictors. Perhaps we can find something analogous to the result that lim_{(x, y) → (0, 0)} x^y is path-dependent.
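To make the analogy explicit, here is the standard two-path computation for that limit (a textbook calculus fact, spelled out here for illustration):

    \[
      \lim_{x \to 0^{+}} x^{0} = 1 \quad \text{(along the $x$-axis)},
      \qquad
      \lim_{y \to 0^{+}} 0^{y} = 0 \quad \text{(along the $y$-axis)},
    \]

so the two-variable limit has no single value; which answer you get depends entirely on the path of approach.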
If we define an imperfect predictor as a perfect predictor plus noise, i.e. one that produces the correct prediction with probability p regardless of the cognition algorithm it's trying to predict, then Newcomb-like problems are very robust to imperfect prediction: for any p > 0.5 there is some payoff ratio great enough to preserve the paradox, and the required ratio goes down as the prediction improves. One-boxing wins whenever p > 1/2 + a/(2b), where b is the opaque-box payoff and a is the transparent-box payoff. E.g. if the opaque box holds 100 utilons and the transparent box holds 1 utilon, the predictor only needs to be more than 50.5% accurate. So the limit in that direction favors 1-boxing.
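As a sanity check on those numbers, here is a minimal expected-value sketch, assuming the standard setup where the opaque box holds the big payoff iff one-boxing was predicted (the function names and the 100:1 payoffs are just the example above):

    # Expected utility against a "perfect predictor plus noise": correct with
    # probability p regardless of the agent's algorithm. Standard setup assumed:
    # the opaque box holds `big` utilons iff one-boxing was predicted, and the
    # transparent box always holds `small`.

    def one_box_ev(p, big=100.0, small=1.0):
        # A one-boxer gets the big payoff exactly when correctly predicted.
        return p * big

    def two_box_ev(p, big=100.0, small=1.0):
        # A two-boxer always gets the small payoff, plus the big payoff
        # whenever the predictor mistakenly expected one-boxing.
        return (1 - p) * big + small

    def break_even_accuracy(big=100.0, small=1.0):
        # one_box_ev > two_box_ev  <=>  p*big > (1-p)*big + small
        #                          <=>  p > 1/2 + small/(2*big)
        return 0.5 + small / (2 * big)

    print(break_even_accuracy())               # 0.505: just over 50.5% suffices
    print(one_box_ev(0.51), two_box_ev(0.51))  # 51.0 vs 50.0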
What other direction could there be? If the prediction accuracy depends on the algorithm-to-be-predicted (as it would in the real world), then you could try to be an algorithm that is mispredicted in your favor… but a misprediction in your favor can only occur if you actually 2-box, so it only takes a modicum of accuracy before a 1-boxer who tries to be predictable is better off than a 2-boxer who tries to be unpredictable.
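A rough sketch of that comparison, with hypothetical accuracies: suppose the predictor reads a deliberately transparent 1-boxer correctly 95% of the time, while an evasive 2-boxer manages to drive the predictor all the way down to chance:

    # Hypothetical accuracies for the two strategies; same payoff setup as above.
    def predictable_one_boxer_ev(p_read=0.95, big=100.0):
        # Read correctly 95% of the time, so the box is almost always filled.
        return p_read * big                    # 95.0

    def evasive_two_boxer_ev(p_read=0.5, big=100.0, small=1.0):
        # Mispredicted (favorably) half the time, plus the small box every time.
        return (1 - p_read) * big + small      # 51.0

The transparent 1-boxer comes out ahead whenever p_read_on_one_boxer * big > (1 - p_read_on_two_boxer) * big + small, which holds here by a wide margin (95 vs 51); even substantially worse accuracy on the 1-boxer would not change the ordering.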
I can’t see any other way for the limit to turn out.
If you have two agents each trying to precommit not to be blackmailed by the other / to precommit not to pay attention to the other's precommitment, then any attempt to take a limit of this Newcomb-like problem does depend on how you approach the limit. (I don't know how to solve this problem.)
The quantity whose limit is being taken here is unidirectional predictive power, which is loosely a function of the difference in intelligence between the two agents. Intuitively, I think a case could be made that (assuming ideal rationality) the total accuracy of mutual behavior prediction between two agents is conserved in some fashion, so that doubling the predictive power of one would unavoidably roughly halve the predictive power of the other. Omega represents an entity with a delta-g so large relative to us that predictive power is essentially completely one-sided.
On that basis, letting the unidirectional predictive power of both agents go to infinity is probably inherently ill-defined, and there's no reason to expect the problem to have a solution.