What some comments are missing is that not only is UDT a perfect predictor; apparently the human is, too. The human fully and correctly predicts UDT's response.
A doesn’t have perfect predictive accuracy. A merely knows that B has perfect predictive accuracy. If A were pitted against a different agent with sufficiently small predictive accuracy, then A could not predict that agent’s actions well enough to cause outcomes like the one in this problem.
Consider a variant in which A is replaced by DefectBot. It seems rational for UDT to cooperate: the parameters of the decision problem are not conditional on UDT’s own decision algorithm (every agent in this scenario must choose between payoffs of $0 and $1), and cooperating maximizes expected utility. But what we have just described is a very different game. DefectBot always defects, whereas, AFAICT, A behaves precisely like an agent that, facing an arbitrary agent C, cooperates if P(C predicts A defects | A defects) is less than 0.5, is indifferent if that probability equals 0.5, and defects if it is greater than 0.5.
Suppose that C’s predictive accuracy p is greater than 50 percent. Then the expected utility of A defecting is 2p + 0(1 - p) = 2p, and the expected utility of A cooperating is 1p + 1(1 - p) = 1; since 2p > 1 whenever p > 0.5, defecting wins. Plug in numbers if you need to. There are similar proofs that if C’s predictions are random, then A is indifferent, and if C’s predictive accuracy is less than 50 percent, then A cooperates.
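The comparison can be sketched numerically. This is a minimal sketch, not a claim about the full game: the payoff values 2, 0, and 1 are read off the expected-utility expressions above, and p stands for C's predictive accuracy.

```python
# Expected utilities for A against a predictor C with accuracy p.
# Payoffs follow the expressions above (an assumption about the game):
# defecting pays 2 when C predicts the defection (probability p), 0 otherwise;
# cooperating pays 1 regardless of C's prediction.

def eu_defect(p):
    return 2 * p + 0 * (1 - p)

def eu_cooperate(p):
    return 1 * p + 1 * (1 - p)

for p in (0.25, 0.50, 0.75):
    d, c = eu_defect(p), eu_cooperate(p)
    verdict = "defect" if d > c else ("cooperate" if c > d else "indifferent")
    print(f"p={p:.2f}  EU(defect)={d:.2f}  EU(cooperate)={c:.2f}  -> {verdict}")
```

Running this prints "cooperate" at p = 0.25, "indifferent" at p = 0.50, and "defect" at p = 0.75, matching the three cases claimed above.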
If we played an iterated variant of this game, then the average payoff to A would almost surely exceed the average payoff to B. The important point is that in our game, UDT seems to be penalized for its predictive accuracy when it plays against agents like A, despite dominating the decision theories that ‘win’ on this problem across other problem classes.
Described this way, the problem makes me very interested to see it examined in the modal agents framework. I should flag that I lack a technical understanding of this area, but it seems we could model the agents as formal systems, with B stronger than A: A forces B to prove that A defects by making “A defects” provable in A, and since B is stronger than A, “A defects” is then provable in B as well.
Take away the ability to predict from the human (e.g. by adding randomness to UDT decisions) and see if it’s still optimal to put 9 down.
I’m not sure what precisely you mean by “add randomness”, but if you mean “give UDT less than perfect predictive accuracy,” then as I have shown above, and as in Newcomb’s problem, there are variants of this game in which UDT has predictive accuracy greater than 50% but less than 100% and in which the same outcome obtains. Any other interpretation of “add randomness” that I can think of simply results in an agent that we call a UDT agent but that is not one.