What some comments are missing is that not only is UDT a perfect predictor; apparently the human is, too. The human fully and correctly predicts UDT's response.
Take away the human's ability to predict (e.g. by adding randomness to UDT's decisions) and see if it's still optimal to put 9 down.
A doesn't have perfect predictive accuracy; A merely knows that B has perfect predictive accuracy. If A is pitted against a different agent whose predictive accuracy is sufficiently low, then A cannot predict that agent's actions well enough to bring about outcomes like the one in this problem.
Consider a variant in which A is replaced by DefectBot. It seems rational for UDT to cooperate: the parameters of the decision problem don't depend on UDT's own decision algorithm (every agent in this scenario is choosing between payoffs of $0 and $1), and cooperating maximizes expected utility. But what we have just described is a very different game. DefectBot always defects, whereas, AFAICT, it can be shown that A behaves as follows against an arbitrary agent C: A cooperates if P(C predicts A defects | A defects) is less than 0.5, is indifferent if that probability equals 0.5, and defects if it is greater than 0.5.
Suppose that C's predictive accuracy p is greater than 50 percent. Then the expected utility to A of defecting is 2p + 0(1 - p) = 2p, the expected utility of cooperating is 1p + 1(1 - p) = 1, and since 2p > 1, defection has greater expected utility than cooperation. Plug in numbers if you need to. There are similar proofs that if C's predictions are random, then A is indifferent, and if C's predictive accuracy is less than 50 percent, then A cooperates.
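To make the arithmetic concrete, here is a minimal sketch of the calculation above (the payoff values are the ones used in this comment; the function name and the particular values of p are mine):

```python
def expected_utilities(p):
    """Expected payoff to A against a predictor C with accuracy p.

    Payoffs to A, as in the calculation above:
      A defects  & C predicts correctly   -> 2  (C gives way)
      A defects  & C predicts incorrectly -> 0  (both defect)
      A cooperates (either prediction)    -> 1
    """
    eu_defect = 2 * p + 0 * (1 - p)
    eu_cooperate = 1 * p + 1 * (1 - p)
    return eu_defect, eu_cooperate

for p in (0.3, 0.5, 0.9):
    d, c = expected_utilities(p)
    print(f"p = {p}: EU(defect) = {d:.2f}, EU(cooperate) = {c:.2f}")
# p = 0.3: 0.60 < 1.00 -> A cooperates
# p = 0.5: 1.00 = 1.00 -> A is indifferent
# p = 0.9: 1.80 > 1.00 -> A defects
```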
If we played an iterated variant of this game, A's cumulative payoff would almost surely come to exceed B's. The important thing is that in our game, UDT seems to be penalized for its predictive accuracy when it plays against agents like A, despite dominating, on other problem classes, the decision theories that 'win' on this problem.
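As a minimal sketch of that claim, here is a simulation under my own reconstruction of the game: each player demands 2 ("defect") or 1 ("cooperate"), demands are honored if they total at most 3 and otherwise both get 0, B predicts A with accuracy q > 0.5 and best-responds to its prediction, and A defects every round (as argued above). The payoff matrix, the parameter q, and B's best-response rule are my assumptions.

```python
import random

# Payoff matrix (my reconstruction from the numbers used above):
# demand 2 ("defect") or demand 1 ("cooperate"); demands are honored
# if they total at most 3, otherwise both players get 0.
PAYOFFS = {
    ("defect", "defect"): (0, 0),
    ("defect", "cooperate"): (2, 1),
    ("cooperate", "defect"): (1, 2),
    ("cooperate", "cooperate"): (1, 1),
}

def play_iterated(rounds=100_000, q=0.9, seed=0):
    """A defects every round; B predicts A's move with accuracy q and
    best-responds to its own prediction. Returns average payoffs (A, B)."""
    rng = random.Random(seed)
    total_a = total_b = 0
    for _ in range(rounds):
        a_move = "defect"  # optimal for A whenever q > 0.5, per the argument above
        # B's prediction is correct with probability q (A in fact defects)
        prediction = "defect" if rng.random() < q else "cooperate"
        # B best-responds to its prediction: give way to a defector, exploit a cooperator
        b_move = "cooperate" if prediction == "defect" else "defect"
        pa, pb = PAYOFFS[(a_move, b_move)]
        total_a += pa
        total_b += pb
    return total_a / rounds, total_b / rounds

print(play_iterated())  # roughly (2*q, q), e.g. (1.8, 0.9): A pulls ahead of B
```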
Described this way, the problem makes me very interested to see it examined in the modal agents framework. I have to flag that I lack a technical understanding of this sort of thing, but it seems like we can imagine the agents as formal systems, with B stronger than A, and A forcing B to prove that A defects by making it provable in A that A defects; since B is stronger than A, it is then also provable in B that A defects.
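A hedged formalization of that chain, treating each agent's reasoning as a formal theory (the notation T_A, T_B, D, C is my own gloss on the modal-agents framing, not anything established in this thread):

```latex
% T_A, T_B: the theories in which A and B reason; D = defect, C = cooperate.
% If A's own theory proves that A defects, and B's theory extends A's,
% then B's theory proves it too; a B that cooperates whenever it proves
% its opponent defects therefore cooperates against A.
T_A \vdash (A = D), \quad T_B \supseteq T_A
  \;\Longrightarrow\; T_B \vdash (A = D)
  \;\Longrightarrow\; B = C.
```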
I’m not sure what precisely you mean by “add randomness”, but if you mean “give UDT less than perfect predictive accuracy,” then as I have shown above, and as in Newcomb’s problem, there are variants of this game in which UDT has predictive accuracy greater than 50% but less than 100% and in which the same outcome obtains. Any other interpretation of “add randomness” that I can think of simply results in an agent that we call a UDT agent but that is not one.
You know, let’s back up. I’m confused.
What is the framework, the context in which we are examining these problems? What is the actual question we’re trying to answer?
In your setup it does. It is making accurate predictions, isn't it? Always?
Say that agent A is zonkerly predictive and agent B is pleglishly predictive. A's knowledge of B's predictive accuracy lets A deduce that B will cooperate if A defects; B can predict every action that A will take. It's the difference between reasoning abstractly about how a program must behave, given what you already know about how it works, and actually running the program.
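A toy sketch of that difference, with all names and the best-response rule being my own illustration rather than anything from the thread:

```python
def predict_by_simulation(opponent_program, my_move):
    """B-style ("pleglishly") prediction: actually run the opponent's code."""
    return opponent_program(my_move)

def predict_by_deduction(opponent_accuracy, my_move):
    """A-style ("zonkerly") prediction: never run the opponent; deduce its move
    from the known fact that it predicts accurately and best-responds.
    If I defect, an accurate best-responder must give way (1 beats 0)."""
    assert opponent_accuracy > 0.5
    return "cooperate" if my_move == "defect" else "defect"
```

The first needs the opponent's source and enough compute to run it; the second needs only the single fact about the opponent's accuracy, which is all A has.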
As long as you are always making accurate predictions, does the distinction matter?
Yes, you can make the distinction mathematically precise, as I did in this post (which is the “Slepnev 2011” reference in the OP).
Yes, I understand that, but my question is why does the distinction matter in this context?
Not sure I understand your question… It’s provable that the agents behave differently, so there you have a mathy explanation. As for non-mathy explanations, I think the best one is Gary’s original description of the ASP problem.
Do you understand the statement and proof in the 2nd half of the post?