I’m not sure it’s really any more counterintuitive that a (known) ability to predict can be a disadvantage than that (known) possession of more options can be a disadvantage.
I wonder if it might be fruitful to think generally about decision theories in terms of their ability to rule out suboptimal decisions, as opposed to their ability to select the optimal decision.
I also wanted to share something I wrote, below:
When described in this way, I am reminded that I would be very interested to see this sort of problem examined in the modal agents framework. I should flag that I lack a technical understanding of this area, but it seems as though we can model the agents as formal systems, with B stronger than A. A then forces B to prove that A defects by making "A defects" provable in A, and since B is stronger than A, "A defects" is provable in B as well.
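To make that picture slightly more concrete, here is a very rough toy sketch, not the modal agents framework itself: I've crudely replaced "provable in system X" with "X can determine A's action within its simulation budget", and "B is stronger than A" with "B has a larger budget". All of the names, budgets, and the bounded-simulation stand-in are my own illustration.

```python
# A very rough toy, NOT the modal agents framework itself: "system X proves
# that A defects" is crudely replaced by "X can determine A's action by
# simulating it within X's step budget", and "B is stronger than A" just
# means B's budget is larger. All names and numbers here are made up.

def agent_A():
    # A defects unconditionally, which makes "A defects" as easy to
    # establish as possible.
    return "defect"

def can_determine_action(agent, budget, cost_to_simulate):
    """Stand-in for provability: can this system settle the agent's action?"""
    if budget >= cost_to_simulate:
        return agent()
    return None  # "no proof found at this strength"

COST_TO_SIMULATE_A = 1  # A is very simple
A_BUDGET = 10           # A's own (weaker) system
B_BUDGET = 1000         # B's (stronger) system

print("Provable in A that A defects:",
      can_determine_action(agent_A, A_BUDGET, COST_TO_SIMULATE_A) == "defect")
print("Provable in B that A defects:",
      can_determine_action(agent_A, B_BUDGET, COST_TO_SIMULATE_A) == "defect")
```

The toy is only meant to gesture at the point that if A's action is simple enough for A's own system to settle, then any strictly stronger system settles it too.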
Also, there are variants with imperfect predictors:
It can be shown that A behaves precisely like an agent that cooperates with an arbitrary agent C when P(C predicts A defects | A defects) is less than 0.5, is indifferent when that probability equals 0.5, and defects when it is greater than 0.5.
Suppose that B’s predictive accuracy is some p greater than 50 percent. Then the expected utility of A defecting is 2p + 0(1 - p) = 2p, the expected utility of A cooperating is 1p + 1(1 - p) = 1, and since 2p > 1 whenever p > 0.5, the expected utility of defection is greater than that of cooperation. Plug in numbers if you need to. There are similar proofs that if B’s predictions are random, then A is indifferent, and that if B’s predictive accuracy is less than 50 percent, then A cooperates.
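If it helps to actually plug in numbers, here is a minimal sketch of that calculation; the 2/1/0 payoffs and the accuracy p are the ones used above, and the particular values of p are just illustrative:

```python
# Expected utilities for A as a function of B's predictive accuracy p,
# using the payoffs from the argument above: 2 if A defects and B predicts
# defection, 0 if A defects and B predicts cooperation, and 1 either way
# if A cooperates.

def eu_defect(p):
    return 2 * p + 0 * (1 - p)

def eu_cooperate(p):
    return 1 * p + 1 * (1 - p)

for p in (0.3, 0.5, 0.7, 0.9):
    d, c = eu_defect(p), eu_cooperate(p)
    verdict = "defect" if d > c else "cooperate" if c > d else "indifferent"
    print(f"p = {p}: EU(defect) = {d:.1f}, EU(cooperate) = {c:.1f} -> {verdict}")
```

Running it shows the crossover at exactly p = 0.5, matching the claim above.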