Adding communication to the mix creates a non-zero chance you can convince your opponent to cooperate—which increases the utility of defecting.
There is a question of what will actually happen, but also more relevant questions of what will happen if you do X, for various values of X. Convincing the opponent to cooperate unconditionally is one thing; it is not the same as convincing your opponent to cooperate if you cooperate.
the case of convincing your opponent to cooperate if you cooperate.
Determine what kinds of control influence your opponent, appear to also be influenced by the same, and then defect when they think you are forced into cooperating because they are forced into cooperating?
Is that a legitimate strategy, or am I misunderstanding what you mean by convincing your opponent to cooperate if you cooperate?
Determine what kinds of control influence your opponent, appear to also be influenced by the same, and then defect when they think you are forced into cooperating because they are forced into cooperating?
Couldn’t parse.

[W]hat [do] you mean by convincing your opponent to cooperate if you cooperate?
It’s not in general possible to predict what you’ll actually do, since if it were possible, you could take such predictions into consideration in deciding what to do; in particular, you could decide differently as a result, invalidating the “prediction”. Similarly, it’s not in general possible to predict what will actually happen without assuming what you’ll decide first. It’s better to ask what is likely to happen if you decide X than to ask just what is likely to happen. It’s more useful too, since it gives you information about (acausal) consequences of your actions that can be used as a basis for making decisions.
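The unpredictability of your own decision can be illustrated with a toy diagonalization sketch (hypothetical, not from the comment): any agent that is free to consult a prediction of its own action can simply contradict it, so no such prediction can be guaranteed correct.

```python
# Hypothetical sketch: an agent that consults a "prediction" of its own
# decision and then does the opposite. Whatever the predictor outputs,
# the agent's actual choice differs, so the prediction can influence
# the decision but cannot be reliably correct.

def agent(predicted_action):
    """Decide by contradicting whatever was predicted."""
    return "defect" if predicted_action == "cooperate" else "cooperate"

# No candidate prediction survives contact with the agent.
for prediction in ("cooperate", "defect"):
    assert agent(prediction) != prediction
```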
In the case of Prisoner’s Dilemma, it’s not very helpful to ask, what will your opponent do. What your opponent will do generally depends on what you’ll do, and assuming that it doesn’t is a mistake that leads to the classical conclusion that defecting is always the better option (falsified by the case of identical players that always make the same decision, with cooperation the better one). If you ask instead, what will your opponent do (1) if you cooperate, and (2) if you defect, that can sometimes give you interesting answers, such that cooperating suddenly becomes the better option.

When you talk to the opponent with the intention of “convincing” them, again you are affecting both predictions about what they’ll do, on both sides of your possible decision, and not just the monolithic prediction of what they’ll do unconditionally. In particular, you might want to influence the probability of your opponent cooperating with you if you cooperate, without similarly affecting the probability of your opponent cooperating with you if you defect. If you affect both probabilities in the same way, then you are correct, such influence makes the decision of defecting more profitable than before. But if you affect these probabilities to a different degree, then it might turn out that the opposite is true, that the influence in question makes cooperating more profitable.
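The comparison described above can be made concrete with a minimal sketch. The payoff numbers (T=5, R=3, P=1, S=0) are illustrative, not from the comment; the point is that when the two conditional probabilities move together, defection keeps its edge, but when influence raises P(they cooperate | you cooperate) much more than P(they cooperate | you defect), cooperation wins.

```python
# Expected payoff under the two conditional probabilities of the
# opponent cooperating: one for each of your possible decisions.
# Payoffs: T (temptation) > R (reward) > P (punishment) > S (sucker).

def expected_utility(action, p_coop_if_c, p_coop_if_d,
                     T=5, R=3, P=1, S=0):
    if action == "cooperate":
        p = p_coop_if_c          # P(opponent cooperates | I cooperate)
        return p * R + (1 - p) * S
    else:
        p = p_coop_if_d          # P(opponent cooperates | I defect)
        return p * T + (1 - p) * P

# Case 1: influence moves both conditionals equally -> defection wins.
eu_c = expected_utility("cooperate", 0.8, 0.8)   # 0.8*3 = 2.4
eu_d = expected_utility("defect",    0.8, 0.8)   # 0.8*5 + 0.2*1 = 4.2

# Case 2: influence raises the "if you cooperate" conditional far more
# than the "if you defect" one -> cooperation wins.
eu_c2 = expected_utility("cooperate", 0.9, 0.1)  # 0.9*3 = 2.7
eu_d2 = expected_utility("defect",    0.9, 0.1)  # 0.1*5 + 0.9*1 = 1.4
```

In case 1 the influence only makes defecting more profitable, exactly as the parent comment worried; in case 2 the asymmetric influence flips the comparison.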
Ah, I see! I have been butting my head against various ideas that lead to cooperating in one-shot PDs and the like without making any progress; it was because, while I had the idea of splitting my actions into groups conditional on the opponent’s action, I didn’t have the concept of doing the same for my opponent.
With that in mind, I can no longer parse my previous comment either. I think I meant that I would increase their probability of cooperating if I cooperated, and have them increase my probability of cooperating if they cooperated (thus decreasing both of our probabilities of defecting if the other cooperates), and then when the probabilities have moved far enough to tell us both to cooperate, I would defect, knowing that I would score a defect-against-cooperate. But yeah, it doesn’t make any sense at all, because the probabilities tell us both to cooperate.
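Why the plan collapses can be seen under the identical-players assumption mentioned earlier: if both players run the same decision rule, then “look cooperative, then defect” is just the rule “defect”, and both players end up running it. A hypothetical sketch (payoff numbers 5, 3, 1, 0 are illustrative):

```python
# Sketch: identical players always make the same decision, so only the
# (C, C) and (D, D) outcomes are reachable. A plan to defect at the
# last moment is a plan your twin also executes.

def decide(rule):
    my_move = rule()
    their_move = rule()   # identical player, identical decision
    payoffs = {("C", "C"): 3, ("D", "D"): 1,
               ("C", "D"): 0, ("D", "C"): 5}
    return payoffs[(my_move, their_move)]

# "Pretend to cooperate, then defect" collapses to mutual defection.
assert decide(lambda: "D") == 1
assert decide(lambda: "C") == 3
```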
Thanks for taking the time to explain this concept to me.
(Note that the probability of you making a given decision is not knowable when you are considering it yourself while allowing this consideration to influence the decision.)