Meta-game the PD so that each player is allowed a peek at both players' answers before they are finalized and is given the option to stay or switch. If either player switches, repeat the process. Once both players stay, input their decisions and dole out rewards.
The scenario would look like this:
Player 1 chooses C
Player 2 chooses D
During the peek, Player 1 notices he will lose, so he switches to D.
During the next peek, both players are satisfied and stay at D.
Result: D, D
OR
Player 1 chooses C
Player 2 chooses C
During the peek, neither player switches
Result: C, C
OR
Player 1 chooses C
Player 2 chooses C
During the peek, Player 2 switches to D
During the next peek, Player 1 switches to D
During the third peek, neither player switches
Result: D, D
The caveat is that this requires a trustworthy system for holding each player's answer, one where both players understand the rules for how answers are finally submitted and how they may be changed. Under such a system it is perfectly logical to choose Cooperate, because there is no risk of losing to a Defector.
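Here is a minimal Python sketch of that peek-and-switch loop, assuming the standard PD payoffs (T=5, R=3, P=1, S=0) and players who follow the switching rule used in the scenarios above: stay with your current answer unless the peek shows you are about to be the lone Cooperator. The payoff numbers and function names are just illustrative.

```python
PAYOFFS = {  # (my choice, their choice) -> my payoff, standard PD ordering
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def defensive_switch(mine, theirs):
    """Switch to D only when staying at C would mean losing to a Defector."""
    return "D" if (mine, theirs) == ("C", "D") else mine

def peek_and_switch(choice1, choice2):
    """Repeat peeks until both players stay, then dole out the rewards."""
    while True:
        new1 = defensive_switch(choice1, choice2)  # Player 1 peeks at Player 2
        new2 = defensive_switch(choice2, choice1)  # Player 2 peeks at Player 1
        if (new1, new2) == (choice1, choice2):
            break                                  # both stay: answers are finalized
        choice1, choice2 = new1, new2              # someone switched: repeat the process
    return (choice1, choice2), (PAYOFFS[(choice1, choice2)],
                                PAYOFFS[(choice2, choice1)])

print(peek_and_switch("C", "D"))  # (('D', 'D'), (1, 1)) -- first scenario (and the third, once Player 2 has switched)
print(peek_and_switch("C", "C"))  # (('C', 'C'), (3, 3)) -- second scenario
```

Whatever either player opens with, a lone Defector can never lock in the temptation payoff under this rule, which is why Cooperate is safe here.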
This directly answers your question of how players can “convince each other that Player1.C ⇔ Player2.C.”
I do not understand how your example of an SI's claim on its source code constitutes a Prisoner's Dilemma.
It answers that question by replacing the Prisoner’s Dilemma with an entirely different game, which doesn’t have the awkward feature that makes the Prisoner’s Dilemma interesting.
Wei_Dai (if I’ve understood him right) is not claiming that an SI’s claim on its source code constitutes a PD, but that one obvious (but inconvenient) way for it to arrange for mutual cooperation in a PD is to demonstrate that its behaviour satisfies the condition “I’ll cooperate iff you do”, which requires some sort of way for it to specify what it does and prove it.
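As a toy illustration of how a program could make "I'll cooperate iff you do" checkable (a sketch of the crudest version, not Wei_Dai's actual proposal): publish your source code and cooperate exactly when the opponent's source is identical to your own. Anything realistic needs something much stronger, such as a proof about the opponent's behaviour, since this version defects against any cooperator whose code differs even superficially.

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is running this exact program."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

# Two copies of the same program verifiably cooperate with each other,
# because each can check the other's published source against its own:
print(clique_bot(inspect.getsource(clique_bot)))        # 'C'
print(clique_bot("def always_defect(_): return 'D'"))   # 'D'
```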
Off-topic: is there a way to force a line break within a comment?
EDIT: Yeah, that worked. Two spaces at the end of the line. Thanks.
I think you put two spaces at the end of each line. Cthulhu knows why.