This policy cannot change if it observes the counterparty always defecting
If A observes the other party B defect even once after verifying that B believed A to be a copy of B (assuming scanning tech sufficient to read each other's minds reliably; for simplicity, both could run on the same computer in an open-source game theory test environment), then A can reliably conclude, and therefore must conclude, that B is not actually a copy, but a copy with some modification (such as random noise) that induces defection.
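That inference step can be sketched as a degenerate Bayesian update. This is just an illustrative sketch with hypothetical names; the key assumption is that a verified true copy defects with probability 0, so a single observed defection forces the posterior on "B is a copy" to zero regardless of the prior:

```python
def posterior_copy(prior_copy: float,
                   p_defect_given_copy: float,
                   p_defect_given_modified: float) -> float:
    """Bayes update on P(B is a true copy) after observing one defection."""
    num = p_defect_given_copy * prior_copy
    den = num + p_defect_given_modified * (1.0 - prior_copy)
    return num / den if den > 0 else 0.0

# A true copy never defects (p_defect_given_copy = 0), so even an extreme
# prior collapses to 0 after one observed defection.
print(posterior_copy(prior_copy=0.999,
                     p_defect_given_copy=0.0,
                     p_defect_given_modified=0.5))  # → 0.0
```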
What you describe can work, though the policy is now more complicated: it includes conditions under which you renege once your confidence that the counterparty isn't a cloned peer passes a certain level.
Obviously, if the peer model is “will ALWAYS cooperate,” you know the other party isn't a peer the instant they defect. The policy collapses to grim trigger, and it turns out you didn't need the peer detection at all.
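A minimal sketch of that collapse, assuming the standard C/D move encoding (function names are mine): the “peer detector” built around an always-cooperating peer model makes exactly the same move as grim trigger on every possible history, so the detection machinery adds nothing.

```python
def grim_trigger(opponent_history):
    """Cooperate until the opponent has ever defected, then defect forever."""
    return "C" if "D" not in opponent_history else "D"

def peer_detector(opponent_history):
    """Cooperate while the opponent is still consistent with being a copy
    that always cooperates; any defection proves they are not a peer."""
    is_peer = all(move == "C" for move in opponent_history)
    return "C" if is_peer else "D"

# The two policies agree on every history, so peer detection is redundant.
histories = [[], ["C"], ["C", "C"], ["C", "D"], ["D"], ["D", "C"]]
assert all(grim_trigger(h) == peer_detector(h) for h in histories)
```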
In more complex and interesting environments there is now a “defection margin.” Since you know the exact threshold at which the other parties will decide you aren't a peer, you can exploit them so long as you don't provide sufficient evidence that you are an outlaw*. (In this case, “outlaw” means “not an identical clone.”)
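A toy sketch of exploiting that margin, under assumptions of my own: the counterparty observes each of your moves before responding, and tolerates up to a publicly known number of defections (`MAX_DEFECTIONS`, a hypothetical parameter) before concluding you are an outlaw and grim-triggering. Knowing the exact threshold, the exploiter defects exactly that many times and no more:

```python
MAX_DEFECTIONS = 3  # assumed, publicly known tolerance threshold

def counterparty(your_history):
    """Grim-triggers once your observed defections exceed the threshold."""
    return "D" if your_history.count("D") > MAX_DEFECTIONS else "C"

def exploiter(your_past_moves):
    """Defects for free until one more defection would trip the trigger."""
    return "D" if your_past_moves.count("D") < MAX_DEFECTIONS else "C"

my_moves, their_moves = [], []
for _ in range(10):
    my_moves.append(exploiter(my_moves))
    their_moves.append(counterparty(my_moves))

# The exploiter grabs MAX_DEFECTIONS free defections, yet the counterparty
# never stops cooperating: no evidence past the threshold, no punishment.
assert my_moves.count("D") == MAX_DEFECTIONS
assert all(m == "C" for m in their_moves)
```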
A lot of these updateless cooperation scenarios are asynchronous: the parties are separated in time (past/future), by distance, or by firewalls. That leaves plenty of opportunity to defect and not be punished.
Real-life example: shoplift $1 less than the felony threshold. Here a felony conviction is the “grim trigger”: society will always defect against you from then on.