Hm, I may not quite have gotten the point across: I think you may be thinking of the argument that humans have free will, so they can’t force future versions of themselves to do something that would be against that future version’s interests given its information, but that isn’t the argument I was trying to explain. The idea I was referring to works precisely the same way with deterministic algorithms, as long as the players only get to observe each other’s actions, not each other’s source (though of course its proponents don’t think in those terms). The point is that if the other player looks at you severely, suggestively taps their baseball bat, and tells you about how they’ve beaten up people who have defected in the past, that still doesn’t mean they’re actually going to beat you up: if such threats were effective on you, then making them would be the smart thing to do even if the other player has no intention of actually beating you up (and risking jail) should you end up defecting anyway. (Compare AI-in-the-box...) (Of course, this argument only works if you’re reasonably sure that the other player is a classical game theorist; if you think you might be playing against someone who will, “irrationally”, actually punish you, like a timeless decision theorist, then you should not defect, and they won’t have to punish you...)
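To make that backward-induction step concrete, here’s a minimal sketch in Python of the one-shot threat game. The payoff numbers are made up purely for illustration; the only thing that matters is that carrying out the beating is costly to the punisher (the jail risk) once you’ve already defected:

```python
# Minimal sketch of the "empty threat" argument, with made-up payoffs.
# Game tree: you move first (cooperate/defect); if you defect, the
# punisher decides whether to carry out the threatened beating.
# Backward induction (= subgame perfection) solves the last mover first.

# (your_payoff, punisher_payoff) at each leaf of the game tree
PAYOFFS = {
    ("cooperate", None):        (0,   0),
    ("defect",    "punish"):    (-10, -3),  # beating you also risks jail
    ("defect",    "let_it_go"): (1,   -2),
}

def punisher_best_reply():
    # At the punishment node the threat is sunk: the punisher just
    # compares their own payoffs from punishing vs. not punishing.
    return max(["punish", "let_it_go"],
               key=lambda a: PAYOFFS[("defect", a)][1])

def your_best_move():
    # You anticipate the punisher's best reply, not their threat.
    reply = punisher_best_reply()
    defect_payoff = PAYOFFS[("defect", reply)][0]
    cooperate_payoff = PAYOFFS[("cooperate", None)][0]
    return "defect" if defect_payoff > cooperate_payoff else "cooperate"

print(punisher_best_reply())  # let_it_go: punishing only adds jail risk
print(your_best_move())       # defect: the threat is not credible
```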
Now, if you had actual information about what this player had done in similar situations in the past, like police reports of beaten-up defectors, this argument wouldn’t work; but then (the standard argument continues) you have the wrong game-theoretical model. The correct model includes all of the punisher’s previous interactions, and in that game, it might well be an SPE to punish. (Though only if the exact number of “rounds” is not certain, for the same reason as in the finitely iterated Prisoner’s Dilemma: in the last round the punisher has no reason left to punish, because there are no future targets to impress, so you defect no matter what they did in previous rounds; so they have no reason to punish in the second-to-last round either, and so on.)
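Here’s a toy version of that unraveling argument, again with made-up numbers: punishing now only pays off through deterrence in future rounds, and deterrence only works if the future threat is itself credible, which is exactly what fails in the last round.

```python
# Sketch of the unraveling argument for a known, finite number of rounds.
# Assumed (invented) numbers: punishing costs the punisher 1; each future
# round of successful deterrence is worth 2 to the punisher.

def punishment_credible(rounds_left: int) -> bool:
    """Would a classical game theorist punish with this many rounds left?"""
    if rounds_left == 0:
        return False  # last round: no future targets left to impress
    # Punishing now only deters future targets if the *future* threat is
    # itself credible; otherwise they see through it and it deters nothing.
    deterrence_value = 2 if punishment_credible(rounds_left - 1) else 0
    return deterrence_value > 1  # is deterrence worth the cost of punishing?

for n in [0, 1, 2, 10]:
    print(n, punishment_credible(n))  # False for every n: it unravels
```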
I think you may be thinking of the argument that humans have free will, so they can’t force future versions of themselves to do something that would be against that future version’s interests given its information
That is not what I was thinking of. Here, let me re-quote the whole sentence:
The classical game theorist assumes you can’t look into people’s heads, so whatever you say or do before the cheating, you’re always free to not punish during the punishment round
The funny implication here is that if someone did look into your head, you would no longer be “free.” Like a lightswitch :P And then if they erased their memory of what they saw, you’re free again. Freedom on, freedom off.
And though that is a fine notion to define, mixing it up with an algorithmic use of “freedom” seems to amount to arguing “by definition.”
Ok, sorry I misread you. “Free” was just my word rather than part of the standard explanation, so alas we don’t have anybody we can attribute that belief to :-)
(BTW: reference added to grandparent.)