I’ve only had time to read the introduction so far, but (in case it isn’t addressed later in the paper) it seems that PrudentBot is not only “correct” to defect against CooperateBot; it should also defect against DefectBot. In fact, in a one-shot PD, it seems as if it should defect against any bot which is unable to analyze PrudentBot’s source code to see how PrudentBot will react.
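(Leaving PrudentBot’s actual proof-search machinery aside, here’s a toy Python sketch of that dominance argument, with illustrative payoff numbers of my own choosing rather than anything from the paper: if the other bot’s move can’t depend on mine, defection simply wins.)

```python
# Toy sketch only, not the paper's PrudentBot (which is defined via proof search).
# Payoffs use a standard illustrative PD ordering: T=5 > R=3 > P=1 > S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_reply(opponent_move):
    # If the opponent's move is fixed in advance (CooperateBot, DefectBot, or any
    # bot that can't read my source), I just maximize my payoff against that move.
    return max("CD", key=lambda my_move: PAYOFF[(my_move, opponent_move)])

assert best_reply("C") == "D"   # defect against CooperateBot
assert best_reply("D") == "D"   # defect against DefectBot
```

This check obviously can’t settle what to do against a bot that *can* read my source; that’s where the paper’s machinery comes in.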
It seems as if there’s an important parallel between the Iterated Prisoner’s Dilemma and the One-Shot Prisoner’s Dilemma With Access To Source Code: both versions of the PD provide a body of evidence which each side can use to try to predict the other’s behaviour. And since the PD-with-source is, according to the paper, equivalent to Newcomb’s Problem, this suggests that the Iterated PD is equivalent to a variant of Newcomb’s Problem based on reasonably available historical evidence rather than Omega-level omniscience about the other player.
This also suggests that an important dividing line between algorithms one should defect against and algorithms one should cooperate with lies somewhere around “complicated enough to take my own actions into account when deciding its own”. For PD-with-source, that means being complicated enough to analyze source code; the structure of the Iterated PD puts that line at tit-for-tat.
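(Again a rough sketch, with bot names and history format of my own invention rather than the paper’s: in the iterated game, tit-for-tat is about the simplest strategy whose next move actually depends on what I just did, whereas CooperateBot and DefectBot are oblivious to it.)

```python
# Rough sketch of that dividing line in the iterated game; the history format
# and bot names here are my own, not the paper's.
def tit_for_tat(my_history, their_history):
    # Cooperate on the first round, then copy the opponent's last move:
    # the minimal strategy whose choice depends on what I actually did.
    return "C" if not their_history else their_history[-1]

def cooperate_bot(my_history, their_history):
    return "C"   # oblivious to anything I do

def defect_bot(my_history, their_history):
    return "D"   # likewise oblivious

def play(bot_a, bot_b, rounds=5):
    # Tiny driver: each bot sees its own history first, its opponent's second.
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = bot_a(hist_a, hist_b)
        move_b = bot_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

print(play(tit_for_tat, cooperate_bot))  # settles into mutual cooperation
print(play(tit_for_tat, defect_bot))     # mutual defection after round one
```

Anything simpler than tit-for-tat in this setting (a fixed move, or a fixed schedule of moves) can’t respond to me at all, so the one-shot dominance argument above applies round by round.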
This also suggests a certain intuitive leap to me, involving species whose social interactions are complicated enough to require thinking about others’ minds (parrots, dolphins, apes): that the runaway evolutionary process that led to our own species perhaps has to do with such mind-modeling finally becoming sophisticated enough to model one’s own mind for higher-level social plots… but that’s more likely than not just some college-freshman-level “say, what if...” musing. It could just as easily be that the big step forward was minds becoming complicated enough to be less-predictable black boxes, rather than the simple, predictable “if I cheat on him and he catches me, he’ll peck me painfully” sort of call-and-response.