Well, yeah. My post only claimed that the proof transfers to the symmetric Prisoner’s Dilemma, the one with identical agents. If the other guy is not identical, but is a version of you with an innocent-looking modification, should you go ahead and cooperate anyway, even though you can’t prove you won’t be the sucker?
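For concreteness, here’s a minimal Python sketch of the identical-agents case (the name `clique_bot` and the whole setup are mine, purely for illustration, not anyone’s actual implementation):

```python
def clique_bot(my_source: str, opponent_source: str) -> str:
    """Cooperate iff the opponent is a syntactically exact copy of me.

    This is the case the symmetric proof covers: identical programs provably
    produce identical outputs, so mutual cooperation is safe. Any modification
    at all, however innocent-looking, breaks the comparison and we defect,
    since we can no longer rule out being the sucker.
    """
    return "C" if opponent_source == my_source else "D"

my_source = "def clique_bot(...): ..."  # stand-in for the agent's quined source

print(clique_bot(my_source, my_source))        # C: exact copy
print(clique_bot(my_source, my_source + " "))  # D: one innocent-looking tweak
```

The brittleness is the point: the equality test is exactly as strong as the symmetric proof, and no stronger.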
If you want different agents to cooperate, the question is how different you allow them to be. You could come up with parameterized families of agents that pairwise cooperate, and so on. But since the basic idea of UDT is essentially single-player (looking for perfect logical correlates of yourself within the world), we’re probably still missing some big new insight before UDT can solve the multiplayer problems people want it to solve. Some people think Löbian cooperation is a promising alternative, but it has problems as well.
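To make the “parameterized family” idea concrete, here’s a hedged sketch, assuming members differ only in a numeric parameter and share a recognizer for the family’s template. The names and the template format are hypothetical, just one way such a family could be cut:

```python
import re

# The family's "shape": members differ only in a numeric parameter.
FAMILY_PATTERN = re.compile(r"family_agent\(param=\d+\)")

def family_agent(my_param: int, opponent_source: str) -> str:
    """Cooperate iff the opponent matches the family template for some param.

    my_param distinguishes members but plays no role in the decision: each
    member recognizes every other as "me, up to the parameter". How much
    variation the recognizer should tolerate is exactly the open question.
    """
    return "C" if FAMILY_PATTERN.fullmatch(opponent_source) else "D"

# Any two family members cooperate with each other...
print(family_agent(1, "family_agent(param=2)"))  # C
# ...while a near-copy outside the template gets defection.
print(family_agent(1, "family_agent(param=x)"))  # D
```

Everything interesting hides in how permissive the recognizer is: too strict and you’re back to exact copies, too loose and you can be exploited.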
(I apologize in advance if this comment is off the mark. I’ve been staying away from LW and SI-related things.)