Wei Dai begins by assuming that cooperation on the Prisoner’s Dilemma is not rational, an assumption that comes from the same decision theory that two-boxes on Newcomb’s Problem.
Last I saw, you were only advocating cooperation in one-shot PD for two superintelligences that happen to know each other’s source code (http://lists.extropy.org/pipermail/extropy-chat/2008-May/043379.html). Are you now saying that human beings should also play cooperate in one-shot PD?
What goes on with humans is no proof of what goes on with rational agents. Also, truly one-shot PDs will be very rare among real humans.
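To make the source-code condition above concrete, here is a minimal sketch, not taken from the linked post: a toy one-shot PD in which each agent is shown the other's program text before moving. The names (clique_bot, defect_bot, play) and the standard payoff numbers are illustrative assumptions; the only point is that an agent which cooperates exactly when the opponent's source matches its own gets mutual cooperation against a copy of itself without being exploitable by an unconditional defector.

```python
# Sketch of source-code-conditional cooperation in a one-shot PD.
# Assumed names and payoffs; not the linked post's construction.

import inspect


def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is running this exact program."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"


def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's source."""
    return "D"


# Standard one-shot PD payoffs: (my_move, their_move) -> my payoff.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}


def play(agent_a, agent_b):
    """Run one round, giving each agent the other's source text."""
    src_a, src_b = inspect.getsource(agent_a), inspect.getsource(agent_b)
    move_a, move_b = agent_a(src_b), agent_b(src_a)
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]


if __name__ == "__main__":
    print(play(clique_bot, clique_bot))  # (3, 3): mutual cooperation
    print(play(clique_bot, defect_bot))  # (1, 1): no exploitation
```

Exact textual matching is brittle (any rewording of the program breaks it), so this is only the crudest version of the idea; the relevant feature is that conditioning one's move on the opponent's source changes what counts as the rational play in the one-shot game.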