If I’d been one of the participants in Hofstadter’s original game, I’d have answered him as follows:
“I know where you’re going with this experiment: you want all of us to realize that our reasoning is roughly symmetrical, and that it’s better if we all cooperate than if we all defect. And if I were playing against a bunch of copies of myself, then I’d cooperate without hesitation.
However, if I were playing against a bunch of traditional game theorists, then the sensible thing would be to defect, since I know they’re not going to reason along these lines, and so the symmetry is broken. Even if I were playing against a bunch of people who’d cooperate because they think it’s more moral, I ought to defect (if I’m acting purely in my own self-interest), because they’re not thinking in these terms either.
So what I really need to do is make my best guess about how many of the participants are thinking in this reflexive sort of way, and how many are basing their decisions on completely different lines of thought. My choice would then in effect be choosing for that whole block of people and not for the rest, so I’d need to judge whether I (and the rest of that block) do better if we all cooperate or if we all defect. That depends on how large the block is, on how many of the others I expect to cooperate vs. defect, and on the payoff matrix.”
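To make that last step concrete, here is a back-of-the-envelope sketch in Python. The per-pair payoffs follow Hofstadter’s original setup as I understand it (mutual cooperation pays $3 each, a lone defector gets $5 while the cooperator gets $0, and mutual defection pays $1 each); the player count, the fraction of reflexive thinkers, and the base rate of cooperation are hypothetical knobs to turn.

```python
# Back-of-the-envelope EV comparison for Hofstadter's many-player PD.
# Payoffs per pair: C/C -> $3 each, D/C -> $5 for the defector and $0
# for the cooperator, D/D -> $1 each.  All probabilities below are
# illustrative guesses, not anything from the original discussion.

def expected_payoff(my_choice, n_players, p_reflexive, p_base_coop):
    """Expected payoff if my choice also 'decides' for the reflexive block.

    my_choice    -- 'C' or 'D', the choice I and the reflexive block make
    n_players    -- total number of participants, including me
    p_reflexive  -- my guess at the fraction of others reasoning reflexively
    p_base_coop  -- chance a non-reflexive player cooperates anyway
    """
    others = n_players - 1
    # A given other player cooperates if they are reflexive and copy my
    # choice, or if they are non-reflexive and cooperate at the base rate.
    p_coop = p_reflexive * (1.0 if my_choice == "C" else 0.0) \
        + (1 - p_reflexive) * p_base_coop
    if my_choice == "C":
        per_pair = 3 * p_coop                 # $3 per cooperating opponent
    else:
        per_pair = 5 * p_coop + (1 - p_coop)  # $5 vs. cooperators, $1 vs. defectors
    return others * per_pair

for p in (0.0, 0.5, 0.9):
    ev_c = expected_payoff("C", 20, p, 0.3)
    ev_d = expected_payoff("D", 20, p, 0.3)
    print(f"p_reflexive={p}: EV(cooperate)={ev_c:.1f}, EV(defect)={ev_d:.1f}")
```

In this toy parameterization, defection wins when nobody else reasons reflexively, and cooperation overtakes it once roughly 40% of the others do.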
At the time he wrote it, the correct choice would have been to defect because, as Hofstadter noted, none of his friends (brilliant as they were) followed anything like that reflexive line of thought. If the experiment were run now among a group of Less Wrong veterans, I might be convinced to cooperate.
I would advocate the opposite: imagine you have never thought about Newcomb-like scenarios before, so you also don’t know how others would act in such problems. Now you come up with this interesting line of thought about determining the others’ choices, or correlating with them. Because you are the only data point, your decision should give you a lot of evidence about what others might do, i.e. about whether they will come up with the idea at all and act in accordance with it.
Now contrast this with playing the game today. You may already have read studies showing that most philosophers endorse CDT, that most laypeople one-box in Newcomb’s problem, and that LWers tend to cooperate. If anything, your decision now gives you less information about what the others will do.
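Both claims can be illustrated with a minimal Beta-Bernoulli sketch. The uniform prior, the “one draw” framing, and the survey numbers below are my own illustration, not anything from the comments above:

```python
# Treat q, the fraction of players who hit on the reflexive idea and act
# on it, as unknown, with a Beta(alpha, beta) prior over it.

def posterior_mean(alpha, beta, cooperated):
    """Mean of q after observing one more player's choice (mine)."""
    return (alpha + cooperated) / (alpha + beta + 1)

# Hofstadter's day: no data, so a uniform Beta(1, 1) prior with mean 0.5.
# My own choice is the only data point, and it moves the estimate a lot:
print(posterior_mean(1, 1, cooperated=1))  # ~0.667 if I cooperate
print(posterior_mean(1, 1, cooperated=0))  # ~0.333 if I defect

# Today: suppose surveys already reported 30 cooperators out of 100.
# Folding that into the prior, my own choice barely moves the estimate:
print(posterior_mean(31, 71, cooperated=1))  # ~0.311 (prior mean ~0.304)
print(posterior_mean(31, 71, cooperated=0))  # ~0.301
```

With no prior data, a single choice swings the estimate from 0.5 to about 0.67 or 0.33; after a hundred survey observations, it shifts it by well under a percentage point, which is the sense in which the decision now carries less information.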