My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.
Why do you think most would cooperate? I would expect this demographic to do a consequentialist calculation, and find that an isolated cooperation has almost no effect on expected value, whereas an isolated defection almost quadruples expected value.
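That consequentialist calculation can be sketched with a toy raffle model. The specific numbers here are illustrative assumptions, not the survey's actual rules: suppose the pot is $1 per cooperator, one winner is drawn, and a defector's ticket carries weight w = 4 relative to a cooperator's.

```python
# Toy raffle model (illustrative assumptions, NOT the survey's actual rules):
# pot = $1 per cooperator; one winner drawn by ticket weight;
# a defector's ticket counts w times a cooperator's.
def expected_value(my_choice, others_coop, others_defect, w=4.0):
    cooperators = others_coop + (1 if my_choice == "C" else 0)
    pot = 1.0 * cooperators
    my_weight = 1.0 if my_choice == "C" else w
    total_weight = others_coop * 1.0 + others_defect * w + my_weight
    return pot * my_weight / total_weight

# With 999 other players, all cooperating:
ev_coop   = expected_value("C", 999, 0)  # pot $1000, 1/1000 ticket -> $1.00
ev_defect = expected_value("D", 999, 0)  # pot $999, 4/1003 tickets -> ~$3.98
```

Under these assumed parameters, an isolated defection roughly quadruples your own expected value while shrinking the pot only slightly, whereas an isolated cooperation changes your own expected value by almost nothing.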
I expected most of the LessWrong community to cooperate for two reasons:
I model them as altruistic, as in Kurros' comment.
I model them as one-boxing in Newcomb's problem.
One consideration I did not factor into my prediction is that, judging from the comments, many people refuse to cooperate in transferring money from CFAR/Yvain to a random community member.
You don’t think people here have a term for their survey-completing comrades in their cost function? Since I probably won’t win either way, this term dominated my own cost function, so I cooperated. An isolated defection can help only me, whereas an isolated cooperation helps everyone else and so gets a large numerical boost for that reason.
It’s true: if you’re optimizing for altruism, cooperation is clearly better.
I guess it’s not really a “dilemma” as such, since the optimal solution doesn’t depend at all on what anyone else does. If you’re trying to maximize EV, defect. If you’re trying to maximize other people’s EV, cooperate.