If bad intent were really so rare in the relevant sense, it would be surprising that people are so quick to jump to the conclusion that it is present. Why would that be adaptive?
Human punishment of free riders helps ensure there are few free riders. Our fear and surprise responses are wildly oversensitive because of the asymmetric consequences of type I vs. type II errors. Etc.
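The error-asymmetry point can be made concrete with back-of-the-envelope arithmetic. All numbers below are made-up illustrations, not empirical estimates:

```python
# Illustrative expected-cost comparison for a threat detector.
# A false alarm (type I error) costs a brief, wasted startle;
# a miss (type II error) could be fatal. With even a tiny base
# rate of real threats, the oversensitive policy wins on
# expected cost.
p_threat = 0.001          # assumed probability a rustle is a predator
cost_false_alarm = 1.0    # assumed cost of a wasted startle
cost_miss = 10_000.0      # assumed cost of failing to react to a real threat

# "Jumpy" policy: always startle; pays the false-alarm cost
# whenever there is no threat.
jumpy = (1 - p_threat) * cost_false_alarm

# "Calm" policy: never startle; pays the miss cost whenever the
# threat is real.
calm = p_threat * cost_miss

print(jumpy < calm)  # the jumpy policy has lower expected cost
```

Under these (made-up) numbers, the jumpy detector's expected cost is about 1 while the calm detector's is 10, so oversensitivity is the cheaper policy by an order of magnitude.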
Evolution, too, is into massive A/B testing with no optimisation target that includes truth.
That seems plausible, and suggests that the low rate of free riders is causally related to our readiness to call out suspected ones.
This suggests that the right thing to do is to reduce the cost, rather than the rate, of false positives. And surely not to demolish this Chesterton's Fence without a good replacement fix for the underlying problem.
This suggests it’s more useful to compare human groups and see how they manage the problem, rather than trying to parse the ins and outs of evolutionary psychology.
It goes up at least one important meta level: IIRC, the fraction of the community willing to take on the (potentially high, in ambiguous cases) cost of punishing free riders has threshold effects that determine which attractor you sort into. Part of my S1 sense that EA will not be able to accomplish much good on an absolute scale (even if much good is done at the margin) is that it does not cross this threshold.
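The threshold/attractor dynamic described here can be sketched with a toy replicator-style model. This is a hypothetical illustration; the payoff structure and the `fine` and `contribution` values are my own assumptions, not anything from the thread:

```python
def final_cooperation(punisher_frac, contribution=1.0, fine=2.0,
                      x0=0.5, steps=500, lr=0.5):
    """Toy replicator dynamics for cooperation with peer punishment.

    x is the fraction of cooperators. A cooperator pays `contribution`;
    a defector instead faces an expected fine of punisher_frac * fine.
    (The shared public-good benefit accrues to both types equally, so
    it cancels out of the payoff difference.) Cooperation grows when
    its payoff gap over defection is positive, i.e. when
    punisher_frac * fine exceeds contribution -- the threshold effect
    the comment describes.
    """
    x = x0
    for _ in range(steps):
        gap = punisher_frac * fine - contribution  # payoff_C - payoff_D
        x += lr * x * (1 - x) * gap                # replicator-style update
        x = min(max(x, 0.0), 1.0)                  # keep x a valid fraction
    return x

# With these parameters the threshold sits at punisher_frac = 0.5:
print(final_cooperation(0.6))  # above threshold: sorts toward full cooperation
print(final_cooperation(0.4))  # below threshold: cooperation collapses
```

A small change in the punisher fraction across the threshold flips which attractor the population ends up in, which is the sense in which marginal willingness to punish can matter discontinuously.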
Agreed.