Social feedback is an incentive, and the bigger the community gets, the more social feedback is possible.
Insofar as utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome that feedback. As the community grows, utilitarianism becomes less weird and there is more positive support, so acting on it costs less in social feedback.
This is partly good, because it makes it easier to "get into" trying to implement utilitarianism, but it is also bad, because it means newer EAs need to care about utilitarianism itself relatively less.
Saying that incentives don't matter as long as you remove social-approval-seeking seems to ignore the question of why the remaining incentives would push people towards actually trying.
It's also unclear what incentives are left holding the community together once you remove the social ones. Yes, talking to each other probably does make it easier to pursue utilitarian goals, but the accomplishment of utilitarian goals does not seem to be a sufficiently powerful incentive on its own; otherwise there wouldn't be effectiveness problems to begin with. If it were sufficient, EAs would already be incentivized to pursue utilitarian goals effectively.