Unless you’re arguing that EA consists primarily of people who participate entirely for the social feedback and not at all out of a desire to actually implement utilitarianism. That may be true; if so, it’s a separate problem from incentives.
I think that the EA system will be both more robust and more effective if it is designed with the assumption that the people in it do not share the system’s utility function, but that win-win trades are possible between the system and the people inside it.