If all the agents in the situation acted according to utilitarianism, everyone would be better off. To the extent that everyone acting according to common sense morality predictably fails to bring about the best of all possible worlds in this situation, and to the extent that one cares about this, that constitutes an argument against common sense morality.
Of course, if decision theory or game theory could make those agents cooperate successfully in all logically possible situations (so that they no longer do predictably worse than agents following other moralities), the objection would disappear. I see no reason to assume this, though.
That’s a game theory/decision theory problem, not a problem with the utility function.