Another way of saying this is that human beings are not expected utility maximizers, not as individuals and certainly not as societies.
They are not perfect expected utility maximizers. However, no real agent is a perfect expected utility maximizer, and humans approximate the ideal at least as well as other organisms do. Fitness maximization is the central explanatory principle in biology, and the underlying idea is the same. The economic framework associated with utilitarianism is general, broadly applicable, and deserves considerable respect.
You can model any agent as an expected utility maximizer, with a few caveats about things such as uncomputability and infinitely complex functions.
You really can reverse-engineer an agent's utility function too: treat the agent as an Input-Transform-Output black box and ask what expected utility maximizer would produce the observed transformation.
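To make the black-box framing concrete, here is a minimal sketch in Python. The scenario, state names, and helper functions are hypothetical illustrations, not drawn from any particular source: given only the observed input-to-output mapping, we can write down a utility function that scores the observed choice above every alternative, and a maximizer equipped with that function reproduces the black box exactly.

```python
# Hypothetical example: reverse-engineering a utility function from an
# agent viewed as an Input-Transform-Output black box.

# Observed black-box behaviour: which action the agent picks in each state.
observed_policy = {
    "rainy": "take_umbrella",
    "sunny": "wear_hat",
}

actions = ["take_umbrella", "wear_hat", "stay_home"]

def reverse_engineered_utility(state: str, action: str) -> float:
    """Score the observed choice above every alternative."""
    return 1.0 if observed_policy.get(state) == action else 0.0

def maximize_expected_utility(state: str) -> str:
    """An expected utility maximizer equipped with the reconstructed function."""
    return max(actions, key=lambda a: reverse_engineered_utility(state, a))

# The maximizer reproduces the observed transformation exactly.
for state in observed_policy:
    assert maximize_expected_utility(state) == observed_policy[state]
```

This is the most trivial construction available; richer reconstructions would weigh outcomes by probability under uncertainty, but the point stands that some utility function always fits the observed transformation.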
A utility function is like a program in a Turing-complete language: if a behaviour can be computed at all, then some utility function gives rise to it.
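As a follow-on sketch of the program analogy (again a hypothetical toy, with made-up state and action names), nothing stops a utility function from performing arbitrary computation before scoring an action; the maximizer's behaviour is then whatever that computation dictates.

```python
# Hypothetical toy: a utility function that runs a non-trivial computation
# (counting Collatz steps) before deciding which action it prefers.

def collatz_steps(n: int) -> int:
    """Count how many Collatz iterations it takes to reach 1 from n."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def utility(state: int, action: str) -> float:
    """Prefer 'press_left' on states with an even Collatz step count."""
    preferred = "press_left" if collatz_steps(state) % 2 == 0 else "press_right"
    return 1.0 if action == preferred else 0.0

def act(state: int, actions=("press_left", "press_right")) -> str:
    """An expected utility maximizer over the two available actions."""
    return max(actions, key=lambda a: utility(state, a))

print(act(6))  # 6 takes 8 Collatz steps (even), so this prints "press_left"
print(act(3))  # 3 takes 7 Collatz steps (odd), so this prints "press_right"
```

The utility "program" can be as complicated as any program we can write, which is the sense in which the analogy with a Turing-complete language seems apt.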