But there is no principled way to derive a utility function from something that is not an expected utility maximizer!
You can model any agent as an expected utility maximizer—with a few caveats about things such as uncomputability and infinitely complex functions.
You really can reverse-engineer their utility functions too—by treating them as Input-Transform-Output black boxes—and asking which expected utility maximizer would produce the observed transformation.
A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
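The rationalization trick above can be made concrete. Here is a minimal sketch, assuming a deterministic agent with a finite action set; all names (`make_utility`, `maximizer`, the example agent) are illustrative, not from the original:

```python
# Sketch: any deterministic agent over a finite action set can be
# rationalized as a utility maximizer by rating its own choices highest.

def make_utility(agent):
    """Reverse-engineer a utility function from a black-box agent."""
    def utility(observation, action):
        # Assign utility 1 to whatever the black box actually does, 0 otherwise.
        return 1.0 if action == agent(observation) else 0.0
    return utility

def maximizer(utility, actions):
    """An agent that picks the action maximizing the given utility."""
    def act(observation):
        return max(actions, key=lambda a: utility(observation, a))
    return act

# Example black box: always picks the opposite of its input.
actions = ["left", "right"]
agent = lambda obs: "right" if obs == "left" else "left"

reconstructed = maximizer(make_utility(agent), actions)
assert all(reconstructed(obs) == agent(obs) for obs in actions)
```

The derived utility function is trivial rather than illuminating—which is the point of the caveats: the construction always succeeds, but it need not be compact or predictive.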