Having two utility functions is like having no utility function at all, because two functions can rank the same options differently, so you no longer have a single ordering of preferences.
The only kind of model that needs a global utility function is an optimization process. Obviously, after considering each alternative, there needs to be a way to decide which one to choose… assuming that we do things like considering alternatives and choosing one of them (using an ordering that is represented by the one utility function).
For example, evolution has a global utility function (inclusive genetic fitness). Of course, it may be described in parts (endurance, running speed, attractiveness to mates etc.), but in the end it gets summed up into one number (measured by whether the genes get replicated or not).
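A minimal sketch of the point above, with made-up numbers and an assumed weighted-sum aggregation (neither is claimed by the original argument): two component utilities can rank the same options in opposite directions, so neither alone yields a decision, while collapsing them into one scalar restores a total order an optimizer can act on.

```python
options = ["A", "B"]

# Two component utilities (illustrative numbers only).
speed = {"A": 3.0, "B": 5.0}
endurance = {"A": 5.0, "B": 2.0}

# Under `speed`, B beats A; under `endurance`, A beats B.
# Two functions, two conflicting rankings -- no ordering to choose by.
assert max(options, key=speed.get) == "B"
assert max(options, key=endurance.get) == "A"

# One global utility (here a hypothetical weighted sum) gives a single
# scalar per option, hence a total order, hence a choice.
def global_utility(option, w_speed=1.0, w_endurance=1.0):
    return w_speed * speed[option] + w_endurance * endurance[option]

best = max(options, key=global_utility)
print(best)
```

The particular aggregation (a weighted sum) is just one way to collapse the parts; the argument only needs that *some* single scalar summary exists.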
That said, there are things (such as Kaj’s centrifugal governor or human beings) that aren’t best modelled as optimization processes. Reflexes, for example, don’t optimize, they just work (and a significant proportion of our brains does likewise). The fact that our conscious thinking is a (slightly buggy) implementation of an optimization process (with a more or less consistent utility function) might suggest that whole humans can also be modelled well that way...