Utility maximization can model any goal-oriented creature, within reason. Familiar or alien, it makes not the slightest bit of difference to the theory.
Of course it can, just like you can model any computation with a Turing machine, or on top of the game of Life. And modeling humans (or most any living entity) as a utility maximizer is on a par with writing a spreadsheet program to run on a Turing machine. An interesting, perhaps even fun or educational, exercise, but mostly futile.
I mean, sure, you could say that utility equals “minimum global error of all control systems”, but it’s rather ludicrous to expect this calculation to predict their actual behavior, since most of their “interests” operate independently. Why go to all the trouble to write a complex utility function when an error function is so much simpler and closer to the territory?
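A toy sketch of what I mean (purely illustrative; the variables, targets, and gain are made up): two independent controllers each reduce their own error, and a "global utility" can be bolted on afterwards as the negative sum of errors, but nothing in the loop ever computes or maximizes it.

```python
# Hypothetical sketch: two independent proportional controllers, each
# nudging its own variable toward its own reference. The "utility" below
# is a post-hoc label, not something the agent ever calculates.

def step(state, targets, gain=0.3):
    """Each control system reduces only its own error."""
    return {k: v + gain * (targets[k] - v) for k, v in state.items()}

def global_utility(state, targets):
    """Post-hoc 'utility': negative sum of all control errors."""
    return -sum(abs(targets[k] - v) for k, v in state.items())

state = {"temperature": 30.0, "blood_sugar": 2.0}    # made-up quantities
targets = {"temperature": 21.0, "blood_sugar": 5.0}

for t in range(10):
    state = step(state, targets)
    print(t, {k: round(v, 2) for k, v in state.items()},
          "U =", round(global_utility(state, targets), 3))
```

The behavior falls out of each error signal separately; the utility number is just bookkeeping you could attach afterwards.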
I think you are getting my position. Just as a universal computer can model any other type of machine, so a utilitarian agent can model any other type of agent. The two concepts are closely analogous.
But your choice of platform is not without efficiency and complexity costs, since maximizers inherently "blow up" more than satisficers do.
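A toy illustration of that cost difference (the option set and threshold are arbitrary, not anyone's actual proposal): a maximizer has to evaluate every option before it can commit, while a satisficer can stop at the first one that clears its bar.

```python
# Illustrative sketch only: exhaustive maximization vs. early-stopping
# satisficing over a random set of options.

import random

options = [random.random() for _ in range(10_000)]

def maximize(opts):
    """Scans every option; cost grows with the size of the option set."""
    best, evaluated = None, 0
    for o in opts:
        evaluated += 1
        if best is None or o > best:
            best = o
    return best, evaluated          # always evaluates all 10,000

def satisfice(opts, good_enough=0.9):
    """Stops at the first option above the threshold."""
    for evaluated, o in enumerate(opts, 1):
        if o >= good_enough:
            return o, evaluated     # typically stops after ~10 options
    return max(opts), len(opts)     # fall back if nothing clears the bar

print(maximize(options))   # e.g. (0.9999..., 10000)
print(satisfice(options))  # e.g. (0.93..., 7)
```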