If you define your utility function in a sufficiently convoluted manner, then everything is a utility maximiser.
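To spell that out, here is a minimal sketch of the standard indicator-utility construction (my illustration, not something from this thread; the deterministic-environment assumption is mine): take any agent, let $h^{*}(\pi)$ be the universe-history its policy $\pi$ actually produces, and give it utility 1 on that history and 0 elsewhere.

```latex
% Indicator utility over universe-histories (hypothetical illustration):
% any fixed policy \pi producing history h^*(\pi) maximises it by construction,
% since u is bounded above by 1 and \pi attains that bound.
u(h) = \mathbb{1}\!\left[h = h^{*}(\pi)\right],
\qquad
\pi \in \arg\max_{\pi'} \, u\!\left(h^{*}(\pi')\right).
```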
Less contrived: I was thinking of things like Wentworth’s subagents, which identifies decision-making with Pareto optimality over a set of utility functions (sketched below).
I think subagents comes very close to being an ideal model of agency, and could probably be adapted into a complete one.
I don’t want to include subagents in my critique at this point.
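For concreteness, here is the Pareto criterion I have in mind, written out as a sketch (my paraphrase of the standard multi-objective definition, not necessarily Wentworth’s exact formulation): given utility functions $u_1, \dots, u_n$, an option is Pareto optimal when no alternative weakly improves every $u_i$ and strictly improves at least one.

```latex
% Pareto optimality over a set of utility functions u_1, ..., u_n
% (standard multi-objective definition, used here as an illustration):
x \text{ is Pareto optimal} \;\iff\;
\neg\exists\, y :
\bigl(\forall i,\ u_i(y) \ge u_i(x)\bigr)
\wedge
\bigl(\exists j,\ u_j(y) > u_j(x)\bigr).
```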
I think what you want might be “a single fixed utility function over states” or something similar. That phrasing captures that you’re excluding the following from critique:
- Agents with multiple internal “utility functions” (subagents)
- Agents whose “utility function” is malleably defined
- Agents with trivial utility functions, such as the indicator utility over universe-histories sketched above