Any set of preferences can be represented as a sufficiently complex utility function.
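For concreteness, a minimal sketch of that representation claim, assuming a finite outcome set and a complete, transitive preference ordering; the function name and the lunch example here are purely illustrative, not anything from the thread:

```python
# Minimal sketch (illustration only): a complete, transitive preference
# ordering over a finite set of outcomes can be turned into a utility
# function simply by numbering the outcomes in order of preference.

def utility_from_preferences(outcomes_ranked_worst_to_best):
    """Assign each outcome its rank, so that 'x is preferred to y'
    holds exactly when utility(x) > utility(y)."""
    return {outcome: rank
            for rank, outcome in enumerate(outcomes_ranked_worst_to_best)}

# Example: an arbitrary set of preferences over lunch options, worst to best.
prefs = ["soggy sandwich", "cold soup", "pizza", "sushi"]
u = utility_from_preferences(prefs)
assert u["sushi"] > u["pizza"] > u["cold soup"] > u["soggy sandwich"]
```

Note that the resulting function carries no more structure than the preference list itself, which is roughly the worry raised in the reply below.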
Sure, but the whole point of having the concept of a utility function is that utility functions are supposed to be simple. When you have a set of preferences that isn’t simple, there’s no point in thinking of it as a utility function. You’re better off just thinking of it as a set of preferences—or, in the context of AGI, a toolkit, or a library, or a command language, or a partial order on heuristics, or whatever else is the most useful way to think about the things this entity does.
Re: “When you have a set of preferences that isn’t simple, there’s no point in thinking of it as a utility function.”
Sure there is—say you want to compare the utility functions of two agents. Or compare the parts of the agents which are independent of the utility function. A general model that covers all goal-directed agents is very useful for such things.
(Upvoted but) I would say utility functions are supposed to be coherent, albeit complex. Is that compatible with what you are saying?
Er, maybe? I would say a utility function is supposed to be simple, but perhaps what I mean by simple is compatible with what you mean by coherent, if we agree that something like ‘morality in general’ or ‘what we want in general’ is not simple/coherent.