How do you construct a utility function from a psychologically realistic, detailed model of a human’s decision process?
It may be an obvious thing to say—but there is an existing research area that deals with this problem: revealed preference theory.
I would say obtaining some kind of utility function from observations is rather trivial—the key problem is compressing the results. However, general-purpose compression is part of the whole project of building machine intelligence anyway. If we can’t compress, we get nowhere, and if we can compress, then we can (probably) compress utility functions.
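To make the "obtaining some kind of utility function from observations is rather trivial" step concrete, here is a minimal sketch (illustrative code of my own, with hypothetical names, not any standard revealed-preference library): each observation records which option was chosen over which alternatives, the choices induce a "revealed preferred to" graph, and any topological ranking of that graph is a utility function consistent with the data. If the observed choices are cyclic, no such function exists.

```python
def revealed_utility(observations):
    """Assign integer utilities consistent with revealed preferences.

    observations: list of (chosen, alternatives) pairs, where `chosen`
    was picked over every item in `alternatives`.
    Returns a dict mapping item -> utility, or raises ValueError if the
    revealed preferences are cyclic (so no utility function exists).
    """
    # Build the "revealed preferred to" graph: edge chosen -> alternative.
    prefers = {}
    items = set()
    for chosen, alternatives in observations:
        items.add(chosen)
        for alt in alternatives:
            items.add(alt)
            prefers.setdefault(chosen, set()).add(alt)

    # Utility of x = length of the longest preference chain below x,
    # computed by memoized depth-first search (which also detects cycles).
    memo = {}
    visiting = set()

    def utility(x):
        if x in memo:
            return memo[x]
        if x in visiting:
            raise ValueError("cyclic preferences: no utility function exists")
        visiting.add(x)
        below = prefers.get(x, set())
        memo[x] = 1 + max((utility(y) for y in below), default=0)
        visiting.discard(x)
        return memo[x]

    return {x: utility(x) for x in items}
```

Note that this only recovers an ordinal ranking, and an uncompressed one at that: the output is essentially a lookup table over the observed items, which illustrates why the compression step, not the extraction step, carries the real difficulty.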
Right. Also, choice modeling in economics and preference extraction in AI / decision support systems.