Okay, I’m with you so far. But what I was actually asking for was an example of a scenario where this wrapping gives us some benefit that we wouldn’t have otherwise.
I don’t think utility functions are a very good tool for clarifying one’s goals to oneself. Things like PJ Eby’s writings have given me rather powerful insights into my goals, content which would be pointless to try to convert into the utility-function framework.
My original comment on that topic was:
Utility-based models are most useful when applying general theorems, or when comparing across architectures: for example, when comparing the utility function of a human with that of a machine intelligence, or when considering the “robustness” of the utility function to environmental perturbations.
Utility-based models are a general framework that can represent any computable intelligent agent. That is the benefit you don’t otherwise have: such models let you compare and contrast different agents, and different types of agent.
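To make the “wrapping” concrete, here is a minimal sketch of the standard construction, in Python; all of the names (`Agent`, `wrap_as_utility`, the toy `stubborn_agent`) are illustrative assumptions rather than anything from this exchange. The idea: given any computable policy, define a utility function that assigns 1 to whatever the policy would do and 0 to everything else; under that function, the policy is an exact utility maximiser.

```python
from typing import Callable, Sequence

Action = str
History = Sequence[str]
Agent = Callable[[History], Action]           # any computable policy
Utility = Callable[[History, Action], float]  # score for acting now

def wrap_as_utility(agent: Agent) -> Utility:
    """A trivial utility function under which `agent` is optimal:
    the agent's own choice scores 1.0, every alternative 0.0."""
    def utility(history: History, action: Action) -> float:
        return 1.0 if action == agent(history) else 0.0
    return utility

def maximise(utility: Utility, history: History,
             actions: Sequence[Action]) -> Action:
    """A utility maximiser: pick the highest-utility action."""
    return max(actions, key=lambda a: utility(history, a))

# A hard-coded, decidedly non-economic agent...
def stubborn_agent(history: History) -> Action:
    return "left" if len(history) % 2 == 0 else "right"

# ...which the wrapper nonetheless describes as a utility maximiser.
u = wrap_as_utility(stubborn_agent)
for h in ([], ["obs"]):
    assert maximise(u, h, ["left", "right"]) == stubborn_agent(h)
```

Because the construction works for any policy at all, it illustrates the generality claimed here: every computable agent fits inside the framework, even when the resulting utility function is not by itself very illuminating.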
Personally, I found thinking of myself as a utility maximiser enlightening. However, YMMV.