The outputs from the utility-based model would be the same as those from the model it was derived from: a bunch of actuator/motor outputs. The difference would be the utility-maximizing action selection “under the hood”.
Utility-based models are most useful when applying general theorems, or when comparing across architectures: for example, when comparing the utility function of a human with that of a machine intelligence, or when considering the “robustness” of the utility function to environmental perturbations.
If you don’t need a general-purpose model, then sure—use a specific one, if it suits your purposes.
Please don’t “bash” utility-based models, though. They are great! Bashers simply don’t appreciate their virtues. There are a lot of utility bashers out there. They make a lot of noise—and AFAICS, it is all pointless and vacuous hot air.
My hypothesis is that they think that their brain being a mechanistic expected-utility maximiser would somehow diminish their awe and majesty. It’s the same thing that makes people believe in souls, just one step removed.
I don’t think I understand what you’re trying to describe here. Could you give an example of a scenario where you usefully transform a model into a utility-based one the way you describe?
I’m not bashing utility-based models; I’m quite aware of their good sides. I’m just saying they shouldn’t be used universally and uncritically. That’s no more bashing than it is bashing to say that integrals aren’t the most natural tool for doing matrix multiplication.
Could you give an example of a scenario where you usefully transform a model into a utility-based one the way you describe?
Call the original model M.
“Wrap” the model M by preprocessing its sensory inputs and post-processing its motor outputs.
To post-process M’s motor outputs: enumerate its possible actions at each moment, assign utility 1 to the action that M actually output, and assign utility 0 to every other action.
Then output the action with the highest utility.
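In code, the wrapping might look something like this. This is only a rough sketch; the UtilityWrapper class and the toy model are illustrative assumptions, not anything specified above:

```python
# A minimal sketch of the wrapping construction described above.
# Any model M that maps observations to actions is turned into an agent
# that literally picks the action maximizing a (trivial) utility function.

class UtilityWrapper:
    def __init__(self, model, possible_actions):
        self.model = model                      # the original model M
        self.possible_actions = possible_actions

    def utility(self, action, chosen):
        # Utility 1 for the action M actually output, 0 for every other action.
        return 1 if action == chosen else 0

    def act(self, observation):
        # Input preprocessing is the identity here; pass the observation to M.
        chosen = self.model(observation)
        # Enumerate the possible actions and output the utility-maximizing one.
        return max(self.possible_actions,
                   key=lambda a: self.utility(a, chosen))

# Example: a toy M that always moves "left"; the wrapped agent outputs the same action.
wrapped = UtilityWrapper(model=lambda obs: "left",
                         possible_actions=["left", "right", "stay"])
print(wrapped.act(observation=None))  # -> "left"
```

By construction, the wrapped agent behaves identically to M; the only change is that its action is now formally selected by maximizing a utility function.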
I’m not bashing utility-based models; I’m quite aware of their good sides.
Check with your subject line. There are plenty of good reasons for applying utility functions to humans. A rather obvious one is figuring out your own utility function—in order to clarify your goals to yourself.
Okay, I’m with you so far. But what I was actually asking for was an example of a scenario where this wrapping gives us some benefit that we wouldn’t have otherwise.
I don’t think utility functions are a very good tool to use when seeking to clarify your goals to yourself. Things like PJ Eby’s writings have given me rather powerful insights into my goals, content which it would be pointless to try to convert into the utility-function framework.
But what I was actually asking for was an example of a scenario where this wrapping gives us some benefit that we wouldn’t have otherwise.
My original comment on that topic was:
Utility-based models are most useful when applying general theorems, or when comparing across architectures: for example, when comparing the utility function of a human with that of a machine intelligence, or when considering the “robustness” of the utility function to environmental perturbations.
Utility-based models are a general framework that can represent any computable intelligent agent. That is the benefit that you don’t otherwise have. Utility-based models let you compare and contrast different agents—and different types of agent.
Incidentally, I do not like writing “utility-based model” over and over again. These models should be called “utilitarian”. We should hijack that term away from the ridiculous and useless definition used by the ethicists. They don’t have the rights to this term.
As for clarifying one’s goals: personally, I found thinking of myself as a utility maximiser enlightening. However, YMMV.