The actions of any computable agent—including humans—can be expressed using a utility function.
This is a highly questionable statement concerning humans, and the paper linked from that page doesn’t appear to prove it.
Edit: ah, this includes “functions” that anyone else would call a “stupidly complicated state machine” and which may not actually be feasible to calculate.
The term “function”—as used on the page—is a technical term with a clearly-established meaning.
Yes indeed, and the only way to fit that function to the human state machine is to include a “t” term, over the life of the human in question, which is pretty much infeasible to calculate unless you invoke “and then a miracle occurs”.
Utility-based models are no more “infeasible to calculate” than any other model. Indeed you can convert any model of an agent into a utility-based model by an I/O-based “wrapper” of it—as described here. The idea that utility-based models of humans are more computationally intractable than other models is just wrong.
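A minimal sketch of the wrapper construction being claimed here, assuming a finite set of possible actions; the names are illustrative rather than taken from the linked description:

```python
# Sketch: wrap an arbitrary observation -> action model so that it
# presents as a utility maximizer. Illustrative, not a real API.

def wrap_as_utility_maximizer(base_model, possible_actions):
    def utility(observation, action):
        # Utility 1 for whatever the wrapped model would do,
        # utility 0 for every other action.
        return 1 if action == base_model(observation) else 0

    def agent(observation):
        # Pick the action with the highest utility; by construction
        # this reproduces the wrapped model's behaviour exactly.
        return max(possible_actions, key=lambda a: utility(observation, a))

    return agent, utility

# Usage: the base model here is plainly not utility-based, yet the
# wrapped version makes identical choices.
base = lambda obs: "left" if obs % 2 == 0 else "right"
agent, u = wrap_as_utility_maximizer(base, ["left", "right"])
assert agent(4) == base(4)
```

Note that the utility function above is defined in terms of the wrapped model’s own choice, which is exactly the feature the next reply objects to.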
Indeed you can convert any model of an agent into a utility-based model by an I/O-based “wrapper” of it—as described here.
You keep repeating this Texas Sharpshooter Utility Function fallacy (earlier appearances in the link you gave, and here and here) of observing what the agent does, and retrospectively labelling that with utility 1 and everything else with utility 0. And as often as you do that, I will point out it’s a fallacy. Something that can only be computed after the action is known cannot be used before the fact to choose the action.
I was talking about wrapping a model of a human—thus converting a non-utility-based model into a utility-based one. That operation is, of course, not circular. If you think the argument is circular, you haven’t grasped the intended purpose of it.
It doesn’t give you a utility-based model. A model is a structure whose parts correspond to parts of the thing modelled, and which interact in the same way as in the thing modelled. This post-hoc utility function does not correspond to anything.
What next? Label with 1 everything that happens and 0 everything that doesn’t, and call that a utility-based model of the universe?
Here, I made it pretty clear from the beginning that I was starting with an existing model—and then modifying it. A model with a few bits strapped onto it is still a model.
If I stick a hamburger on my car, the car is still a car—but the hamburger plays no part in what makes it a car.

AFAICS, I never made the corresponding claim—that the utility function was part of what made the model a model.

How else can I understand your words “utility-based models”? This is no more a utility-based model than a hamburger on a car is a hamburger-based car.
Well, I would say “utilitarian”, but that word seems to be taken. I mean that the model calculates utilities associated with its possible actions—and then picks the action with the highest utility.
But that is exactly what this wrapping in a post-hoc utility function doesn’t do. The model first picks an action in whatever way it does, then labels that with utility 1.
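The two positions can be put side by side in code; a sketch with illustrative names, not a reconstruction of anyone’s actual proposal:

```python
# A genuinely utility-based choice: the utilities exist first, defined
# independently of the choice, and the action is derived from them.
def genuine_utility_agent(utility, possible_actions, observation):
    return max(possible_actions, key=lambda a: utility(observation, a))

# The post-hoc labelling objected to above: the action is chosen first,
# by whatever means the model uses, and only then labelled with utility 1
# (and everything else with 0), so the labels cannot have been used to
# make the choice.
def post_hoc_labels(base_model, possible_actions, observation):
    chosen = base_model(observation)
    return {a: 1 if a == chosen else 0 for a in possible_actions}
```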