Indeed you can convert any model of an agent into a utility-based model by an I/O-based “wrapper” of it—as described here.
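Roughly, the construction I have in mind is this (a minimal sketch only; the function and parameter names are my own, not taken from the linked post):

```python
def wrap_as_utility_based(agent, actions):
    """Hypothetical I/O wrapper: given any agent model (a function from
    observations to actions), produce a model that picks its action by
    maximising a utility function. Names here are illustrative only."""

    def utility(observation, action):
        # Utility 1 for the action the wrapped agent would take, 0 otherwise.
        return 1 if action == agent(observation) else 0

    def utility_based_agent(observation):
        # Choose the action with the highest utility.
        return max(actions, key=lambda a: utility(observation, a))

    return utility_based_agent
```

By construction the wrapped model behaves identically to the original, which is the sense in which I say any model of an agent can be converted into a utility-based one.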
You keep repeating this Texas Sharpshooter Utility Function fallacy (earlier appearances in the link you gave, and here and here) of observing what the agent does and retrospectively labelling that with utility 1 and everything else with utility 0. And as often as you do that, I will point out that it is a fallacy. Something that can only be computed after the action is known cannot be used before the fact to choose the action.
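To make the circularity explicit (a sketch only; the name is mine): the "utility" here takes the agent's already-made choice as an input, so it cannot be evaluated, let alone maximised, before that choice exists.

```python
def post_hoc_utility(choice_already_made, action):
    # This can only be evaluated once the agent's choice is known,
    # so it cannot have played any part in making that choice.
    return 1 if action == choice_already_made else 0
```

Its maximum falls on the agent's actual choice by definition, whatever that choice was and however it was arrived at.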
I was talking about wrapping a model of a human—thus converting a non-utility-based model into a utility-based one. That operation is, of course, not circular. If you think the argument is circular, you haven’t grasped the intended purpose of it.
It doesn’t give you a utility-based model. A model is a structure whose parts correspond to parts of the thing modelled, and which interact in the same way as in the thing modelled. This post-hoc utility function does not correspond to anything.
What next? Label with 1 everything that happens and 0 everything that doesn't, and call that a utility-based model of the universe?
Here, I made it pretty clear from the beginning that I was starting with an existing model—and then modifying it. A model with a few bits strapped onto it is still a model.
If I stick a hamburger on my car, the car is still a car—but the hamburger plays no part in what makes it a car.
AFAICS, I never made the corresponding claim—that the utility function was part of what made the model a model.
How else can I understand your words “utility-based models”? This is no more a utility-based model than a car with a hamburger stuck on it is a hamburger-based car.
Well, I would say “utilitarian”, but that word seems to be taken. I mean that the model calculates utilities associated with its possible actions—and then picks the action with the highest utility.
But that is exactly what this wrapping in a post-hoc utility function doesn’t do. The model first picks an action in whatever way it does, then labels that with utility 1.
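The difference in ordering, sketched with illustrative names (assuming the same picture of an agent as a function from observations to actions):

```python
# A genuine utility-based model: utilities come first, the choice follows from them.
def genuine_utility_maximiser(utility, observation, actions):
    return max(actions, key=lambda a: utility(observation, a))

# The post-hoc wrapper: the choice comes first, the "utilities" are read off it afterwards.
def post_hoc_labelling(agent, observation, actions):
    chosen = agent(observation)
    labels = {a: (1 if a == chosen else 0) for a in actions}
    return chosen, labels  # the labels played no part in producing the choice
```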