Although this “just so” utility function is valid, it doesn’t peek inside the skull—it’s not useful as a model of humans.
It’s a model of any computable agent. The point of a utility-based framework capable of modelling any agent is that it allows comparisons between agents of any type. Generality is sometimes a virtue. You can’t easily compare the values of different creatures if you can’t even model those values in the same framework.
The reason I wouldn’t call all agents “utility maximizers” is that I want utility maximizers to have a certain causal structure: if you change the probability balance of two options and leave everything else equal, you want the agent’s choice to shift accordingly.
Well, you can define your terms however you like—if you explain what you are doing. “Utility” and “maximizer” are ordinary English words, though.
It seems impossible, though, to act as though you don’t have a utility function (as was originally claimed to be possible). “Utility function” is a perfectly general concept which can be used to model any agent. There may be slightly more concise methods of modelling some agents—that seems to be roughly the concept that you are looking for.
So: it would be possible to say that an agent acts in a manner such that utility maximisation is not the most parsimonious explanation of its behaviour.
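As a minimal sketch of the “just so” construction being discussed (Python, with hypothetical names; an illustration, not anyone’s actual proposal): take any computable policy and define a utility function that scores the action the policy actually takes as 1 and every alternative as 0. The policy then trivially maximizes that utility, which is why the framework can model any agent, and also why the resulting model is no more parsimonious, and no more predictive, than the policy it was built from.

```python
# "Just so" utility function: wrap an arbitrary computable policy so that
# whatever it actually does always scores 1 and every alternative scores 0.
# The wrapped agent is then, trivially, a utility maximizer.

def make_just_so_utility(policy):
    """Build a utility function from an arbitrary policy (hypothetical helper)."""
    def utility(history, action):
        return 1.0 if action == policy(history) else 0.0
    return utility

def act_by_maximizing(utility, history, actions):
    """Pick whichever available action the utility function rates highest."""
    return max(actions, key=lambda a: utility(history, a))

# Example: a hard-coded policy with no obvious "values" behind it.
def quirky_policy(history):
    return "left" if len(history) % 2 == 0 else "right"

u = make_just_so_utility(quirky_policy)
history = ["left", "right", "left"]
# Maximizing the "just so" utility reproduces the policy exactly.
assert act_by_maximizing(u, history, ["left", "right"]) == quirky_policy(history)
```

Because the utility function is defined in terms of the policy itself, it predicts nothing the policy does not already predict; that is the sense in which calling such an agent a utility maximizer can be valid yet not the most parsimonious description.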
Although this “just so” utility function is valid, it doesn’t peek inside the skull—it’s not useful as a model of humans.
It’s a model of any computable agent.
Sorry, replace “model” with “emulation you can use to predict the emulated thing.”
There may be slightly more concise methods of modelling some agents—that seems to be roughly the concept that you are looking for.
I’m talking about looking inside someone’s head and finding the right algorithms running. Rather than “what utility function fits their actions,” I think the point here is “what’s in their skull?”
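To illustrate that distinction with a hedged sketch (hypothetical toy agents, not a claim about real brains): two agents can emit identical actions from very different internal algorithms, one explicitly scoring options and one following a hard-coded rule, so a utility function fitted to the action stream alone cannot say which algorithm is actually running.

```python
# Two toy agents with identical behaviour but different internals:
# one computes an explicit argmax over scores, the other follows a
# fixed threshold rule with no scoring at all. Fitting a utility
# function to their actions cannot distinguish the two mechanisms.

def maximizer_agent(temperature):
    """Explicitly scores the options and picks the best one."""
    scores = {"ice_cream": temperature, "soup": 15.5}
    return max(scores, key=scores.get)

def rule_agent(temperature):
    """No scores anywhere: just a hard-coded rule."""
    return "ice_cream" if temperature >= 16 else "soup"

# Over these observations the two agents act identically, even though
# only one of them contains anything resembling a utility function.
for t in range(0, 31):
    assert maximizer_agent(t) == rule_agent(t)
```

Behavioural fit answers “what utility function is consistent with their actions”; it does not answer “what’s in their skull?”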
I’m talking about looking inside someone’s head and finding the right algorithms running. Rather than “what utility function fits their actions,” I think the point here is “what’s in their skull?”
The point made by the O.P. was:
Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don’t act like they have utility functions)
It discussed actions—not brain states. My comments were made in that context.
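For concreteness, here is one hedged illustration (hypothetical options, and simplified to riskless pairwise choices rather than the full lottery setting the VNM axioms actually cover) of what an action-level violation would look like: a preference cycle. If the observed choices are A over B, B over C, and C over A, no real-valued utility function reproduces them, since that would require u(A) > u(B) > u(C) > u(A).

```python
# A preference cycle (A beats B, B beats C, C beats A) cannot be fitted
# by any utility function, since that would need u(A) > u(B) > u(C) > u(A).
# Brute-force check over every strict ranking of three options.
from itertools import permutations

observed_choices = {("A", "B"): "A", ("B", "C"): "B", ("A", "C"): "C"}

def consistent(utilities):
    """Does picking the higher-utility option reproduce every observed choice?"""
    return all(max(pair, key=utilities.get) == winner
               for pair, winner in observed_choices.items())

# No assignment of distinct utility values fits the cyclic choices.
assert not any(consistent(dict(zip(("A", "B", "C"), ranking)))
               for ranking in permutations((1, 2, 3)))
```

This is the sort of thing “don’t act like they have utility functions” could mean at the level of actions, which is the level the O.P. was discussing.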