The way I know to assign a utility function to an arbitrary agent is to say “I assign utility 1 to whatever the agent does, and utility less than 1 to everything else.” Although this “just so” utility function is valid, it doesn’t peek inside the skull—it’s not useful as a model of humans.
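For concreteness, here is a minimal Python sketch of that “just so” construction (the situations and actions are invented for illustration): whatever the agent was observed to do gets utility 1, every alternative gets less, so any behaviour at all counts as “maximizing utility” without telling us anything about the mechanism behind it.

```python
def just_so_utility(observed):
    """Rationalize any behaviour after the fact: in each situation, the action
    the agent actually took gets utility 1, every alternative gets utility 0."""
    def utility(situation, action):
        return 1.0 if observed.get(situation) == action else 0.0
    return utility

# A hypothetical agent that always defects, whatever its partner did:
observed = {"partner cooperated": "defect", "partner defected": "defect"}
u = just_so_utility(observed)
print(u("partner cooperated", "defect"))     # 1.0
print(u("partner cooperated", "cooperate"))  # 0.0
# The fit is perfect by construction, so it predicts nothing we didn't put in.
```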
What I meant by “how humans make decisions” is a causal model of human decision-making. The reason I wouldn’t call all agents “utility maximizers” is that I want utility maximizers to have a certain causal structure: if you change the relative probabilities of two options and leave everything else equal, the agent’s choices should shift accordingly. As gwern recently reminded me by linking to that article on Causality, this sort of structure can be tested in experiments.
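A rough sketch of the kind of test I mean (the payoffs and probabilities are made up): an expected-utility maximizer’s choice between a sure payoff and a gamble should flip at a predictable probability threshold as you vary the odds and hold everything else fixed, and that is something you can check against behaviour, which the just-so construction above never predicts.

```python
def expected_utility(lottery, utility):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * utility(x) for p, x in lottery)

def choose(option_a, option_b, utility):
    """An expected-utility maximizer picks whichever lottery scores higher."""
    if expected_utility(option_a, utility) >= expected_utility(option_b, utility):
        return "sure thing"
    return "gamble"

sure_thing = [(1.0, 50)]                 # $50 for certain
for p in (0.3, 0.4, 0.5, 0.6, 0.7):      # vary only the odds of the gamble
    gamble = [(p, 100), (1 - p, 0)]      # $100 with probability p, else $0
    print(p, choose(sure_thing, gamble, utility=lambda x: x))

# With linear utility the agent switches to the gamble once p exceeds 0.5; a
# risk-averse utility such as lambda x: x ** 0.5 pushes that threshold higher.
# An agent whose choices don't respond to p in some such systematic way isn't
# behaving like an expected-utility maximizer, whatever labels we attach later.
```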
Although this “just so” utility function is valid, it doesn’t peek inside the skull—it’s not useful as a model of humans.
It’s a model of any computable agent. The point of a utility-based framework capable of modelling any agent is that it allows comparisons between agents of any type. Generality is sometimes a virtue. You can’t easily compare the values of different creatures if you can’t even model those values in the same framework.
The reason I wouldn’t call all agents “utility maximizers” is that I want utility maximizers to have a certain causal structure: if you change the relative probabilities of two options and leave everything else equal, the agent’s choices should shift accordingly.
Well, you can define your terms however you like—if you explain what you are doing. “Utility” and “maximizer” are ordinary English words, though.
It seems to be impossible to act as though you don’t have a utility function (as was originally claimed), though. “Utility function” is a perfectly general concept which can be used to model any agent. There may be slightly more concise methods of modelling some agents—that seems to be roughly the concept that you are looking for.
So: it would be possible to say that an agent acts in a manner such that utility maximisation is not the most parsimonious explanation of its behaviour.
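As a toy illustration of that parsimony point (everything here is invented): a one-line reflex rule and its just-so utility table reproduce exactly the same behaviour, but the table grows with every situation you observe, so “utility maximizer” is the less concise of the two descriptions.

```python
# Toy comparison of two descriptions of the same behaviour (illustrative only).

def reflex_agent(situation):
    """The concise description: a one-line rule, 'defect no matter what'."""
    return "defect"

# The utility-maximizer rendering of the same agent: a just-so table with one
# entry per situation it has ever been observed in.
observed_situations = [f"round {i}" for i in range(1000)]
just_so_table = {s: {"defect": 1.0, "cooperate": 0.0} for s in observed_situations}

print(len(just_so_table))  # 1000 entries and counting, versus one fixed rule.
# Both accounts reproduce the behaviour; the rule is the more parsimonious
# explanation, and the utility framing here is merely general, not informative.
```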
Although this “just so” utility function is valid, it doesn’t peek inside the skull—it’s not useful as a model of humans.
It’s a model of any computable agent.
Sorry, replace “model” with “emulation you can use to predict the emulated thing.”
There may be slightly more concise methods of modelling some agents—that seems to be roughly the concept that you are looking for.
I’m talking about looking inside someone’s head and finding the right algorithms running. Rather than “what utility function fits their actions,” I think the point here is “what’s in their skull?”
I’m talking about looking inside someone’s head and finding the right algorithms running. Rather than “what utility function fits their actions,” I think the point here is “what’s in their skull?”
The point made by the O.P. was:
Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don’t act like they have utility functions)
It discussed actions—not brain states. My comments were made in that context.
Er, what are you talking about? Did you not understand what was wrong with Luke’s sentence? Or what are you trying to say?