Although this “just so” utility function is valid, it doesn’t peek inside the skull—it’s not useful as a model of humans.
It’s a model of any computable agent.
Sorry, replace “model” with “emulation you can use to predict the emulated thing.”
There may be slightly more concise methods of modelling some agents—that seems to be roughly the concept that you are looking for.
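To make the “just so” construction concrete, here is a minimal Python sketch (my own illustration; the situation and action labels are made up): given any record of what an agent actually did, you can define a utility function that assigns 1 to the chosen action and 0 to everything else, under which the agent is trivially a utility maximizer.

```python
def just_so_utility(observed_choices):
    """Build a degenerate 'just so' utility function from an agent's
    observed behavior: whatever the agent actually did in a situation
    gets utility 1, every other action gets 0, so the agent trivially
    'maximizes utility'. Works for any computable agent's history."""
    chosen = dict(observed_choices)  # situation -> action actually taken

    def utility(situation, action):
        return 1.0 if chosen.get(situation) == action else 0.0

    return utility


# Hypothetical history of an agent that picks inconsistently across
# what look like the same gambles; it still fits this utility function.
history = [("gamble_A_vs_B", "A"), ("gamble_B_vs_A", "B")]
u = just_so_utility(history)
assert u("gamble_A_vs_B", "A") == 1.0  # the observed choice maximizes u
assert u("gamble_A_vs_B", "B") == 0.0
```

Of course, such a function is exactly as large as the behavior it summarizes, which is why “more concise methods of modelling” is the interesting notion here.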
I’m talking about looking inside someone’s head and finding the right algorithms running. Rather than “what utility function fits their actions,” I think the point here is “what’s in their skull?”
The point made by the O.P. was:
Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don’t act like they have utility functions)
It discussed actions—not brain states. My comments were made in that context.
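As a sketch of what “violate the axioms” means at the level of actions (illustrative only; the preference data below is made up): strict preferences that contain a cycle cannot be represented by any real-valued utility function, since u(a) > u(b) is transitive. The check below simply looks for a cycle in the revealed strict-preference relation.

```python
def admits_utility_ordering(prefs):
    """prefs: set of (a, b) pairs meaning 'a is strictly preferred to b'.
    A utility representation needs u(a) > u(b) for each pair, and > on
    the reals is transitive, so the strict-preference graph must be
    acyclic. Detect cycles with a depth-first search."""
    items = {x for pair in prefs for x in pair}
    graph = {x: [] for x in items}
    for a, b in prefs:
        graph[a].append(b)

    visiting, done = set(), set()

    def has_cycle(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: a preference cycle
        visiting.add(node)
        if any(has_cycle(nxt) for nxt in graph[node]):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return not any(has_cycle(x) for x in items)


# A money-pump cycle: prefers A to B, B to C, and C to A.
assert not admits_utility_ordering({("A", "B"), ("B", "C"), ("C", "A")})
# Transitive preferences are fine.
assert admits_utility_ordering({("A", "B"), ("B", "C"), ("A", "C")})
```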