Utility maximisation is not really a theory about how humans work. AFAIK, nobody thinks that humans have an internal representation of utility which they strive to maximise. Those who entertain this idea are usually busy constructing a straw-man critique.
It is like how you can model catching a ball with PDEs. You can build a pretty good model like that—even though it bears little relationship to the actual internal operation.
[2011 edit: hmm—the mind actually works a lot more like that than I previously thought!]
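To make the analogy concrete, here is a toy sketch of the kind of model meant here. It assumes drag-free projectile dynamics (plain ODEs, the simplest case of such a model, rather than full PDEs) and a fielder who just runs toward the computed landing point; every function name and parameter is illustrative, not from any real fielding model:

```python
import math

def simulate_catch(ball_v0=20.0, angle_deg=45.0, fielder_x=35.0,
                   fielder_speed=6.0, dt=0.01, g=9.81):
    """Integrate the ball's flight (drag-free, so plain ODEs) while a
    fielder runs at constant speed toward the predicted landing point."""
    angle = math.radians(angle_deg)
    vx, vy = ball_v0 * math.cos(angle), ball_v0 * math.sin(angle)
    x_land = vx * (2.0 * vy / g)   # closed-form landing point, launch at x=0
    bx, by, fx, t = 0.0, 0.0, fielder_x, 0.0
    while by >= 0.0:
        bx += vx * dt              # Euler step for the ball's motion...
        vy -= g * dt
        by += vy * dt
        # ...while the fielder heads for the predicted landing point.
        fx += fielder_speed * dt * math.copysign(1.0, x_land - fx)
        t += dt
    return abs(fx - bx) < 1.0, t, bx, fx   # "caught" if within arm's reach

caught, t, bx, fx = simulate_catch()
print(f"ball lands at {bx:.1f} m after {t:.2f} s; fielder at {fx:.1f} m "
      f"-> {'catch' if caught else 'miss'}")
```

The model's predictions can be quite good even though real fielders appear to rely on perceptual heuristics (e.g. keeping the ball's optical angle rising at a steady rate) rather than solving equations of motion.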
> It is like how you can model catching a ball with PDEs. You can build a pretty good model like that—even though it bears little relationship to the actual internal operation.
It’s kind of ironic that you mention PDEs, since PCT (Perceptual Control Theory) actually proposes that we do use something very like an evolutionary algorithm to satisfice our multi-goal controller setups. IOW, I don’t think it’s quite accurate to say that PDEs “bear little relationship to the actual internal operation.”
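For what it’s worth, here is a toy sketch of the kind of reorganization being claimed: an E. coli-style random walk over the gains of two control loops, which stops as soon as every goal’s error is merely tolerable. The plant, references, and tolerance are all illustrative assumptions, not anything from Powers’ actual models:

```python
import random

def run_controllers(k_pos, k_vel, steps=400, dt=0.05):
    """Two control loops share one output: one controls perceived position
    toward a reference of 1.0, the other controls perceived velocity toward 0."""
    x, v, total_error = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = k_pos * (1.0 - x) + k_vel * (0.0 - v)  # summed loop outputs
        v += u * dt                                # toy double-integrator plant
        x += v * dt
        total_error += abs(1.0 - x) + abs(v)       # combined goal error
    return total_error

def reorganize(tolerance=40.0, max_trials=5000, seed=1):
    """E. coli-style reorganization: random gain tweaks, kept when total
    error falls, stopped as soon as the result is merely good enough."""
    rng = random.Random(seed)
    gains = [1.0, 1.0]
    error = run_controllers(*gains)
    for _ in range(max_trials):
        if error <= tolerance:         # satisfice: quit at "good enough"
            break
        trial = [g + rng.gauss(0.0, 0.3) for g in gains]
        trial_error = run_controllers(*trial)
        if trial_error < error:        # keep improvements, discard the rest
            gains, error = trial, trial_error
    return gains, error

gains, error = reorganize()
print(f"satisficed at k_pos={gains[0]:.2f}, k_vel={gains[1]:.2f}, "
      f"total error {error:.1f}")
```

Note the stopping rule: the loop quits at “good enough” instead of hunting for an optimum, which is exactly what distinguishes satisficing from utility maximisation.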