Humans regularly use utility-based agents to do things like play the stock market. They seem to work OK to me. Nor do I agree with you about utility-based models of humans. Basically, most of your objections seem irrelevant to me.
When studying the stock market, we use the convenient approximation that people are utility maximizers (where the utility function is expected profit). But this is only an approximation, useful in this limited domain. Would you commit murder for money? No? Then your utility function isn’t really expected profit. Nor, as it turns out, is it anything else that can be written down—other than “the sum total of all my preferences”, at which point we have to acknowledge that we are not utility maximizers in any useful sense of the term.
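A toy sketch of where that approximation breaks down (all names, payoffs, and the "moral filter" below are invented for illustration): a literal expected-profit maximizer and an agent with richer preferences diverge exactly at the murder-for-money option.

```python
# Toy sketch: the "utility = expected profit" approximation vs. an agent
# with richer preferences. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_profit: float   # dollars
    morally_acceptable: bool

def profit_maximizer(actions):
    """The convenient approximation: pick whatever pays the most."""
    return max(actions, key=lambda a: a.expected_profit)

def actual_human(actions):
    """Real preferences screen on far more than profit."""
    permissible = [a for a in actions if a.morally_acceptable]
    return max(permissible, key=lambda a: a.expected_profit)

actions = [
    Action("trade stocks", 1_000.0, True),
    Action("commit murder for money", 1_000_000.0, False),
]

print(profit_maximizer(actions).name)  # -> commit murder for money
print(actual_human(actions).name)      # -> trade stocks
```

Within the narrow domain of market behaviour the two agents agree, which is why the approximation is useful there; they only come apart once options outside that domain are on the table.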
“We” don’t have to acknowledge that.
I’ve gone over my views on this issue before—e.g. here:
http://lesswrong.com/lw/1qk/applying_utility_functions_to_humans_considered/1kfj
If you reject utility-based frameworks in this context, then fine—but I am not planning to rephrase my point for you.
Right, I hadn’t read your comments in the other thread, but they are perfectly clear, and I’m not asking you to rephrase. But the key term in my last comment is “in any useful sense”. I do reject utility-based frameworks in this context, because their usefulness has been left far behind.
Personally, I think a utility-based approach is very useful for understanding behaviour. One can model most organisms pretty well as expected-fitness maximisers with limited resources. That idea is the foundation of much of evolutionary psychology.
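A minimal sketch of that modelling style (the activities, energy costs, and fitness payoffs are all made up): treat the organism as choosing, within a fixed energy budget, the bundle of activities with the highest expected fitness.

```python
# Minimal sketch: an organism as an expected-fitness maximiser with
# limited resources. Activities, costs, and payoffs are invented.

from itertools import combinations

ACTIVITIES = {            # name: (energy cost, expected fitness gain)
    "forage":      (3, 5.0),
    "build nest":  (4, 6.0),
    "court mate":  (5, 9.0),
    "stay hidden": (1, 1.0),
}
ENERGY_BUDGET = 8

def best_plan(activities, budget):
    """Brute force: the affordable subset with the highest expected fitness."""
    best_subset, best_fitness = (), 0.0
    for r in range(len(activities) + 1):
        for subset in combinations(activities, r):
            cost = sum(activities[name][0] for name in subset)
            fitness = sum(activities[name][1] for name in subset)
            if cost <= budget and fitness > best_fitness:
                best_subset, best_fitness = subset, fitness
    return best_subset, best_fitness

print(best_plan(ACTIVITIES, ENERGY_BUDGET))
# -> (('forage', 'court mate'), 14.0)
```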
The question isn’t whether the model is predictively useful with respect to most organisms, it’s whether it is predictively useful with respect to a hypothetical algorithm which replicates salient human powers such as epistemic hunger, model building, hierarchical goal seeking, and so on.
Say we’re looking to explain the process of inferring regularities (such as physical laws) by observing one’s environment—what does modeling this as “maximizing a utility function” buy us?
In comparison with what?
The main virtues of utility-based models are that they are general (and so allow comparisons across agents), and that they abstract goal-seeking behaviour away from implementation details such as finite memory and processing speed, which helps if you are interested in focusing on either of those areas.
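As a rough illustration of that generality claim (the interface and both agents below are invented for the example): one utility signature lets the same choice procedure compare quite different agents while hiding their internals entirely.

```python
# Rough sketch: one utility interface, many agents. Everything here is
# invented for illustration; real agents would be far more complex.

from typing import Protocol, Sequence

class Agent(Protocol):
    def utility(self, outcome: str) -> float: ...

def choose(agent: Agent, outcomes: Sequence[str]) -> str:
    """Works for any agent, whatever its memory limits or processing speed."""
    return max(outcomes, key=agent.utility)

class Trader:
    def utility(self, outcome: str) -> float:
        return {"profit": 1.0, "food": 0.1, "loss": -1.0}.get(outcome, 0.0)

class Forager:
    def utility(self, outcome: str) -> float:
        return {"food": 1.0, "profit": 0.2, "loss": 0.0}.get(outcome, 0.0)

OUTCOMES = ["profit", "loss", "food"]
print(choose(Trader(), OUTCOMES))   # -> profit
print(choose(Forager(), OUTCOMES))  # -> food
```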