Actually, in fairness to pjeby, I did a pretty good job of confusing them in my comment. If you look again, you will see that I was saying that standard utility maximization does a pretty good job on both the descriptive and the normative tasks.
And of course, as the whole structure of LW teaches us, utility maximization is only an approximation to the correct descriptive theory. I would claim that it is a good approximation, one that keeps getting better as the decision maker invests more and more cognitive resources in any particular decision. But an approximation nonetheless.
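To make that "better with more cognitive resources" claim concrete, here is a toy sketch (mine, not anything from the thread): a softmax chooser whose `effort` parameter stands in for invested cognitive resources. As `effort` grows, its behavior converges to strict argmax utility maximization, so the maximizing description fits the agent better and better.

```python
import math
import random

def choose(options, utility, effort):
    """Softmax (Boltzmann) choice: pick each option with probability
    proportional to exp(effort * utility). As `effort` grows, this
    converges to strict argmax utility maximization."""
    weights = [math.exp(effort * utility(o)) for o in options]
    total = sum(weights)
    r = random.uniform(0, total)
    for o, w in zip(options, weights):
        r -= w
        if r <= 0:
            return o
    return options[-1]

options = ["apple", "cake", "salad"]
utility = {"apple": 1.0, "cake": 2.0, "salad": 1.5}.get

for effort in (0.1, 1.0, 10.0):
    picks = [choose(options, utility, effort) for _ in range(1000)]
    print(effort, picks.count("cake") / 1000)  # fraction approaches 1.0 as effort grows
```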
So, what I am saying is that pjeby criticized me on descriptive grounds because that is where it seemed I had pitched my camp.
He made a “corollary” about the normative sense of utility maximization, right after an argument about its descriptive sense. Hence, confusion.
The choice of how you represent a computation is not value-neutral, even if all you care about is the computation speed.
The notion of a single utility function is computationally much better suited to machines than to humans, but that’s because it’s a much poorer representation of human values!
Conversely, single utility functions are poorly suited to processing on humans’ cognitive architecture, because our brains don’t really work that way.
Ergo, if you want to think about how humans will behave and what they will prefer, you are doing it suboptimally by using utility functions. You will have to think much harder to get worse answers than you would by thinking in terms of satisficing perceptual differences.
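For concreteness, here is a minimal sketch of the two styles of computation (the option names, attributes, weights, and thresholds are all invented, and "check each dimension against a threshold" is just one simple reading of satisficing): the maximizer must first commit to a single set of trade-off weights to collapse everything into one scalar, while the satisficer never needs that global collapse.

```python
# Toy contrast between a utility maximizer and a satisficer.
# All option data below is invented for illustration.

options = {
    "job_a": {"pay": 0.9, "commute": 0.3, "interest": 0.5},
    "job_b": {"pay": 0.6, "commute": 0.8, "interest": 0.7},
    "job_c": {"pay": 0.5, "commute": 0.6, "interest": 0.9},
}

def maximize(options, weights):
    """Collapse every attribute into one scalar utility, then argmax.
    Requires committing to a single set of trade-off weights up front."""
    score = lambda attrs: sum(weights[k] * v for k, v in attrs.items())
    return max(options, key=lambda name: score(options[name]))

def satisfice(options, thresholds):
    """Take the first option whose every attribute clears its threshold.
    No global trade-off weights; each dimension is checked on its own."""
    for name, attrs in options.items():
        if all(attrs[k] >= t for k, t in thresholds.items()):
            return name
    return None  # nothing is good enough; relax a threshold and retry

print(maximize(options, {"pay": 0.5, "commute": 0.2, "interest": 0.3}))  # job_b
print(satisfice(options, {"pay": 0.5, "commute": 0.5, "interest": 0.5}))  # job_b
```

Note what the maximizer demands that the satisficer doesn't: a complete, consistent set of weights over every attribute, which is exactly the kind of representation our brains don't seem to maintain.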
(IOW, the descriptive and normative aspects are pretty thoroughly intertwined, because the thing being described is also the thing that needs to be used to do the computation!)