He offered a “corollary” about the normative sense of utility maximization right after an argument about its descriptive sense.
The choice of how you represent a computation is not value-neutral, even if all you care about is the computation speed.
The notion of a single utility function is computationally much better suited to machines than humans—but that’s because it’s a much poorer representation of human values!
Conversely, single utility functions are poorly suited to running on humans’ cognitive architecture, because our brains don’t really work that way.
Ergo, if you want to think about how humans will behave and what they will prefer, utility functions are the wrong tool: you will have to think much harder to get worse answers than you would by thinking in terms of satisficing perceptual differences.
(IOW, the descriptive and normative aspects are pretty thoroughly intertwined, because the thing being described is also the thing that needs to be used to do the computation!)
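To make the contrast concrete, here is a minimal sketch (my illustration, not from the original argument): a utility maximizer must score every option on a single commensurable scale before it can act, while a satisficing controller only needs a local perceived-vs-reference error signal and does nothing once the difference is small enough. All names and numbers below are hypothetical.

```python
# Hypothetical contrast between the two representations discussed above.

def utility_maximizer(options, utility):
    """Scores every option on one global scale and picks the best."""
    return max(options, key=utility)

def satisficing_controller(perceived, reference, tolerance):
    """Acts only to shrink a perceived difference below tolerance;
    no global score over all options is ever computed."""
    error = reference - perceived
    if abs(error) <= tolerance:
        return 0.0   # close enough: satisfied, take no action
    return error     # otherwise, act to reduce the difference

if __name__ == "__main__":
    snacks = ["apple", "cookie", "nothing"]
    # The maximizer needs commensurable utilities for everything up front.
    print(utility_maximizer(snacks, {"apple": 2, "cookie": 3, "nothing": 0}.get))
    # The satisficer needs only one error signal (e.g. hunger vs. a set point).
    print(satisficing_controller(perceived=4.0, reference=5.0, tolerance=1.5))
    print(satisficing_controller(perceived=2.0, reference=5.0, tolerance=1.5))
```

The point of the sketch: the second representation never needs a total ordering over outcomes, which is why it is a cheaper fit for a brain built out of error-reducing control loops, and a worse fit for a machine built to optimize a single objective.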