I agree that humans are not utility-maximizers or similar goal-oriented agents. Not in the sense that we can't be modeled as such, but in the sense that such models don't compress our preferences to any great degree, because they are deeply at odds with the underlying mechanisms that actually determine our preferences and behavior.