Suppose it turned out that humans violate the axioms of VNM rationality (and therefore don’t act like they have utility functions) because there are three valuation systems in the brain that make conflicting valuations.
Humans violate any given set of axioms simply because they are not formally flawless reasoners, so explanations like this one become relevant only when we move to discussing an idealization, in this case a descriptive one. But properties of descriptive idealizations don’t translate easily into properties of normative idealizations.