Humans don’t make decisions based primarily on utility functions. To the extent that the Wise Master presented that as a descriptive fact rather than a prescriptive exhortation, he was just wrong on the facts. You can model behavior with a set of values and a utility function, but that model will not fully capture human behavior, or else will be so overfit that it ceases to be descriptive at all (e.g. “I have utility infinity for doing the stuff I do and utility zero for everything else” technically predicts your actions but is practically useless.)
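To make the overfitting point concrete, here is a minimal sketch (the option names and data are invented for illustration, not a model of anyone's real behavior): a "utility function" that assigns infinite utility to whatever was actually chosen reproduces every observed choice perfectly, yet says nothing about any choice it hasn't already memorized.

```python
# Illustrative sketch of the "overfit utility function" point.
# All data and names here are made up for the example.

observed_choices = {
    ("cake", "salad"): "cake",    # key: option set, value: what was actually picked
    ("gym", "couch"): "couch",
    ("save", "spend"): "spend",
}

def degenerate_utility(option, option_set):
    """Infinite utility for whatever was actually chosen, zero for everything else."""
    chosen = observed_choices.get(option_set)
    return float("inf") if option == chosen else 0.0

def predict(option_set):
    """Pick the option with the highest utility."""
    return max(option_set, key=lambda o: degenerate_utility(o, option_set))

# The "model" reproduces every observed choice...
assert all(predict(s) == c for s, c in observed_choices.items())

# ...but on a new option set every utility is zero, so the "prediction" is
# just an arbitrary tie-break. It technically fits; it predicts nothing.
print(predict(("cake", "broccoli")))
```

The assert passes trivially; the point is that a model with one free parameter per observation has no descriptive content left.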
You say that if humans don’t implement utility functions, there’s no point to reading Less Wrong. I disagree, but in any case, that doesn’t seem like an argument that humans implement utility functions. It seems more like an appeal to emotion: we are Less Wrongers who have some fraction of our identity connected to this site, so you want us to reject this proposition because of the emotional cost of the conclusions it brings about. Logically, though, it makes little sense to take the meaningfulness of Less Wrong as given and use that to reason about human cognition. That’s begging the question.
Nobody said that humans implement utility functions. Since I already said this, all I can do is say it again: values and utility functions are both models we construct to explain why we do what we do. Whether or not any mechanism inside your brain does computations homomorphic to utility computations is irrelevant.
Saying that humans don’t implement utility functions is like saying that the ocean doesn’t simulate fluid flow, or that a satellite doesn’t compute a trajectory.
It’s more like saying a pane of glass doesn’t simulate fluid flow, or an electron doesn’t compute a trajectory.
Which would be way off!
Does it flow, or simulate a flow?
Neither.
So how would you define rationality? What are you trying to do, when you’re trying to behave rationally?
Indeed, and a model which treats fuzzies and utils as exchangeable is a poor one.
You could equally well analyze the utils and the fuzzies, find subcategories of those, and say they are not exchangeable.
The task of modeling a utility function is the task of finding how these different things are exchangeable. We know they are exchangeable, because people have preferences between situations: they eventually do one thing or the other.
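To make that concrete, here is a minimal sketch of one way "finding how they are exchangeable" could look: infer a trade-off weight from observed pairwise choices. The numbers, the linear scoring rule, and the grid search are all illustrative assumptions, not a claim about how people actually weigh fuzzies against utils.

```python
# Illustrative sketch: recover an exchange rate between "fuzzies" and "utils"
# from revealed preferences. All data and the linear form are assumptions.

# Each trial: two options described by (fuzzies, utils), and which one was picked.
trials = [
    (((3.0, 1.0), (1.0, 2.0)), 0),   # picked the fuzzier option
    (((0.5, 4.0), (2.0, 1.0)), 0),   # picked the higher-util option
    (((1.0, 1.0), (2.0, 0.4)), 1),
]

def score(option, w):
    """Collapse (fuzzies, utils) into one number using exchange rate w."""
    fuzzies, utils = option
    return fuzzies + w * utils

def accuracy(w):
    """Fraction of observed choices this exchange rate reproduces."""
    hits = 0
    for (a, b), picked in trials:
        predicted = 0 if score(a, w) >= score(b, w) else 1
        hits += predicted == picked
    return hits / len(trials)

# Grid-search the exchange rate that best explains the observed choices.
best_w = max((w / 10 for w in range(0, 101)), key=accuracy)
print(best_w, accuracy(best_w))
```

The fact that the people in question eventually do one thing or the other is exactly what makes a fit like this possible at all; whether the fitted trade-off generalizes is the empirical question.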