you can always use meta factors to argue why your revealed preferences actually were coherent.
Three observations. First, those aren’t meta factors, those are just normal positive terms in the utility function that one formulation ignores and another one includes. Second, “you can always use” does not necessarily imply that the argument is wrong. Third, we are not arguing about coherency—why would the claim that, say, I value the perception of myself as someone who votes for X more than 10c be incoherent?
we know that humans are born some distance from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by moving them somewhat closer to that ideal than where they started.
I disagree, both with the claim that getting closer to the ideal of a perfect utility maximizer necessarily adds value to people’s lives, and with the interpretation of the art of rationality as the art of getting people to be more like that utility maximizer.
Besides, there is still the original point: even if you posit some entity as a perfect utility maximizer, what would its utility function include? Can you use the utility function to figure out which terms should go into the utility function? Colour me doubtful. In crude terms, how do you know what to maximize?
Well, I guess I'll focus on what seems to be our most fundamental disagreement: my claim that getting value from studying rationality usually involves getting yourself closer to an ideal utility maximizer (not necessarily all the way there).
Reading the Allais Paradox post can make a reader notice their contradictory preferences, reflect on them, and subsequently be a little less contradictory, to their benefit. That seems like a good representative example of what studying rationality looks like and how it adds value.
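To make the contradiction concrete, here is a minimal sketch of the Allais Paradox, using the standard payoffs from the literature. Writing u0, u1, u5 for the utilities someone assigns to $0, $1M, and $5M (assuming only that they are increasing), the commonly observed choice pattern, 1A over 1B together with 2B over 2A, cannot be produced by any expected-utility maximizer: both preferences reduce to opposite comparisons of the same quantity.

```python
import random

def prefers_1a_over_1b(u0, u1, u5):
    # 1A: $1M for sure.  1B: 10% $5M, 89% $1M, 1% $0.
    # Rearranged, this says 0.11*u1 > 0.10*u5 + 0.01*u0.
    return u1 > 0.10 * u5 + 0.89 * u1 + 0.01 * u0

def prefers_2b_over_2a(u0, u1, u5):
    # 2A: 11% $1M, 89% $0.  2B: 10% $5M, 90% $0.
    # Rearranged, this says 0.10*u5 + 0.01*u0 > 0.11*u1.
    return 0.10 * u5 + 0.90 * u0 > 0.11 * u1 + 0.89 * u0

# A random search over monotone utility assignments never finds one that
# rationalizes both choices at once.
random.seed(0)
for _ in range(100_000):
    u0, u1, u5 = sorted(random.random() for _ in range(3))
    assert not (prefers_1a_over_1b(u0, u1, u5) and prefers_2b_over_2a(u0, u1, u5))
print("no utility assignment rationalizes both 1A and 2B")
```

The search is redundant once you see the rearranged inequalities, but running it is a quick way to convince yourself that the inconsistency is in the preferences, not in any particular choice of utility function.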
You assert this as if it were an axiom. It doesn’t look like one to me. Show me the benefit.
And I still don't understand why I would want to become an ideal utility maximizer.
For the sake of organization, I suggest discussing such things on the comment threads of Sequence posts.
If you could flip a switch right now that makes you an ideal utility maximizer, you wouldn’t do it?
Who gets to define my utility function? I don’t have one at the moment.
I would never flip a switch like that.