“Picking one charity and sticking to it” would follow from most “functions defined on worlds” that I’m able to imagine, while the firework of meaningless actions that we repeat every day seems to defy any explanation by utility… unless you’re willing to imagine “utility” as a kind of carrot that points in totally different directions depending on the time of day and what you ate for lunch. But in general, yes, I concede I didn’t prove logically that we have no utility function.
Of course, if you’re serious about falsifying utility theory, just work from any published example of preference reversal in real humans. There are many to choose from.
I would argue that it does a better job of explaining actual human behaviour than your type 1 theory, which as stated would seem to have trouble accounting for me deciding to shut the door or go for a walk to get away from the tempting smell of food, because I have a preference for future world states where I am not fat.
Going by the relative frequency of your scenario vs mine, I’d say my theory wins this example hands down. :-) Even if we consider only people with a consciously stated goal of being non-fat.
I think it is potentially useful however to model our decision making process as a process by which our brains evaluate possible future states of the world and prefer some states to others (a ‘utility function’ in a looser sense) and favour actions which are expected to lead to preferred outcomes.
At most you can say that our brains evaluate descriptions of future states and weigh their emotional impact. Eliezer wrote eloquently about one particularly obvious preference reversal of this sort, and of course immediately launched into defending expected utility as a prescriptive rather than descriptive theory. Shut up and multiply, silly humans.
“Picking one charity and sticking to it” would follow from most utility functions I’m able to imagine
I think your imagination is rather limited then. Charitable donations as a signaling activity are one example. If you donate to charity partly to signal to others that you are an altruistic person and use your choice of charity to signal the kinds of things that you care about then donating to multiple charities can make perfect sense. Donating $500 to Oxfam and $500 to the WWF may deliver greater signaling benefits than donating $1000 to one of the two as it will be an effective signal both for third parties who prioritize famine and for third parties who prioritize animal welfare. If you are partly buying ‘fuzzies’ by donating to charity then donating to the two charities may allow you to feel good whenever you encounter news stories about either famine or endangered pandas, for a net benefit greater than feeling slightly more virtuous on encountering a subset of stories.
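The arithmetic of that split can be made concrete with a toy model. This is my own illustration, not anything from the thread: I assume each cause reaches a distinct audience, and that signaling value within any one audience has diminishing returns (square root of the amount donated):

```python
import math

# Toy assumption: signaling value per cause-audience grows as sqrt(dollars),
# i.e. diminishing returns within any single audience.
def signaling_value(donations):
    """donations: mapping of cause -> dollars donated to that cause."""
    return sum(math.sqrt(amount) for amount in donations.values())

split = signaling_value({"famine": 500, "wildlife": 500})  # two audiences
single = signaling_value({"famine": 1000})                 # one audience

assert split > single  # ~44.7 vs ~31.6: the split donation signals more
```

Any concave value function gives the same qualitative result; the square root is just a convenient stand-in.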
Between evolutionary psychology, game theory, micro and behavioural economics, and public choice theory, to name a few research areas, I have found a lot of insightful explanations of human behaviour that demonstrate people rationally responding to incentives. The explanations often reveal that behaviour which appears irrational according to one version of utility makes perfect sense when you realize what people’s actual goals and preferences are. That’s not to say there aren’t examples of biases and flaws in reasoning, but I’ve found considerable practical value in explaining human action through models that assume rational utility maximization.
Incidentally, I don’t believe that demonstrations of preference reversal falsify the kind of model I’m talking about. They only falsify the naive ‘fully conscious rational agent with a static utility function’ model, which is not much worth defending anyway.
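To make “preference reversal” concrete for readers who haven’t seen one: hyperbolic discounting is the textbook example. The sketch below is a generic illustration with an arbitrary discount rate, not a model of anyone in this thread:

```python
# Hyperbolic discounting: present value = amount / (1 + k * delay).
# k = 0.2 per day is an arbitrary illustrative rate.
def hyperbolic_value(amount, delay_days, k=0.2):
    return amount / (1 + k * delay_days)

# Asked today: $100 now beats $110 tomorrow...
assert hyperbolic_value(100, 0) > hyperbolic_value(110, 1)
# ...but the same pair of options, pushed 30 days out, reverses.
assert hyperbolic_value(100, 30) < hyperbolic_value(110, 31)
```

No single static exponential discount curve can produce both judgments at once, which is why such reversals falsify the naive static-utility model while leaving looser models intact.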
From the relative frequency of your scenario vs mine, I’d say my theory wins this example hands down. :-)
Your theory fails to account for the exceptions at all though. And I have had great success losing weight by consciously arranging my environment to reduce exposure to temptation. How does your theory account for that kind of behaviour?
Aaah! No, no. I originally used “picking one charity” as a metaphor for following any real-world goal concertedly and monomaniacally. I foolishly thought it would be transparent to everyone. Sorry.
Yes, incentives do work, and utility-based models do have predictive and explanatory power. Many local areas of human activity are well modeled by utility, but it’s different utilities in different situations, not a One True Utility innate to the person. And I’m very wary of shoehorning stuff into utility theory when it’s an obviously poor fit, like moral judgements or instinctive actions.
My theory doesn’t consider rational behavior impossible—it’s just exceptional. A typical day will contain one rationally optimized decision (if you’re really good; otherwise zero) and thousands of decisions made for you by your tendencies.
At least that’s been my experience; maybe there are super-people who can do better. People who really do shut up and multiply with world-states. I’d be really scared of such people because (warning, Mind-Killer ahead) my country was once drowned in blood by revolutionaries wishing to build a rational, atheistic, goal-directed society. Precisely the kind of calculating altruists who’d never play chess while there was a kid starving anywhere. Of course they ultimately failed. If they’d succeeded, you’d now be living in the happiest utopia that was imaginable in the 19th century: world communism. Let that stand as a kind of “genetic” explanation for my beliefs.
My theory doesn’t consider rational behavior impossible—it’s just exceptional. A typical day will contain one rationally optimized decision (if you’re really good; otherwise zero) and thousands of decisions made for you by your tendencies.
This relates to my earlier comment about ignoring the computational limits on rationality. It wouldn’t be rational to put a lot of effort into rationally optimizing every decision you make during the day. In my opinion, any attempt to improve human rationality has to recognize that resource limitations and computational limits are an important constraint. Having an imperfect but reasonable heuristic for most decisions is a rational solution to the problem of making decisions given limited resources. It would be great to figure out how to do better given the constraints, but theories that start from an assumption of unlimited resources are going to be of limited practical use.
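That cost-benefit point can be sketched as a toy simulation. All the numbers here (the stake distribution, the effort cost, the 90%-effective heuristic) are assumptions I made up for illustration, not claims about real cognition:

```python
import random

random.seed(0)
# 1000 daily decisions with mostly small stakes (exponential distribution).
stakes = [random.expovariate(1.0) for _ in range(1000)]

COST = 0.5       # assumed effort cost of deliberately optimizing one decision
HEURISTIC = 0.9  # assumed fraction of a decision's value a cheap habit captures

# Strategy 1: rationally optimize everything, paying the effort cost each time.
optimize_all = sum(s - COST for s in stakes)

# Strategy 2: use the heuristic unless the stake is big enough to repay effort.
selective = sum(s - COST if s * (1 - HEURISTIC) > COST else s * HEURISTIC
                for s in stakes)

assert selective > optimize_all  # deliberating everywhere is self-defeating
```

Under these assumptions the selective strategy wins comfortably: almost all decisions are too small for deliberation to repay its cost, so a good-enough heuristic dominates.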
my country was once drowned in blood by revolutionaries wishing to build a rational, atheistic, goal-directed society.
I can see how conflating communism with rationality would lead you to be distrustful of rationality. I personally think the greatest intellectual failure of communism was failing to recognize the importance of individual incentives and utility maximization, or to acknowledge the gap between people’s stated intentions and their actual motivations, which means in my view it was never rational. The economic calculation problem critique of socialism, posed by Mises and developed by Hayek, is an example of recognizing the importance of computational constraints when trying to improve decisions. I’d agree that there is a danger of people with a naive view of rationality and utility thinking that communism is a good idea, though.