Human beings aren’t goal systems. We DON’T SUM, any more than a car “sums” the value of its speedometer with the value of the fuel gauge. If we actually summed, you’d get the outcome Eliezer once advocated: every one of us “picking one charity and donating as much to it as he can”.
That seems an obviously fallacious argument to me. Many posts on OB have talked about other motivations behind charitable giving—whether it’s ‘buying fuzzies’ or signalling. You seem to be arguing that because one possible (but naive and inaccurate) model of a person’s utility function would predict different behaviour than what we actually observe, the observed behaviour is evidence against any utility function being maximized. There are pretty clearly at least two possibilities here: either humans don’t maximize a utility function or they maximize a different utility function from the one you have in mind.
Personally I think humans are imperfect maximizers of utility functions that are sufficiently complex that the ‘function’ terminology is as misleading as it is enlightening, but your argument really doesn’t support your conclusion.
Consider a simple human behavior: notice the smell of yummy food from the kitchen where Mom’s cooking, head there to check and grab a bite. Which of the following sounds like a more fitting model:
1) We have a circuit hardwired to react to yummy smells when we’re hungry.
2) We subconsciously sort different world-states according to a utility function that, among numerous other terms, assigns high weight to finding food when we’re hungry. (What?)
If most of our behavior is better explained by arguments of type 1, why shoehorn it into a utility function, and what guarantee do you have that a suitable function exists? (Sorry, “shoehorning” is really the best term for e.g. Eliezer’s arguments in favor of SPECKS or against certain kinds of circular preferences. Silly humans, my theory says you must have a coherent utility function on all imaginable worlds—or else you’re defective.) The potential harm from enforcing a total ordering on world states has, I believe, already been convincingly demonstrated; your turn.
I think a few different issues are getting entangled here. I’m going to try and disentangle them a little.
First, my post was primarily addressing the flawed argument that the fact we don’t all ‘pick one charity and donate as much to it as we can’ is evidence against us being utility maximizers for some incompletely known utility function. Any argument that postulates a particular utility function, demonstrates that observed human behaviour does not maximize it, and presents this as evidence that we are not utility maximizers is flawed, since the observed behaviour could also be explained by maximization of a different utility function. Now you could argue that this makes the theory that we are utility maximizers unfalsifiable, and I think that complaint has some merit, but the original argument is still unsound.
Another issue is what exactly we mean by a utility function. If we’re talking about a function that takes world states as inputs and returns a real number representing utility as an output, then it’s pretty clear that our brains do not encode such a function. I think it is potentially useful, however, to model our decision-making process as one in which our brains evaluate possible future states of the world, prefer some states to others (a ‘utility function’ in a looser sense), and favour actions which are expected to lead to preferred outcomes. If you’d prefer not to call this a utility function then perhaps you can suggest alternative terminology? If you dispute the value of this as a model for human decision making then that’s also a valid position, but let’s focus on that discussion.
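To make that looser sense concrete, here is a minimal sketch of the kind of model I mean, in code. The actions, outcome probabilities and preference scores are all invented for illustration; nothing here is a claim about how brains actually implement this.

```python
# Toy sketch of the looser model above: evaluate the states each action is
# expected to lead to, prefer some states to others, and favour the action
# whose expected outcome is most preferred. Every name and number here is
# invented for illustration, not a claim about how brains actually compute.

def expected_preference(action, outcomes, prefer):
    """Weight the preference score of each possible outcome by its probability."""
    return sum(p * prefer(state) for state, p in outcomes[action].items())

def choose(actions, outcomes, prefer):
    """Favour the action expected to lead to the most preferred outcome."""
    return max(actions, key=lambda a: expected_preference(a, outcomes, prefer))

# The kitchen scenario, with made-up numbers.
actions = ["grab a bite", "shut the kitchen door"]
outcomes = {
    "grab a bite":           {"fed now, fatter later": 0.9, "still hungry": 0.1},
    "shut the kitchen door": {"hungry but not fatter": 1.0},
}
prefer = {"fed now, fatter later": 0.3,
          "still hungry": 0.1,
          "hungry but not fatter": 0.7}.get

print(choose(actions, outcomes, prefer))  # -> shut the kitchen door
```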
Despite the flaws of the ‘utility maximizing’ model I think it has a lot of explanatory and predictive power. I would argue that it does a better job of explaining actual human behaviour than your type 1 theory, which as stated would seem to have trouble accounting for me deciding to shut the door or go for a walk to get away from the tempting smell of food because I have a preference for future world states where I am not fat.
My biggest problem with more extreme forms of ‘utility maximizing’ arguments is that I think they do not pay enough attention to the computational limits that prevent a perfect utility maximizer from being realizable. This doesn’t mean the models aren’t useful—a model of a chess-playing computer that attempts to explain/predict its behaviour by postulating that it is trying to achieve optimal chess outcomes is still useful even if the computer is low powered or poorly coded and so plays sub-optimally.
“Picking one charity and sticking to it” would follow from most “functions defined on worlds” that I’m able to imagine, while the firework of meaningless actions that we repeat every day seems to defy any explanation by utility… unless you’re willing to imagine “utility” as a kind of carrot that points in totally different directions depending on the time of day and what you ate for lunch. But in general, yes, I concede I didn’t prove logically that we have no utility function.
Of course, if you’re serious about falsifying the utility theory, just work from any published example of preference reversal in real humans. There are many to choose from.
I would argue that it does a better job of explaining actual human behaviour than your type 1 theory, which as stated would seem to have trouble accounting for me deciding to shut the door or go for a walk to get away from the tempting smell of food because I have a preference for future world states where I am not fat.
Going by the relative frequency of your scenario vs mine, I’d say my theory wins this example hands down. :-) Even if we consider only people with a consciously stated goal of being non-fat.
I think it is potentially useful, however, to model our decision-making process as one in which our brains evaluate possible future states of the world, prefer some states to others (a ‘utility function’ in a looser sense), and favour actions which are expected to lead to preferred outcomes.
At most you can say that our brains evaluate descriptions of future states and weigh their emotional impact. Eliezer wrote eloquently about one particularly obvious preference reversal of this sort, and of course immediately launched into defending expected utility as a prescriptive rather than descriptive theory. Shut up and multiply, silly humans.
“Picking one charity and sticking to it” would follow from most utility functions I’m able to imagine
I think your imagination is rather limited, then. Charitable donations as a signaling activity are one example. If you donate to charity partly to signal to others that you are an altruistic person and use your choice of charity to signal the kinds of things that you care about, then donating to multiple charities can make perfect sense. Donating $500 to Oxfam and $500 to the WWF may deliver greater signaling benefits than donating $1000 to one of the two, as it will be an effective signal both for third parties who prioritize famine and for third parties who prioritize animal welfare. If you are partly buying ‘fuzzies’ by donating to charity, then donating to the two charities may allow you to feel good whenever you encounter news stories about either famine or endangered pandas, for a net benefit greater than feeling slightly more virtuous on encountering a subset of stories.
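To make the arithmetic concrete, here is a toy calculation. The square-root benefit function is an invented assumption standing in for diminishing returns, not anything measured:

```python
from math import sqrt

# Toy illustration: if the signalling / 'fuzzies' benefit from each cause has
# diminishing returns (assumed here to be sqrt of the amount donated to that
# cause, purely for illustration), splitting a donation beats concentrating it.
def benefit(donations):
    return sum(sqrt(amount) for amount in donations.values())

split = benefit({"Oxfam": 500, "WWF": 500})    # 2 * sqrt(500), about 44.7
concentrated = benefit({"Oxfam": 1000})        # sqrt(1000),    about 31.6
print(split > concentrated)                    # True under this assumption
```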
Between evolutionary psychology, game theory, micro and behavioural economics, and public choice theory, to name a few research areas, I have found a lot of insightful explanations of human behaviour that demonstrate people rationally responding to incentives. The explanations often reveal that behaviour which appears irrational according to one version of utility makes perfect sense when you realize what people’s actual goals and preferences are. That’s not to say there aren’t examples of biases and flaws in reasoning, but I’ve found considerable practical value in explaining human action through models that assume rational utility maximization.
Incidentally, I don’t believe that demonstrations of preference reversal falsify the kind of model I’m talking about. They only falsify the naive ‘fully conscious rational agent with a static utility function’ model which is not much worth defending anyway.
From the relative frequency of your scenario vs mine, I’d say my theory wins this example hands down. :-)
Your theory fails to account for the exceptions at all though. And I have had great success losing weight by consciously arranging my environment to reduce exposure to temptation. How does your theory account for that kind of behaviour?
Aaah! No, no. I originally used “picking one charity” as a metaphor for following any real-world goal concertedly and monomaniacally. I foolishly thought it would be transparent to everyone. Sorry.
Yes, incentives do work, and utility-based models do have predictive and explanatory power. Many local areas of human activity are well modeled by utility, but it’s different utilities in different situations, not a One True Utility innate to the person. And I’m very wary of shoehorning stuff into utility theory when it’s an obviously poor fit, like moral judgements or instinctive actions.
My theory doesn’t consider rational behavior impossible—it’s just exceptional. A typical day will contain one rationally optimized decision (if you’re really good; otherwise zero) and thousands of decisions made for you by your tendencies.
At least that’s been my experience; maybe there are super-people who can do better. People who really do shut up and multiply with world-states. I’d be really scared of such people because (warning, Mind-Killer ahead) my country was once drowned in blood by revolutionaries wishing to build a rational, atheistic, goal-directed society. Precisely the kind of calculating altruists who’d never play chess while there was a kid starving anywhere. Of course they ultimately failed. If they’d succeeded, you’d now be living in the happiest utopia that was imaginable in the 19th century: world communism. Let that stand as a kind of “genetic” explanation for my beliefs.
My theory doesn’t consider rational behavior impossible—it’s just exceptional. A typical day will contain one rationally optimized decision (if you’re really good; otherwise zero) and thousands of decisions made for you by your tendencies.
This relates to my earlier comment about ignoring the computational limits on rationality. It wouldn’t be rational to put a lot of effort into rationally optimizing every decision you make during the day. In my opinion any attempts to improve human rationality have to recognize that resource limitations and computational limits are an important constraint. Having an imperfect but reasonable heuristic for most decisions is a rational solution to the problem of making decisions given limited resources. It would be great to figure out how to do better given the constraints but theories that start from an assumption of unlimited resources are going to be of limited practical use.
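As a crude sketch of that constraint (with invented gain and cost numbers, not a model of actual cognition): only pay the cost of careful optimization when the expected improvement over the default heuristic is worth it.

```python
# Crude sketch of bounded rationality: use the cheap heuristic by default and
# only deliberate when the expected gain exceeds the cost of deliberating.
# The gain and cost figures are invented for illustration.

def decide(heuristic_choice, deliberate, expected_gain, deliberation_cost):
    if expected_gain > deliberation_cost:
        return deliberate()        # worth spending the effort
    return heuristic_choice        # the imperfect but reasonable heuristic

# Routine choice: tiny stakes, not worth optimizing.
print(decide("usual breakfast", lambda: "optimized meal plan",
             expected_gain=0.1, deliberation_cost=5))    # -> usual breakfast

# High-stakes choice: deliberation pays for itself.
print(decide("accept first job offer", lambda: "carefully compared offers",
             expected_gain=50, deliberation_cost=5))     # -> carefully compared offers
```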
my country was once drowned in blood by revolutionaries wishing to build a rational, atheistic, goal-directed society.
I can see how conflating communism with rationality would lead you to be distrustful of rationality. I personally think the greatest intellectual failure of communism was failing to recognize the importance of individual incentives and utility maximization, or to acknowledge the gap between people’s stated intentions and their actual motivations, which means that in my view it was never rational. Hayek’s economic calculation critique of socialism is an example of recognizing the importance of computational constraints when trying to improve decisions. I’d agree that there is a danger of people with a naive view of rationality and utility thinking that communism is a good idea, though.