Expected futility for humans

Previously, Taw published an article entitled “Post your utility function”, after having tried (apparently unsuccessfully) to work out “what his utility function was”. I suspect that there is something to be gained by trying to work out what your priorities are in life, but I am not sure that people on this site are helping themselves very much by assigning dollar values, probabilities and discount rates. If you haven’t done so already, you can learn why people like the utility function formalism on Wikipedia. I will say one thing about the expected utility theorem, though. An assignment of utilities to outcomes is (up to a positive affine transformation) equivalent to a preference ordering over probabilistic mixtures of those outcomes; utilities are NOT properties of the outcomes you are talking about, they are properties of your mind. Goodness, like confusion, is in the mind.
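To make the affine-invariance point concrete, here is a minimal sketch in Python (purely illustrative; the outcomes, probabilities and numbers are invented for the example) showing that rescaling a utility function by any positive affine transformation leaves the ranking of probabilistic mixtures unchanged:

    # Illustrative toy example: utilities are only meaningful up to a positive
    # affine transformation, so u and a*u + b (with a > 0) induce the same
    # preferences over lotteries (probabilistic mixtures of outcomes).
    utilities = {"stay_home": 0.0, "picnic": 1.0, "concert": 0.6}

    def expected_utility(lottery, u):
        """Expected utility of a lottery given as {outcome: probability}."""
        return sum(p * u[outcome] for outcome, p in lottery.items())

    lottery_a = {"picnic": 0.5, "stay_home": 0.5}
    lottery_b = {"concert": 1.0}

    rescaled = {o: 7.0 * v - 3.0 for o, v in utilities.items()}  # a = 7, b = -3

    prefers_a = expected_utility(lottery_a, utilities) > expected_utility(lottery_b, utilities)
    prefers_a_rescaled = expected_utility(lottery_a, rescaled) > expected_utility(lottery_b, rescaled)
    assert prefers_a == prefers_a_rescaled  # the numbers change, the preference does not

The point of the toy example is only that the numbers carry no meaning beyond the ordering of mixtures they induce, which is why utilities are best thought of as a description of your preferences rather than of the outcomes themselves.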
In this article, I will claim that trying to run your life based upon expected utility maximization is not a good idea, and thus that asking “what your utility function is” is not a useful question to try to answer.
There are many problems with using expected utility maximization to run your life. Firstly, the size of the set of outcomes that one must consider in order to apply the theory rigorously is ridiculous: one must consider all probabilistic mixtures of possible histories of the universe from now until whatever your time horizon is. Even after identifying macroscopically indistinguishable histories, this set is huge. Humans naturally describe world-histories in terms of deontological rules, such as “if someone is nice to me, I want to be nice back to them”, “if I fall in love, I want to treat my partner well (unless s/he betrays me)”, “I want to achieve something meaningful with my life and be well regarded”, or “I want to help other people”. In order to translate these rules into utilities attached to world-histories, you would have to assign a dollar utility to every possible world-history, covering every variant of whom you fall in love with, where you settle, what career you have, what you do with your friends, and so on.

Describing your utility function as a linear sum of independent terms will not work in general, because different aspects of your life interact: whether accounting is a good career for you, for example, will depend upon the kind of personal life you want to live. You can, of course, emulate deontological rules such as “I want to help other people” in a complex utility function (that is what the process of enumerating human-distinguishable world-histories amounts to), but it is nowhere near as efficient a representation as the usual deontological rules of thumb that people live by, particularly given that the human mind is well adapted to representing deontological preferences (such as “I must be nice to people”; as was discussed before, there is a large amount of hidden complexity behind this simple English sentence) and very poor at representing and manipulating floating-point numbers.
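As a purely illustrative sketch (the life dimensions, scores and interaction term below are invented for the example), even a toy model shows both problems at once: the number of distinguishable world-histories grows multiplicatively with every aspect of life you add, and a utility built as a sum of independent per-aspect scores cannot capture interactions between aspects:

    from itertools import product
    from math import prod

    # Toy, invented life dimensions: a few options per aspect of life.
    options = {
        "career": ["accounting", "research", "music"],
        "partner": ["none", "A", "B"],
        "city": ["hometown", "big_city"],
        "friends": ["few_close", "many_casual"],
    }

    # Static combinations alone, before considering how a life unfolds over
    # time or any probabilistic mixtures of histories.
    print(prod(len(v) for v in options.values()))  # 36, and each new aspect multiplies it

    # An additive utility: an independent score per aspect, summed.
    aspect_score = {
        ("career", "accounting"): 2.0, ("career", "research"): 1.5, ("career", "music"): 1.0,
        ("city", "hometown"): 1.0, ("city", "big_city"): 2.0,
    }

    def additive_utility(life):
        return sum(aspect_score.get((k, v), 0.0) for k, v in life.items())

    # But aspects interact: in this toy model, accounting is only worthwhile in
    # the big city, so the "true" utility is not a sum of independent terms.
    def toy_true_utility(life):
        u = additive_utility(life)
        if life["career"] == "accounting" and life["city"] == "hometown":
            u -= 2.5  # an interaction term that no additive decomposition captures
        return u

    for combo in product(*options.values()):
        life = dict(zip(options, combo))
        if additive_utility(life) != toy_true_utility(life):
            print("additive model mis-scores:", life)
            break

Nothing here is meant as a serious model of anyone’s preferences; the point is only that the table of numbers you would need grows combinatorially, while the rules of thumb it encodes stay short.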
Toby Ord’s BPhil thesis has some interesting critiques of naive consequentialism, and would probably provide an entry point to the literature:
‘An uncomplicated illustration is provided by the security which lovers or friends produce in one another by being guided, and being seen to be guided, by maxims of virtually unconditional fidelity. Adherence to such maxims is justified by this prized effect, since any retreat from it will undermine the effect, being inevitably detectable within a close relationship. This is so whether the retreat takes the form of intruding calculation or calculative monitoring. The point scarcely needs emphasis.’
There are many other pitfalls. One is thinking that you know what is of value in your life, and forgetting what the most important things are (such as youth, health, friendship, family, humour, a sense of personal dignity, a sense of your own moral purity, acceptance by your peers, social status, and so on) because they have always been there and you took them for granted. Another is that, since we humans labour under a considerable number of delusions about the nature of our own lives (in particular, that our actions are influenced exclusively by our long-term plans rather than by the situations we find ourselves in or by our base animal desires), we often find that our actions have unintended consequences. Human life is naturally complicated enough that this would happen anyway, but attempting to optimize your life whilst under the influence of systematic delusions about the way it really works is likely to make it worse than if you just stick to default behaviour.
What, then, is the best decision procedure for deciding how to improve your life? Certainly I would steer clear of dollar values and expected utility calculations, because that formalism is a huge leap away from our intuitive decision procedures. It seems wiser to me to make small, incremental changes to your decision procedure for getting things done. For example, if you currently decide what to do based completely upon your whims, consider making a rough list of goals in your life (with no particular priorities attached) and updating your progress on them. If you already do this, consider brainstorming for other goals that you might have ignored, and then attaching priorities based upon the assumption that you will either certainly achieve or certainly fail to achieve each of these goals, ignoring what probabilistic mixtures you would accept (because your mind probably won’t be able to handle the probabilistic aspect in a numerical way anyway).
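For what it is worth, here is a deliberately low-tech sketch of that kind of goal list (the goals, priority labels and progress notes are invented placeholders); it uses coarse ordinal priorities and rough progress notes rather than dollar values or probabilities:

    # A deliberately crude goal list: coarse priorities and rough progress notes,
    # no dollar values, no probabilities. All entries are invented placeholders.
    goals = [
        {"goal": "keep in regular contact with family", "priority": "high",   "progress": "ongoing"},
        {"goal": "learn to cook ten decent meals",      "priority": "medium", "progress": "3/10"},
        {"goal": "move to a more meaningful job",       "priority": "unsure", "progress": "not started"},
    ]

    # The "review" step is just to re-read the list in rough priority order,
    # not to compute any expected value.
    priority_order = {"high": 0, "medium": 1, "low": 2, "unsure": 3}
    for g in sorted(goals, key=lambda g: priority_order[g["priority"]]):
        print(f'{g["priority"]:>6}  {g["goal"]}  [{g["progress"]}]')

The only design point is the one made above: keep the representation close to how you already think about your goals, and add structure gradually rather than jumping straight to numerical optimization.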