What counts as a “successful” utility function?
In general terms there are two conflicting ways to come up with utility functions, and these seem to imply different metrics of success.
The first assumes that “utility” corresponds to something real in the world, such as some sort of emotional or cognitive state. On this view, the goal, when specifying your utility function, is to get numbers that reflect this reality as closely as possible. You say “I think x will give me 2 emotilons” and “I think y will give me 3 emotilons”; you test this by giving yourself x and y; and success is when the results match your predictions.
The second assumes that we already have a set of preferences, and “utility” is just a number we use to represent these, such that xPy ⇔ u(x)>u(y), where xPy means “x is preferred to y”. (More generally, when x and y may be gambles, we want: xPy ⇔ E[u(x)]>E[u(y)]).
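For concreteness, the second sense can be sketched in a few lines of code. This is a toy illustration (the outcomes and numbers are made up): a utility assignment “succeeds” exactly when it reproduces the given strict preference order.

```python
# Toy sketch of utility in the second, representational sense:
# u "succeeds" iff xPy <=> u(x) > u(y) for every pair of outcomes.

def represents(preferences, u):
    """Check xPy <=> u(x) > u(y) for all ordered pairs of outcomes.

    `preferences` is a set of (x, y) pairs meaning "x is strictly
    preferred to y"; `u` maps each outcome to a real number.
    """
    outcomes = {o for pair in preferences for o in pair}
    for x in outcomes:
        for y in outcomes:
            if x == y:
                continue
            if ((x, y) in preferences) != (u[x] > u[y]):
                return False
    return True

# A consistent (complete, transitive) strict order over three outcomes:
prefs = {("a", "b"), ("a", "c"), ("b", "c")}
print(represents(prefs, {"a": 3, "b": 2, "c": 1}))  # True
print(represents(prefs, {"a": 1, "b": 2, "c": 3}))  # False
```

If the preference set were intransitive (say, aPb, bPc, cPa), no assignment of real numbers could pass this check, which is the sense in which merely being able to write the function down is the “success” criterion.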
It’s less clear what the point of specifying a utility function is supposed to be in the second case. Once you have preferences, specifying the utility function has no additional information content: it’s just a way of representing them with a real number. I guess “success” in this case simply consists in coming up with a utility function at all: if your preferences are inconsistent (e.g. incomplete, intransitive, …) then you won’t be able to do it, so being able to do it is a good sign.
Much of the discussion about utility functions on this site seems to me to conflate these two distinct senses of “utility”, with the result that it’s often difficult to tell what people really mean.
When I teach decision analysis, I don’t use the word “utility” for exactly this reason. I separate the “value model” from the “u-curve.”
The value model is what translates all the possible outcomes of the world into a number representing value. For example, a business decision analysis might have inputs like volume, price, margin, development costs, etc., and the value model would translate all of those into NPV.
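As a rough illustration of what such a value model might look like (the cash-flow structure, discount rate, and all numbers here are invented for the example):

```python
# An illustrative value model: business inputs in, one value number (NPV) out.
# All parameters and the simple constant-margin cash-flow shape are assumptions.

def npv(volume, price, unit_cost, dev_cost, years=5, discount=0.10):
    """Discount a constant annual margin against an upfront development cost."""
    annual_margin = volume * (price - unit_cost)
    discounted = sum(annual_margin / (1 + discount) ** t
                     for t in range(1, years + 1))
    return discounted - dev_cost

print(npv(volume=10_000, price=25.0, unit_cost=15.0, dev_cost=250_000))
# ≈ 129078.68
```

Note that this maps each fully specified scenario to a single number; it says nothing yet about how to compare uncertain scenarios.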
You only use the u-curve when uncertainty is involved. For example, distributions on the inputs lead to a distribution on NPV, and the u-curve would determine how to assign a value that represents the distribution. Some companies are more risk averse than others, so they would value the same distribution on NPV differently.
Without a u-curve, you can’t make decisions under uncertainty. If all you have is a value model, then you can’t decide, for example, whether you would take a deal with a 50-50 shot at winning $100 vs. losing $50. That depends on risk aversion, which is encoded in a u-curve, not a value model.
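The 50-50 example above can be made concrete with one common (here assumed, not the only possible) parameterization of a u-curve, exponential utility with a dollar “risk tolerance” R:

```python
import math

# A sketch of a u-curve: exponential utility u(v) = 1 - exp(-v / R),
# where R is a "risk tolerance" in dollars. The functional form and the
# R values below are illustrative assumptions.

def certainty_equivalent(outcomes, probs, R):
    """Dollar amount whose utility equals the gamble's expected utility."""
    eu = sum(p * (1 - math.exp(-v / R)) for v, p in zip(outcomes, probs))
    return -R * math.log(1 - eu)  # invert u to recover a dollar amount

gamble = ([100.0, -50.0], [0.5, 0.5])  # 50-50 at +$100 / -$50, EV = $25
print(certainty_equivalent(*gamble, R=100.0))     # ≈ -0.83: would (barely) decline
print(certainty_equivalent(*gamble, R=10_000.0))  # ≈ 24.7: nearly risk neutral
```

The same distribution gets a different certainty equivalent depending on R: a company with R = $100 would just barely turn the deal down, while one with R = $10,000 values it at close to its $25 expected value. That difference is exactly what the value model alone cannot supply.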
Does this make sense?
Totally. ;)