The only thing that is required for an agent to provably have a utility function is that it has coherent preferences. The only thing that means is that the agent obeys the four axioms of VNM-rationality (which are really simple and would be really weird if they weren’t satisfied). The von Neumann-Morgenstern utility theorem states that such an agent acts as if it maximizes the expected value of some utility function U: B is preferred over A if and only if E(U(A)) < E(U(B)). And while it’s true that when humans are presented with a choice between two lotteries, they sometimes pick the lottery that has a lower expected payout, this doesn’t mean humans don’t have a utility function; it just means that our utility function in those scenarios is not based entirely on the expected dollar value (it could instead be based on the expected payout where higher probabilities are given greater weight, for example).
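As a concrete illustration of that last point (a toy example with made-up numbers, not anything from the comment itself): a risk-averse agent that maximizes the expected value of a concave utility of money, say the square root of dollars, will sometimes take the lottery with the lower expected dollar payout while still satisfying the VNM axioms.

```python
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as [(probability, dollar payout), ...]."""
    return sum(p * utility(x) for p, x in lottery)

# Two made-up lotteries: A has the higher expected dollar value, B is a sure thing.
lottery_a = [(0.5, 0.0), (0.5, 1000.0)]   # expected payout: $500
lottery_b = [(1.0, 450.0)]                # expected payout: $450

dollars = lambda x: x       # risk-neutral utility: just the money
sqrt_dollars = math.sqrt    # concave utility: diminishing returns to money

print(expected_utility(lottery_a, dollars), expected_utility(lottery_b, dollars))
# 500.0 450.0  -> a pure expected-dollar maximizer picks A

print(expected_utility(lottery_a, sqrt_dollars), expected_utility(lottery_b, sqrt_dollars))
# ~15.8 ~21.2  -> the risk-averse agent picks B, the lower expected payout,
#                 while still maximizing the expectation of *some* utility function
```

The square root is just a stand-in for “diminishing returns to money”; any concave utility tells the same story.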
The only thing that means is that the agent obeys the four axioms of VNM-rationality (which are really simple and would be really weird if they weren’t satisfied).
Not really. There are reasonable decision procedures that violate the axioms (by necessity, since they aren’t equivalent to a utility function). For example, anything that makes a decision based on the “5% outcome” of its decisions, i.e. the 5th-percentile outcome (known in finance as “VaR”, value at risk). Or something that strictly optimizes for one characteristic and then optimizes for a second among all the options that optimize the first (a lexicographic preference).
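A quick sketch of those two procedures on made-up options (the specific lotteries, the 5% quantile, and the tie-breaking details are assumptions for illustration, not anything from the comment):

```python
def percentile_outcome(lottery, q=0.05):
    """Payout at cumulative probability q, counting from the worst outcome up
    (a VaR-style figure). Lottery format: [(probability, payout), ...]."""
    total = 0.0
    for p, payout in sorted(lottery, key=lambda px: px[1]):
        total += p
        if total >= q:
            return payout
    return max(payout for _, payout in lottery)

def var_prefers(a, b, q=0.05):
    """Pick whichever lottery has the better q-quantile outcome."""
    return a if percentile_outcome(a, q) >= percentile_outcome(b, q) else b

def lexicographic_prefers(a, b):
    """Options scored on two characteristics (x1, x2): optimize x1 first,
    and use x2 only to break exact ties on x1."""
    return a if (a[0], a[1]) >= (b[0], b[1]) else b

risky = [(0.05, -1000.0), (0.95, 200.0)]   # bad 5% tail, expected payout $140
safe  = [(1.0, 50.0)]                      # nothing to lose, expected payout $50
print(var_prefers(risky, safe))            # -> safe, despite the lower expected payout

print(lexicographic_prefers((1.0, 0.0), (1.0 - 1e-12, 1e9)))
# -> the first option: no amount of the second characteristic can make up
#    for any shortfall, however tiny, in the first
```

Neither rule is computed as the expected value of some fixed utility function over outcomes, which is the sense in which they sit outside the VNM picture.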
It isn’t hard to argue that the first procedure is a bad idea, and plenty of people in finance argue exactly that. However, for the second one, who cares that the lexicographic ordering on pairs of real numbers can’t be embedded into the usual ordering on real numbers?
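(For completeness, here is the standard textbook argument behind that non-embeddability fact; it’s a sketch, not something specific to this thread.)

```latex
\textbf{Claim.} No $U:\mathbb{R}^2\to\mathbb{R}$ represents the lexicographic order.

\textbf{Sketch.} Suppose it did. For each $x\in\mathbb{R}$, $(x,0)\prec_{\mathrm{lex}}(x,1)$,
so $U(x,0)<U(x,1)$ and we can pick a rational $q(x)$ with $U(x,0)<q(x)<U(x,1)$.
If $x<y$, then $(x,1)\prec_{\mathrm{lex}}(y,0)$, so $U(x,1)<U(y,0)$ and hence $q(x)<q(y)$.
Thus $x\mapsto q(x)$ is an injection from $\mathbb{R}$ into $\mathbb{Q}$, which is
impossible because $\mathbb{R}$ is uncountable. $\square$
```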
Why do some humans buy lottery tickets and others not? If humans don’t all have the same utility function, how do they get one? Isn’t the process of acquiring and changing a utility function (or whatever we use to approximate one) more important to our understanding of intelligence, and of the future of intelligence, than the function itself?
People buy lottery tickets because no one can accurately “feel” or intuit incredibly small probabilities. We (by definition) experience very few or no events with those probabilities, so we have nothing on which to build that intuition. Thus we approximate negligible but nonzero probabilities as small but non-negligible. And that “feeling” is worth the price of the lottery ticket for some people. Some people learn to calibrate their intuitions over time so that negligible probabilities “feel” like zero, and so they don’t buy lottery tickets. The problem is less about utility functions and more about accurate processing of small probabilities.
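A minimal sketch of that idea (the Prelec weighting function and every number below are illustrative assumptions, not something from this comment): if tiny probabilities get inflated before they’re multiplied by the payout, a ticket can “feel” worth more than it costs even when its plain expected payout is far below the ticket price.

```python
import math

def prelec_weight(p, alpha=0.65):
    """Prelec-style probability weighting: tiny probabilities are overweighted
    relative to their true value. alpha < 1 controls how strongly; the value
    here is purely illustrative."""
    if p <= 0.0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))

# Toy lottery, numbers made up: a 1-in-300-million shot at a $100M jackpot, $2 ticket.
p_win = 1 / 300_000_000
jackpot = 100_000_000
ticket = 2.0

ev = p_win * jackpot          # objective expected payout (~$0.33, well under $2)
w = prelec_weight(p_win)      # how the probability might "feel" (~1e-3, hugely inflated)
felt_value = w * jackpot      # payout weighted by the felt probability

print(f"objective win probability {p_win:.2e}, felt probability {w:.2e}")
print(f"expected payout ${ev:.2f} vs felt payout ${felt_value:,.2f} vs ticket ${ticket:.2f}")
```

The exact outputs are meaningless; the point is only that mis-weighting the probability, rather than having a different utility over money, is enough to make the purchase feel reasonable.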
I’m not sure you noticed, but I brought up lotteries because they directly contradict “it could instead be based on the expected payout where higher probabilities are given greater weight, for example”: here we see a very, very low probability being given a high weight (if that’s even what our brains do).