This is an open problem. I contest certain axioms (P6 and P7).
Do you also contest the Archimedean axiom for von Neumann’s formulation of utility?
Yes. (Well, it’s a bit more complicated than that; VNM utility theory doesn’t extend to choices with an infinite number of possible outcomes, so I reject the whole system.) I discussed this in more detail in the comments in the linked article. In brief, there is a chance that my utility function is bounded, but I am definitely not willing to bet the universe on it.
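For reference, the continuity (Archimedean) axiom under discussion, in its usual finite-lottery form (a standard statement, not a quote from either commenter): for lotteries A ≻ B ≻ C there exist p, q ∈ (0,1) such that

\[
p A + (1-p) C \;\succ\; B \;\succ\; q A + (1-q) C .
\]

Extensions to infinitely many outcomes typically replace this with topological continuity of the preference relation over lotteries, the extra assumption discussed just below.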
VNM definitely does extend to the case of infinitely many outcomes. It requires a continuous utility function, and thus continuous preferences and a topology in outcome space. Why is this additional modeling assumption any more problematic than other VNM axioms?
In short, because utilities may not converge. The axioms do not assert that they can be applied infinitely many times; if they did, they would run into all the usual problems with infinite series. There are modifications of the VNM theorem that do extend to infinitely many outcomes, but they all either work only for certain infinite sets or require bounded utility.
This is exactly the stuff I was talking about. I mean, basic measure theory determines which functions you can even talk about. If you have a probability measure P, then utilities that are not in L^{1}_{P}(outcome space) make no sense. You may need more restrictions than that, but one can’t talk about expected utility if the utility is not at least L1. You cannot have a function that, with respect to a probability measure, has a support set of infinite Lebesgue measure, is unbounded, and still has a defined expectation (a finite L1 norm)… unless you know that the rate of growth of the unbounded utility function behaves in certain nice ways compared to the rate of decay of the probability measure. You may already be saying this, but this much simply can’t be changed, no matter what you do. If your utility function is unbounded, then the probabilities of extreme outcomes must decay faster than your utility grows. Since probabilities are given by nature and utilities (sort of) aren’t, my guess would be that utilities have to grow slowly (or, conversely, that probabilities have to decay super quickly).
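A minimal numerical sketch of this trade-off (my own illustration under toy assumptions, not anything from the thread): fix outcome probabilities p_n = 2^{-n} and compare how the partial sums of expected utility behave for a utility that grows as fast as 1/p_n, one that grows only polynomially, and one that is bounded.

```python
# Toy illustration: expected-utility partial sums under probabilities p_n = 2^{-n}.
# A utility growing as fast as 1/p_n diverges (the St. Petersburg pattern);
# a slowly growing or bounded utility is integrable and the sums converge.

def partial_expected_utility(utility, n_terms):
    """Sum of 2**-n * utility(n) for n = 1..n_terms."""
    return sum(2.0 ** -n * utility(n) for n in range(1, n_terms + 1))

exponential_utility = lambda n: 2.0 ** n       # grows exactly as fast as 1/p_n
polynomial_utility = lambda n: n ** 3          # unbounded, but grows slowly
bounded_utility = lambda n: 1 - 2.0 ** -n      # bounded above by 1

for name, u in [("2^n", exponential_utility),
                ("n^3", polynomial_utility),
                ("bounded", bounded_utility)]:
    print(name, [round(partial_expected_utility(u, k), 3) for k in (10, 100, 1000)])

# The 2^n partial sums are 10, 100, 1000, ... and never settle;
# the other two approach finite limits (26 and 2/3 respectively).
```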
If your utility function is unbounded, then the probabilities of extreme outcomes must decay faster than your utility grows. Since probabilities are given by nature and utilities (sort of) aren’t, my guess would be that utilities have to grow slowly (or, conversely, that probabilities have to decay super quickly).

Nature does not require that it be possible to make a utility function converge at all. Nor does nature require that taking expectations be the only way of comparing choices, or that utilities be real.
I totally agree and never meant to imply otherwise. But just as any consistent system of degrees of belief can be put into correspondence with the axioms of probability, so there are certain stipulations about what can reasonably be called a utility function.
I would argue that if you meet a conscious agent and your model of their utility function says that it doesn’t converge (in the appropriate L1 norm of the appropriate modeled probability space), then something’s wrong with that model of the utility function, not with the assumption that utility functions should converge. There are many subtleties, I’m sure, but non-integrable utility functions seem futile to me. If something can be well-modeled by a non-integrable utility function, then I’m fine updating my position, but in years of learning and teaching probability theory, I’ve never encountered anything that would convince me of that.
Doesn’t this all assume that utility functions are real-valued?
No, all of the integrability theory (w.r.t. probability measures) extends straightforwardly to complex valued functions. See this and this.
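To spell out what “extends straightforwardly” means here (standard measure-theory material, not a summary of the two linked pages): a complex-valued utility U is integrable exactly when E|U| is finite, and its expectation is then taken componentwise,

\[
\mathbb{E}[U] \;=\; \mathbb{E}[\operatorname{Re} U] + i\,\mathbb{E}[\operatorname{Im} U],
\qquad \bigl|\mathbb{E}[U]\bigr| \;\le\; \mathbb{E}\bigl[\,|U|\,\bigr].
\]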
Yes, good point. Is there any study of the most general objects to which integrability theory applies? Also, are you familiar with Martin Kruskal’s work on generalizing calculus to the surreal numbers? I am having difficulty locating any of his papers.
What comes to my mind are Bochner integrals and random elements. I’m not sure how much integrability theory one can develop outside of a Banach space, although you can get interesting fractal-type integrals when dealing with Hausdorff measure. Integrability theory is really just an extension of measure theory, which was pinned down in painstaking detail by Lebesgue, Carathéodory, Perron, Henstock, and Kurzweil (no relation to the singularity Kurzweil). The Henstock-Kurzweil (HK) integral is the most general integral over the reals and complex numbers that preserves certain nice properties, like the fundamental theorem of calculus. The name of the game in integration theory was never to find the most abstract workable definition of integration, but rather to see under what general assumptions physically meaningful results, like the mean value theorem or the fundamental theorem of calculus, continue to hold. Complex integration theory, especially in higher dimensions, shattered a lot of preconceived notions of how functions should behave.
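A standard example of the gap the HK integral closes (textbook material, not taken from the thread): define

\[
F(x) = x^{2}\sin\!\left(\frac{1}{x^{2}}\right) \ \text{for } x \neq 0, \qquad F(0) = 0 .
\]

Then F is differentiable everywhere on [0,1], but F′ oscillates so violently near 0 that it is not Lebesgue integrable there; the Henstock-Kurzweil integral still satisfies the fundamental theorem of calculus and gives

\[
\int_{0}^{1} F'(x)\,dx \;=\; F(1) - F(0) \;=\; \sin 1 .
\]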
In looking up surreal numbers, it appears that Conway invented them and Knuth coined the name. I was surprised to learn that the hyperreal numbers (developed by Abraham Robinson) are contained in the surreals. To my knowledge, which is a bit limited because I focus more on applied math and am probably not as familiar with the literature on something like the surreal numbers as other LWers may be, there hasn’t been much work, if any, on defining an integral over the surreals. My guess, though, is that such an integral would wind up being an unsatisfyingly trivial extension of integration over the ordinary reals, as is the case for the hyperreals.
I’ll definitely take a look at Kruskal’s papers and see what he’s come up with.
I was surprised to learn that the hyperreal numbers (developed by Abraham Robinson) are contained in the surreals.

Every ordered field is contained within the surreals, which is why I find them promising for utility theory. The surreals themselves are not a field but a Field, since they form a proper class.
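One way to see why surreal-valued utilities connect to the Archimedean axiom discussed above (my own sketch, not a claim from the thread): if utilities may take the infinite surreal value ω, say u(A) = ω, u(B) = 1, u(C) = 0 with A ≻ B ≻ C, then for every real p ∈ (0,1)

\[
p\,u(A) + (1-p)\,u(C) \;=\; p\,\omega \;>\; 1 \;=\; u(B),
\]

so no non-trivial mixture of A and C is ever ranked below B, and the Archimedean/continuity axiom fails; accepting such utilities means dropping that axiom.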
Another point worth noting: on a set D of finite measure (and any measurable subset of a probability space has finite measure), L^{N}(D) is contained in L^{N-1}(D), so if the first moment fails to exist (the utility is non-integrable and has no defined expectation), then all higher moments fail as well and higher-order statistics are unavailable. Of course nature doesn’t have to be modeled by statistics, but you’d be hard pressed to out-perform simple axiomatic formulations that just assume a topology and continuous preference functions, get on with it, and have access to higher-order moments.
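The inclusion being used here is the standard one for finite measure spaces (textbook material): by Hölder’s inequality, for 1 ≤ p < q and μ(D) < ∞,

\[
\|f\|_{L^{p}(D)} \;\le\; \mu(D)^{\frac{1}{p} - \frac{1}{q}}\, \|f\|_{L^{q}(D)},
\]

so on a probability space (μ(D) = 1) every L^{q} function lies in L^{p}; contrapositively, once the first moment fails, every higher moment fails with it.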
How do you construct utility without the VNM axioms? Are there less strong axioms for which a VNM-like result holds?
EDIT: Sorry if this is covered in the comments in the other article, I’m being a bit lazy here and not reading through all of your comments there in detail.
I don’t yet. :) I have a few reasons to think that it has a good chance of being possible, but it has not been done.
Okay. If you end up being successful, I would be quite interested to know about it. (A counterexample would also be interesting, actually probably more interesting since it is less expected.)