Most of [?] agree that the VNM axioms are reasonable
My problem with VNM-utility is that while in theory it is simple and elegant, it isn’t applicable to real life because you can only assign utility to complex world states (a non-trivial task) and not to limited outcomes. If you have to choose between $1 and a 10% chance of $2, then this isn’t universally solvable in real life because $2 doesn’t necessarily have twice the value of $1, so the completeness axiom doesn’t hold.
Also, assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can’t assign any utility to actually infinite immortality, or you can’t differentiate between higher-quality and lower-quality immortality, or you can’t represent utility as a real number.
Neither of these problems is solved by replacing utility with awesomeness.
Also, assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can’t assign any utility to actually infinite immortality, or you can’t differentiate between higher-quality and lower-quality immortality, or you can’t represent utility as a real number.
Could you explain that? Representing the quality of each day of your life with a real number from a bounded range, and adding them up with exponential discounting to get your utility, seems to meet all those criteria.
Indeed, already figured that out here.
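For concreteness, one way to write out the bounded-range, exponential-discounting construction above (my notation; none of this is fixed by the thread): with per-day quality $q_t \in [0,1]$ and discount factor $0 < \delta < 1$,

$$U \;=\; \sum_{t=0}^{T-1} \delta^{t} q_t \;\le\; \sum_{t=0}^{\infty} \delta^{t} \;=\; \frac{1}{1-\delta}.$$

Each extra day at constant quality $q > 0$ adds $\delta^{T} q > 0$, so longer life is strictly better; the infinite series converges, so actual immortality gets a finite real utility ($q/(1-\delta)$ at constant quality $q$); and higher-quality immortality still beats lower-quality immortality. That addresses all three horns of the trilemma.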
If you have to choose between $1 and a 10% chance of $2, then this isn’t universally solvable in real life because $2 doesn’t necessarily have twice the value of $1, so the completeness axiom doesn’t hold.
Do you mean it’s not universally solvable in the sense that there is no “I always prefer the $1”-type solution? Of course there isn’t. That doesn’t break VNM, it just means you aren’t factoring outcomes properly.
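(A toy numerical sketch of what “factoring outcomes properly” can look like: put the utility on total wealth, i.e. on the resulting world-state, rather than on prizes in isolation. The choice between a sure $1 and a 10% chance of $2 is then well-defined for any given curve and starting wealth; there is just no universal answer. The square-root curve below is an arbitrary illustrative choice, not anything VNM prescribes.)

```python
import math

def u(wealth):
    """Utility over total wealth; an arbitrary concave example, not VNM-mandated."""
    return math.sqrt(wealth)

def prefers_sure_dollar(wealth):
    eu_sure = u(wealth + 1)                            # take the sure $1
    eu_gamble = 0.9 * u(wealth) + 0.1 * u(wealth + 2)  # 10% chance of $2, else nothing
    return eu_sure > eu_gamble

for w in (0, 10, 1000):
    print(w, prefers_sure_dollar(w))
# Prints True everywhere for this concave u; a convex-enough u
# (try 10.0 ** wealth) flips the preference at every wealth level.
```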
Do you mean it’s not universally solvable in the sense that there is no “I always prefer the $1”-type solution? Of course there isn’t. That doesn’t break VNM, it just means you aren’t factoring outcomes properly.
That’s what I mean, and while it doesn’t “break” VNM, it means I can’t apply VNM to situations I would like to, such as torture vs dust specks. If I know the utility of 1000 people getting dust specks in their eyes, I still don’t know the utility of 1001 people getting dust specks in their eyes, except that it’s probably higher. I can’t quantify the difference between 49 and 50 years of torture, which means I have no idea whether it’s less than, equal to, or greater than the difference between 50 and 51 years. Likewise, I have no idea how much I would pay to avoid one dust speck (or 1000 dust specks), because there’s no ratio of u($) to u(dust speck), and I have absolutely no concept of how to compare dust specks with torture; even if I did, it wouldn’t be scalable.
VNM is not a complete theory of moral philosophy, and isn’t intended to be. I tried to make that clear in the OP by discussing how much work VNM does and does not do (with a focus on what it does not do).
All it does is prevent circular preferences and enforce sanity when dealing with uncertainty. It does not have anything at all to say about torture vs dust specks, the shape of utility curves, (in)dependence of outcome factors, or anything else about the structure of your utility function, because none of those are problems of circular preference or risk-sanity.
From wiki:
Thus, the content of the theorem is that the construction of u is possible, and they claim little about its nature.
Nonetheless, people read into it all sorts of prescriptions and abilities that it does not have, and then either complain when they discover that it does not actually have such powers, or never discover this and make all sorts of dumb mistakes. Hence the OP.
VNM is a small statement on the periphery of a very large, very hard problem. Moral philosophy is hard, and there are (so far) no silver bullets. Nothing can prevent you from having to actually think about what you prefer.
Yes, I am aware of that. The biggest trouble, as you have elaborately explained in your post, is that people think they can perform mathematical operations in VNM-utility-space to calculate utilities they have not explicitly defined in their system of ethics. I believe Eliezer has fallen into this trap; the Sequences are full of that kind of thinking (e.g. torture vs dust specks), and while I realize it’s not supposed to be taken literally, “shut up and multiply” is symptomatic.
Another problem is that you can only use VNM when talking about complete world states. A day where you get a tasty sandwich might be better than a normal day, or it might not be, depending on the world state. If you know there’s a wizard who’ll give you immortality for $1, you’ll choose $1 over any probability < 1 of $2, and if the wizard wants $2, the opposite applies.
VNM isn’t bad, it’s just far, far, far too limited. It’s somewhat useful when probabilities are involved, but otherwise it’s literally just the concept of well-ordering your options by preferability.
Assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can’t assign any utility to actually infinite immortality, or you can’t differentiate between higher-quality and lower-quality immortality, or you can’t represent utility as a real number.
Turns out this is not actually true: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc.; immortality is 2, and then you can add quality. Not very surprising, in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|. Still, I’m pretty sure the set of all possible world states is of higher cardinality than ℝ, so...
(Also it’s a good illustration of why simply assigning utility to 1 day of life and then scaling up is not a bright idea.)
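(Spelled out, the numbers above are a geometric series with ratio 1/2; the closed form below is mine, filling in the sketch:

$$u(n) \;=\; \sum_{k=0}^{n-1} 2^{-k} \;=\; 2 - 2^{1-n}, \qquad u(1)=1,\;\; u(2)=1.5,\;\; u(3)=1.75,\;\; u(\infty)=2,$$

and one way to “add quality” is a multiplicative factor $q > 0$, which preserves both orderings: longer beats shorter at fixed quality, and higher-quality immortality beats lower-quality immortality.)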
Another problem is that you can only use VNM when talking about complete world states.
You can talk about probability distributions over world-states as well. When I say “tasty sandwich day minus normal day” I mean to refer to the expected marginal utility of the sandwich, including the possibilities with wizards and stuff. This simplifies things a bit, but goes to hell as soon as you include probability updating, or actually have to find that value.
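(A toy version of that computation; every state, probability, and utility below is invented purely for illustration:)

```python
# Utility attaches to whole world-states; the sandwich's value is the
# difference of expectations over a distribution of such states.
states = [
    # (probability, u(world with sandwich), u(world without sandwich))
    (0.98, 10.0, 9.0),    # ordinary day: the sandwich is mildly nice
    (0.01, 2.0, 500.0),   # wizard day: buying the sandwich cost you the $1 he charges
    (0.01, 500.0, 2.0),   # wizard day: holding the sandwich is what he rewards
]
assert abs(sum(p for p, _, _ in states) - 1.0) < 1e-9  # sanity-check the distribution

marginal = sum(p * (u_with - u_without) for p, u_with, u_without in states)
print(marginal)  # ~0.98: the wizard tails happen to cancel, leaving the mild niceness
```

Which is also why actually finding that value is hard: the sum has to run over every wizard-containing corner of the distribution.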