By what criteria do we judge that there should be a common ‘utilon’?
Not VNM; it just says we must be consistent in our assignment of utility to whole monolithic possible worlds. I can be VNM rational and choose specks.
Utilitarianism says so, but as far as I can tell, utilitarianism leads to all sorts of repugnant conclusions, and only repugnant conclusions.
Maybe we are only concerned with unique experience, and all the possible variation in dust-speck-experience-space is covered by the time you get to 1000.
I’m confused. I’m not a mathematician, but I understood this post as saying a good VNM agent has a continuous utility function.
hnnnng. What? Did you link the wrong article? A VNM agent has a utility function (a function from outcomes to reals), but the theorem says nothing more than that. “Continuous” in particular requires your outcome space to have a topology, which it may not; and even if it does, there’s still nothing in VNM that would require continuity.
And my takeaway from the torture/specks thing was that having a continuous utility function requires choosing torture.
Not necessarily. To choose torture by the usual argument, all of the following must hold (a toy numerical sketch follows the list):
1. You can assign partial utilities separately to amount of torture and amount of dust-speck-eyes, where “partial utilities” means roughly that your final utility function is a sum of the partial utilities.
2. The partial utilities are roughly monotonic overall (increasing or decreasing, as opposed to having a maximum or minimum, or oscillating) and unbounded.
3. Minor assumptions: more torture is bad, more dust specks are bad, and there are possibilities in your outcome space with 3^^^^3 (or sufficiently many) dust-speck eyes. (If something is not in your outcome space, it had better be strictly impossible, or you are fucked.)
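To make the structure of that argument concrete, here is a minimal numerical sketch in Python (the functions and numbers are my own toy illustration, not anything anyone has actually proposed) of how assumptions 1–3 force choosing torture:

```python
def u_torture(person_years_of_torture):
    # Assumptions 2/3: torture is bad, and unboundedly so in the amount.
    return -1_000_000_000 * person_years_of_torture

def u_specks(number_of_dust_speck_eyes):
    # Each speck is only slightly bad, but the disutility is unbounded in the count.
    return -0.000001 * number_of_dust_speck_eyes

def total_utility(torture_years, speck_eyes):
    # Assumption 1: the final utility is a sum of the partial utilities.
    return u_torture(torture_years) + u_specks(speck_eyes)

N = 10**30  # stands in for 3^^^^3; any "sufficiently many" will do

print(total_utility(50, 0) > total_utility(0, N))  # True: the torture world is preferred
```

However steep you make the torture term, additivity plus unboundedness means some finite number of specks eventually outweighs it.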
I am very skeptical of 1. Once you look at functions as “arbitrary maps from set A to set B”, special things like this kind of decomposability seem very particular and very special, requiring a lot more evidence to locate than anyone seems to have gathered. As far as I can tell, the linear independence stuff is an artifact of people intuitively thinking of the space of functions as the sort of thing you can write by composing primitives (i.e. computer code or math).
I am also skeptical of 2, because in general it seems that unbounded utility functions produce repugnant conclusions. See all the problems with utilitarianism, Pascal’s mugging, etc.
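To make the worry about 2 concrete: keep everything else from the sketch above the same, but let the speck term saturate instead of growing without bound (again, toy numbers of my own), and the verdict flips no matter how large N gets:

```python
import math

def u_specks_bounded(number_of_dust_speck_eyes):
    # Bounded partial utility: the speck disutility saturates at -1000.
    return -1000 * (1 - math.exp(-0.000001 * number_of_dust_speck_eyes))

def total_utility(torture_years, speck_eyes):
    return -1_000_000_000 * torture_years + u_specks_bounded(speck_eyes)

N = 10**30
print(total_utility(50, 0) > total_utility(0, N))  # False: the agent now chooses specks
```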
As Eliezer says (but doesn’t seem to take seriously), if a utility function gives utility assignments that I disagree with, I shouldn’t use it. It doesn’t matter how many nice arguments you can come up with praising the beauty of the internal structure of the utility function (which is a type error, btw): if it doesn’t encode my idealized preferences, it’s junk.
The only criterion by which a utility function can be judged is the preferences it produces.
That said, it may be that we will have to enforce certain consistencies on our utilities to capture most of our preferences, but that enforcement must be done strictly by looking at preference implications. I tried to communicate this in “pinpointing utility”, but it really requires its own post. So many posts to write, and no time!
I assume I’m misunderstanding the terminology somewhere. If you are willing, can you explain my misunderstanding?
You may be confused by the continuity axiom in VNM, which is about your preferences over probability mixtures of outcomes (lotteries), not over the outcomes themselves.
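(For reference, the continuity axiom is usually stated roughly as: for any lotteries with L1 ⪰ L2 ⪰ L3, there is some p ∈ [0,1] such that the mixture p·L1 + (1−p)·L3 ∼ L2. It constrains preferences over probability mixtures; it does not put a topology on the outcome space or force the utility function to be “continuous in outcomes”.)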
The trouble is, any utility function where 1 doesn’t hold is vulnerable to intuition pumps. If you can’t say which of A, B and C is better (e.g. A > B, B > C, C > A), then I can charge you a penny to switch from C → B, then B → A, then A → C, and you’re three pennies poorer.
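A minimal sketch of that money pump (the outcome names and penny amounts are just illustrative):

```python
# Cyclic preferences A > B > C > A, the kind you get when no consistent
# ranking of A, B, C exists.
def prefers(x, y):
    cycle = {("A", "B"), ("B", "C"), ("C", "A")}
    return (x, y) in cycle

holdings = "C"
pennies_paid = 0
for offer in ["B", "A", "C"]:
    # The agent happily pays a penny each time to trade up to something it prefers.
    if prefers(offer, holdings):
        holdings = offer
        pennies_paid += 1

print(holdings, pennies_paid)  # C 3: back where it started, three pennies poorer
```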
I really, really hope my utility function’s “set B” can be mapped to the reals. If not, I’m screwed. (It’s fine if what I want varies with time, so long as it’s not circular at a given point in time.)