“I mean… if an external objective morality tells you to kill babies, why should you even listen?”
This is an incredibly dangerous argument. Consider this: “I mean… if some moral argument, whatever the source, tells me to prefer 50 years of torture to any number of dust specks, why should I even listen?”
And we have seen many who literally made this argument.
Maybe they are right.
People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich while everyone else is moderately well off. In other words, we seem to be willing to pay a price for equality. Why wouldn’t this work in the other direction? Maybe we prefer to induce more suffering overall if doing so prevents a tiny minority from suffering obscenely.
Too many people seem to think that perfectly equally weighted altruism (everyone who shares the mystical designation of “person” carries equal weight, and after that you just do calculus to maximize overall “goodness”), which sometimes hides under the word “utilitarianism” on this forum, is anything but yet another grand moral principle that claims to compactly represent our shards of desire but fails to. If you wouldn’t be comfortable building an AI to follow that rule and only that rule, why are so many people keen on solving all their personal moral dilemmas with it?
Sure, horrible people.
mind-killed
You do realize that valuing equality in itself, to any extent at all, is always (because of opportunity cost, at least) an example of exactly that: accepting a lower standard of living for everyone rather than letting a tiny minority grow obscenely rich while everyone else is moderately well off.
But I agree with you in a sense. Historically, lots of horrible people have vastly overpaid (often in blood) for that particular good, and overvalued it according to my values too.
Are you sure?
If you take a concave function, such as a log, of the net happiness of each individual and maximize the sum, you’d always prefer equality to inequality when total net happiness is held constant, and you’d always give the greatest marginal weight to raising the happiness of the worst-off.
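A minimal numerical sketch of that aggregation rule (the happiness figures are made up, and log is just one choice of concave function):

```python
import math

def concave_social_welfare(happiness_levels):
    """Sum of a concave function (here: log) of each individual's happiness."""
    return sum(math.log(h) for h in happiness_levels)

# Same total happiness (20), distributed equally vs. unequally.
equal = [10, 10]
unequal = [19, 1]

print(concave_social_welfare(equal) > concave_social_welfare(unequal))  # True
```

Because log is strictly concave, any transfer from a happier person to a less happy one that keeps the total fixed raises the sum.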
Excellent! Thanks for the mathematical model! I’ve been trying to work out how to describe this principle for ages.
Yes.
Ok just checking, surprisingly many people miss this. :)
Konkvistador, I applaud your thoughtful and well-weighed approach to the problem of equality. It has been troubling me too, and I’m glad to see that you’re careful not to lean in any one direction before observing the wider picture. That’s a grave matter indeed.
I’m glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear “don’t use oversimplified morality!” and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.
There is no contradiction between this post and Eliezer’s dust specks post.
It would be good to elaborate on this. While they’re not strictly logically contradictory, add a few reasonable assumptions here and there when extrapolating and they appear to suggest different courses of action.
The comment was making the opposite point, namely that some people refuse to accept that there is even a common ‘utilon’ with which torture and ‘dust specks’ can be compared.
By what criteria do we judge that there should be a common ‘utilon’?
Not VNM; it just says we must be consistent in our assignment of utility to whole monolithic possible worlds. I can be VNM-rational and choose specks.
Utilitarianism says so, but as far as I can tell, utilitarianism leads to all sorts of repugnant conclusions, and only repugnant conclusions.
Maybe we are only concerned with unique experience, and all the possible variation in dust-speck-experience-space is covered by the time you get to 1000.
I’m confused. I’m not a mathematician, but I understood this post as saying a good VNM agent has a continuous utility function.
And my takeaway from the torture/specks thing was that having a continuous utility function requires choosing torture.
I assume I’m misunderstanding the terminology somewhere. If you are willing, can you explain my misunderstanding?
hnnnng. What? Did you link the wrong article? A VNM agent has a utility function (a function from outcomes to reals), but says nothing more. “Continuous” in particular requires your outcome space to have a topology, which it may not, and even if it does, there’s still nothing in VNM that would require continuity.
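For reference, the representation theorem itself only guarantees this much (stated loosely): if the preferences over lotteries satisfy the four axioms (completeness, transitivity, continuity, independence), then

$$L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u] \quad \text{for some } u : \text{Outcomes} \to \mathbb{R}, \text{ unique up to positive affine transformation.}$$

Nothing in that constrains u beyond consistency with the ordering over lotteries.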
Not necessarily. To choose torture by the usual argument, the following must hold (a toy sketch of what these conditions buy you follows the list):

1. You can assign partial utilities separately to the amount of torture and the number of dust-speck eyes, where “partial utilities” means roughly that your final utility function is a sum of the partial utilities.

2. The partial utilities are roughly monotonic overall (increasing or decreasing, as opposed to having a maximum or minimum, or oscillating) and unbounded.

3. Minor assumptions, like: more torture is bad, more dust specks are bad, and there are possibilities in your outcome space with 3^^^^3 (or sufficiently many) dust-speck eyes. (If something is not in your outcome space, it had better be strictly impossible, or you are fucked.)
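Here is that toy sketch; the weights are made up and purely illustrative, the point being only that once the partial utilities are additive and the specks term is unbounded, some finite number of specks outweighs any fixed amount of torture:

```python
# Illustrative only: hypothetical weights for the two additive "partial utilities".
TORTURE_DISUTILITY_PER_PERSON_YEAR = 1_000_000.0
SPECK_DISUTILITY_PER_EYE = 1e-12

def total_utility(torture_person_years: float, speck_eyes: float) -> float:
    """Condition 1: the final utility is just the sum of the partial utilities."""
    return -(TORTURE_DISUTILITY_PER_PERSON_YEAR * torture_person_years
             + SPECK_DISUTILITY_PER_EYE * speck_eyes)

# One person tortured for 50 years vs. an astronomically large number of speck-eyes.
torture_world = total_utility(torture_person_years=50, speck_eyes=0)
specks_world = total_utility(torture_person_years=0, speck_eyes=1e30)

# Condition 2 (unboundedness) guarantees some speck count makes this True.
print(torture_world > specks_world)  # True: the additive rule picks torture
```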
I am very skeptical of 1. Once you look at functions as “arbitrary maps from set A to set B”, special properties like this kind of decomposability seem very particular and very special, requiring a lot more evidence to locate than anyone seems to have gathered. As far as I can tell, the linear independence stuff is an artifact of people intuitively thinking of the space of functions as the sort of thing you can write by composing from primitives (i.e. computer code or math).
I am also skeptical of 2, because in general it seems that unbounded utility functions produce repugnant conclusions. See all the problems with utilitarianism, Pascal’s mugging, etc.
As Eliezer says (but doesn’t seem to take seriously), if a utility function gives utility assignments that I disagree with, I shouldn’t use it. It doesn’t matter how many nice arguments you can come up with that declare the beauty of the internal structure of the utility function (which is a type error btw), if it doesn’t encode my idealized preferences, it’s junk.
The only criterion by which a utility function can be judged is the preferences it produces.
That said, it may be that we will have to enforce certain consistencies on our utilities to capture most of our preferences, but those must be done strictly by looking at preference implications. I tried to communicate this in “pinpointing utility”, but it really requires its own post. So many posts to write, and no time!
You may be confused by the continuity axiom in VNM, which is about your preferences over probabilities (lotteries), not over actual outcomes.
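For reference, the usual statement of that axiom only involves mixing lotteries:

$$\text{If } L \succeq M \succeq N, \text{ then there exists } p \in [0,1] \text{ such that } pL + (1-p)N \sim M.$$

No topology on the outcome space appears anywhere; the “continuity” is in the probability p.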
The trouble is, any utility function where 1 doesn’t hold is vulnerable to intuition pumps. If you can’t give a consistent ranking of A, B and C (e.g. A > B, B > C, C > A), then I can charge you a penny to switch from C → B, then B → A, then A → C, and you’re three pennies poorer.
I really, really hope my utility function’s “set B” can be mapped to the reals. If not, I’m screwed. (It’s fine if what I want varies with time, so long as it’s not circular at a given point in time.)
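A minimal sketch of that money pump (the cyclic preferences and the one-penny fee are just assumed for illustration):

```python
# An agent with cyclic preferences (A > B > C > A) pays a small fee for each
# "upgrade" and ends up holding what it started with, three pennies poorer.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is strictly preferred to y

def accepts_trade(current: str, offered: str) -> bool:
    """Take the trade whenever the offered item is preferred to the current one."""
    return (offered, current) in prefers

holding, pennies_paid = "C", 0
for offered in ["B", "A", "C"]:            # the pump offers B, then A, then C
    if accepts_trade(holding, offered):
        holding, pennies_paid = offered, pennies_paid + 1

print(holding, pennies_paid)  # "C" 3 — back where it started, minus three pennies
```

Any preference ordering that can be represented by a single real-valued utility function refuses at least one of those trades, which is the point of wanting “set B” to map to the reals.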