I think any purely utilitarian ethics is subject to “utility monsters” of some kind
Isn’t that fairly easily solved by, as the right honorable Zulu Pineapple says, “let all utilities be positive and let them sum to one”? It would guarantee that no agent can assign any particular outcome a weight greater than one; no agent can have generally stronger preferences overall under that requirement.
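For concreteness, here’s a minimal sketch of how I’m imagining the rule, assuming utilities actually come to us as explicit positive numbers (the agents `alice` and `monster` are hypothetical, and this is my own toy construction, not an implementation of anyone’s actual proposal):

```python
def normalize(utilities: dict) -> dict:
    """Rescale positive raw utilities so they sum to one."""
    assert all(u > 0 for u in utilities.values()), "the rule assumes positive utilities"
    total = sum(utilities.values())
    return {outcome: u / total for outcome, u in utilities.items()}

# An ordinary agent, and a would-be utility monster claiming million-fold stakes.
alice = {"picnic": 3.0, "museum": 1.0}
monster = {"picnic": 1_000_000.0, "museum": 1.0}

# After normalization, neither agent can put more than a weight of 1 on any
# outcome, so the monster's extreme intensity buys it nothing in a social sum:
print(normalize(alice))    # {'picnic': 0.75, 'museum': 0.25}
print(normalize(monster))  # {'picnic': ~0.999999, 'museum': ~0.000001}
```

The monster still gets to concentrate its whole budget on one outcome, but so can anyone; it just can’t have a bigger budget than anyone else.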
It does seem to me that it would require utility functions to be specified over a finite set of worlds (I know you can have a finite integral over an infinite range, but that seems to require clever mathematical tricks that wouldn’t really be applicable to a real-world utility function?). I’m not sure how this would work.
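(The kind of trick I mean: utilities can be positive and integrate to one over an unbounded set of worlds, say worlds indexed by x ≥ 0 with an exponentially decaying utility density; the example is mine, chosen for simplicity:

\[
\int_0^\infty u(x)\,dx \;=\; \int_0^\infty e^{-x}\,dx \;=\; \bigl[-e^{-x}\bigr]_0^\infty \;=\; 1 ,
\]

but this only works because u decays fast enough, and I don’t see why a real agent’s preferences over worlds would obligingly decay at any particular rate.)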
Do remember that, at some point, being pulled in directions you don’t particularly want to go, under an obligation to optimise for utility functions other than your own, is literally what utilitarianism (and social compromise in general) is. If you don’t like pleasing other people’s strange desires, you might want to consider conquest and hegemony as an alternative to utilitarianism, at least while it’s still cheaper.
Hmm, what if utility monsters don’t exist in nature and are not permitted to be made, because such a thing would be the equivalent of strategic (non-honest) voting, and we have stipulated, as part of the terms of utilitarianism, that we have access to the True utility function of our constituents, and that their twisted children Felix and Soba don’t count? Of course, you would need an operational procedure for counting individual agents: some way of saying which are valid, so that ephemeral replicas of a mind process are not allowed to multiply their root’s will arbitrarily.
Going down this line of thought, I started to wonder whether there have ever existed, or will ever exist, any True Utility Functions that are not strategic constructions. I think there would have to be, at some point, and that constructed strategic agents are easily distinguishable from agents who have, at least at some point, been honest with themselves about what they want, so the constructions could be trivially excluded from a utilitarian society. Probably.
“Soba” referred to the happy drug from Brave New World; that is, to the possibility of “utility superstimulus” on a collective, not individual, level.
“Sum to one” is a really stupid rule for utilities. As a statistician, I can tell you that finding normalizing constants is hard, even if you have an agreed-upon measure; and agreeing on a measure in a politically-contentious situation is impossible. Bounded utility is a better rule, but there are still cases where it clearly fails, and even when it succeeds in the abstract it does nothing to rein in strategic incentives in practice.
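To illustrate the first half of that claim with a toy of my own (standard library only; nothing here is anyone’s real proposal): even in one dimension, with everyone agreeing on the unnormalized function, a naive importance-sampling estimate of the normalizing constant falls apart as soon as the proposal distribution misses where the mass actually is.

```python
import math
import random

def f(x):
    # Unnormalized "utility mass": a narrow spike at x = 4 with sd 0.1.
    return math.exp(-((x - 4.0) ** 2) / (2 * 0.1 ** 2))

def estimate_Z(n, proposal_scale=1.0):
    # Importance sampling: draw x ~ N(0, proposal_scale^2), average f(x)/q(x).
    total = 0.0
    for _ in range(n):
        x = random.gauss(0.0, proposal_scale)
        q = math.exp(-x * x / (2 * proposal_scale ** 2)) / (
            proposal_scale * math.sqrt(2 * math.pi))
        total += f(x) / q
    return total / n

# True Z = 0.1 * sqrt(2*pi) ≈ 0.2507, but the proposal almost never visits the
# spike at x = 4, so most runs return ~0 and a rare run returns something huge:
print([round(estimate_Z(10_000), 4) for _ in range(5)])
```

And that is the easy, apolitical half of the problem: the measure was agreed on by construction.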
As to questions about True Utility Functions and a utopic utilitarian society… those are very interesting, but not at all practical.
In what practical context do we work with utilities as explicit numbers? I don’t understand what context you’re thinking of. If you have some numbers, then you can normalize them; and if you don’t have numbers, then how does a utility monster even work?
I can’t find a definition of Soba?
(That’s Soma. I don’t believe the joy of consuming Soba comes close to the joy of Soma, although I’ve never eaten Soba in a traditional context.)
Oops, fixed.
(I read it as Zu Lupine Apple.)