I’m a classical utilitarian, so I don’t have this problem.
If I were to accept preference utilitarianism, I’d say that fulfilled preferences are worth utility, and that by bringing people into being I’d allow them to have fulfilled preferences.
Of course, I’d also say that you should lock people in small, brightly lit spaces to make them prefer big, empty, dark spaces, like most of the universe. Then they’d have really fulfilled preferences. Perhaps I just don’t understand preference utilitarianism.
In general, I think that most desires aren’t fulfilled on a viscerally emotional level by the mere existence of something so much as by actually receiving it. I’m not nearly as fulfilled by ice cream’s existence as I am when I’m eating it.
I don’t think those people would prefer having their preferences changed in that way.
If you mean they have to actually feel the emotion of a preference being fulfilled, isn’t that just happiness?
Care to specify the utility function that you claim to follow? :-)
Maximize pleasure minus pain.
Now I have two undefined terms, rather than one.
I’m not trying to be a sophist here; I’m just pointing out that “classical utilitarians” are following a complicated, mostly unspecified utility function. This is OK! There is nothing wrong with it.
But there’s also nothing wrong with having a different complicated utility function, one that captures more of your values. Classical utilitarians do not have some special utility function, selected on some abstract simplicity criterion; they’re in there with the rest of us (as long as we are utilitarians of some type).
Thank you for showing me this!
Cheers :-)
Most people’s ethics are based on their desires. People’s desires are based on what makes them happy. That’s as far down as it goes.
A somewhat simplistic definition of happiness is positive reinforcement. If you alter your preferences towards what’s happening now, you’re happy. If you alter them away, you’re sad.
A utility function is quantitative, not qualitative.
How would you go about transforming these vague statements into a precise mathematical definition?
(I’ll grant you “black box rights”: you can use terms like anger, doubt, etc. that humans can understand, without having to define them mathematically. So if you come up with a scale of anger with generally understandable anecdotes attached to each level, that will be enough to quantify the “anger” term in your overall utility function, which we will need when we start talking quantitatively about trading anger off against pain, love, pleasure, embarrassment...) Indirect ways of measuring utility (“utility is money” being the most trivial) are also valid if you don’t want to wade into the mess of human psychology, but they come with their own drawbacks (e.g. conflating instrumental with terminal goals).
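For concreteness, here is a minimal sketch (in Python) of what a utility function with black-boxed terms might look like. The 0–10 anecdote-anchored scales and the trade-off weights are assumptions of mine, invented purely for illustration:

```python
# A hypothetical utility function built from "black box" emotion terms.
# ASSUMPTIONS (illustrative only): each emotion is scored on a 0-10 scale
# anchored by recognizable anecdotes, and the trade-off weights below are
# invented placeholders, not claims about real values.

TRADE_OFF_WEIGHTS = {
    "pleasure": 1.0,        # baseline unit of utility
    "love": 1.5,
    "pain": -1.2,
    "anger": -0.8,
    "embarrassment": -0.3,
}

def utility(emotion_levels: dict[str, float]) -> float:
    """Weighted sum of black-box emotion scores for one person-moment."""
    return sum(TRADE_OFF_WEIGHTS[emotion] * level
               for emotion, level in emotion_levels.items())

# Example: trading mild embarrassment off against strong pleasure.
print(utility({"pleasure": 7.0, "embarrassment": 4.0}))  # 7*1.0 - 4*0.3 = 5.8
```

All of the philosophical work, of course, hides in the weights; choosing them is exactly the quantitative trade-off problem described above.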
Utility is the dot product of the derivative of desires and the observations. Desires are what you attempt to make happen.
If you start trying to make what’s currently happening happen more often, then you’re happy.
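Made precise under one possible reading (the notation here is mine, not anything the original claim commits to): let D(t) be a vector of desire strengths over possible states, and O(t) an indicator vector of which states are currently observed. Then:

```latex
% One possible formalization; D(t) and O(t) are assumed notation.
\[
  U(t) = \frac{dD}{dt}(t) \cdot O(t)
\]
% If desires are shifting toward the observed states, the dot product is
% positive (happiness); if they are shifting away, it is negative (sadness).
```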
I don’t think most utilitarians claim to follow (or even know) their utility function so much as assert that utility maximization is the proper way to resolve moral conflicts.
Kind of like how physicists claim that there is a theory of everything without actually knowing what it is.
I perfectly agree that utility maximisation is indeed the proper way to resolve common moral conflicts.
But utility functions can be as complex as you need them to be! Saying you have a utility function places virtually no constraints on you. Yet sometimes total utilitarians like to claim that their version is better because it is “simpler” or “more intuitive”.
First of all, simplicity is not a virtue comparable with, say, human lives or happiness; secondly, I have different intuitions from theirs; and thirdly, their actual utility function, if it were fully specified, would be unbelievably complex anyway.
I don’t want to pour important moral insights down the drain based on specious simplicity arguments.