I’m not trying to be a sophist here; I’m just pointing out that “classical utilitarians” are following a complicated, mostly unspecified utility function. This is ok! There is nothing wrong with it.
But there’s also nothing wrong with having a different, complicated utility function, one that captures more of your values. Classical utilitarians do not have some special utility function, selected on some abstract simplicity criterion; they’re in there with the rest of us (as long as we are utilitarians of some type).
Most people’s ethics are based on their desires. People’s desires are based on what makes them happy. That’s as far down as it goes.
A somewhat simplistic definition of happiness is positive reinforcement. If you alter your preferences towards what’s happening now, you’re happy. If you alter them away, you’re sad.
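For concreteness, here is one minimal way to cash out that definition (my own toy reading, not anything stated in the discussion): represent preferences as weights over states, apply a reinforcement-style update when a state is observed, and read “happy” or “sad” off the direction of the update. The state names and the update rate are arbitrary illustrations.

```python
# Toy sketch of "happiness as positive reinforcement" (my own reading, not from the
# thread): preferences are weights over states, an update nudges the weight of the
# observed state, and mood is the direction of that nudge.

def update_preferences(preferences, observed_state, rate=0.1):
    """Shift the weight of the observed state by `rate` (positive = reinforcement,
    negative = aversion) and renormalize."""
    updated = dict(preferences)
    updated[observed_state] = max(updated[observed_state] + rate, 0.0)
    total = sum(updated.values())
    return {state: weight / total for state, weight in updated.items()}

def mood(before, after, observed_state):
    """'happy' if preferences moved toward what is happening now, 'sad' if away."""
    return "happy" if after[observed_state] > before[observed_state] else "sad"

prefs = {"sunny_walk": 0.5, "stuck_in_traffic": 0.5}

reinforced = update_preferences(prefs, "sunny_walk", rate=0.1)
print(mood(prefs, reinforced, "sunny_walk"))        # happy: preference moved toward the present

averse = update_preferences(prefs, "stuck_in_traffic", rate=-0.1)
print(mood(prefs, averse, "stuck_in_traffic"))      # sad: preference moved away from the present
```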
A utility function is quantitative, not qualitative.
How would you go about transforming these vague statements into a precise mathematical definition?
(I’ll grant you “black box rights”: you can use terms such as anger, doubt, etc., that humans can understand, without having to define them mathematically. So if you come up with a scale of anger with generally understandable anecdotes attached to each level, that will be enough to pin down the “anger” term in your overall utility function, which we will need when we start talking quantitatively about trading anger off against pain, love, pleasure, embarrassment...) Indirect ways of measuring utility (“utility is money” being the most trivial) are also valid if you don’t want to wade into the mess of human psychology, but they come with their own drawbacks (instrumental versus terminal goals, for example).
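To make the “black box rights” idea concrete, here is a minimal sketch (the terms, scales, and trade-off weights are all invented for illustration): each emotion gets an anecdote-anchored score from a human, and the overall utility function is just a weighted sum whose weights encode how the terms trade off against one another.

```python
# Toy sketch (illustrative only; the terms, scales and weights are invented, not
# taken from the thread): a utility function whose components are "black box"
# human-interpretable scales, combined with explicit trade-off weights.

# Each emotion is scored on a 0-10 scale anchored by anecdotes the scorer understands
# ("3 = annoyed at a slow queue", "8 = shouting match", ...). The anchoring lives in
# the scorer's head; here we just take the numbers.
TRADEOFF_WEIGHTS = {
    "pleasure": +1.0,   # one unit of pleasure is the baseline
    "love": +1.5,       # valued 1.5x as much as pleasure (arbitrary choice)
    "pain": -2.0,       # one unit of pain outweighs two of pleasure
    "anger": -0.5,
    "embarrassment": -0.3,
}

def utility(black_box_scores):
    """Weighted sum of black-box emotion scores; the weights encode the trade-offs."""
    return sum(TRADEOFF_WEIGHTS[term] * level for term, level in black_box_scores.items())

# An outcome scored by a human on the anchored scales:
outcome = {"pleasure": 6, "love": 2, "pain": 1, "anger": 4, "embarrassment": 0}
print(utility(outcome))  # 6*1.0 + 2*1.5 - 1*2.0 - 4*0.5 - 0*0.3 = 5.0
```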
Now I have two undefined terms, rather than one.
Thank you for showing me this!
Cheers :-)
Utility is the dot product of the derivative of your desires with your observations. Desires are what you attempt to make happen.
If you start trying to make what’s currently happening happen more often, then you’re happy.
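Taken literally, that proposal can be written down in a few lines (this is my formalization; encoding desires and observations as vectors over a shared set of features is an assumption): utility is the dot product of the change in the desire vector with the observation vector, so shifting your desires toward what you currently observe comes out positive.

```python
# A literal reading of the proposal (my formalization; the feature encoding is an
# assumption): utility is the dot product of the discrete-time derivative of the
# desire vector with the observation vector.

def utility(desires_before, desires_after, observations):
    """Dot product of the change in desires with the current observations."""
    desire_change = [after - before for before, after in zip(desires_before, desires_after)]
    return sum(d * o for d, o in zip(desire_change, observations))

# Features: [sunshine, traffic]. It is currently sunny and traffic-free.
observations = [1.0, 0.0]

# Desires shift toward what is currently happening -> positive utility ("happy").
print(utility([0.25, 0.5], [0.75, 0.5], observations))   # 0.5

# Desires shift away from what is currently happening -> negative utility ("sad").
print(utility([0.75, 0.5], [0.25, 0.5], observations))   # -0.5
```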