*blinks* I’m curious as to what it is you are asking. A utility function is just a way of encoding and organizing one’s preferences/values. Okay, there are a couple of additional requirements like internal consistency (if you prefer A to B and B to C, you’d better prefer A to C) and such, but other than that, it’s just a convenient way of talking about one’s preferences.
The goal isn’t “maximize utility”, but rather “maximizing utility” is a way of stating what it is you’re doing when you’re working to achieve your goals. Or did I completely misunderstand?
I think there has to be more to utility-function talk than “convenience”; for one thing, it’s not more convenient than preference talk in general. Consider an economic utility function valuing bundles of apples and oranges. If someone’s preferences are summarizable by U(apples, oranges) = sqrt(apples * oranges), that might be convenient, but there’s no free lunch: no compression can be achieved without assumptions about the prior distribution. Believing that preferences tend to have terse functional expressions is a claim about the actual distribution of preferences in the world. The belief that maximizing utility is a perspicuous way of expressing “behave correctly” is something one has to have evidence for.
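To make that concrete, here is a minimal sketch (in Python, with bundles and numbers I made up) of what “summarizable by a utility function” buys you: one formula answers every pairwise preference question, but only because these particular preferences happen to have such a terse functional form.

```python
import math

def u(apples, oranges):
    # Illustrative economic utility over a bundle: U = sqrt(apples * oranges)
    return math.sqrt(apples * oranges)

def weakly_prefers(bundle_a, bundle_b):
    # Bundle A is (weakly) preferred to B iff its utility is at least as high.
    return u(*bundle_a) >= u(*bundle_b)

print(weakly_prefers((4, 1), (1, 1)))  # True:  sqrt(4)  > sqrt(1)
print(weakly_prefers((2, 2), (4, 1)))  # True:  sqrt(4) == sqrt(4), indifferent
print(weakly_prefers((1, 9), (3, 4)))  # False: sqrt(9)  < sqrt(12)
```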
My (very partial) understanding of virtue morality is that virtue ethicists believe that “behave correctly” is well expressed in terms of virtues.
I didn’t mean convenient in the sense of compressibility, but convenient in the sense of representing our preference ordering in a form that lets one talk about questions like “how can I get the world into the best possible state, where ‘best’ is in terms of my values?” in terms of maximizing utility, and, once uncertainty is added, maximizing expected utility.
I just meant “utility doesn’t automatically imply a specific set of values/virtues. It’s more a way of organizing your virtues so that you can at least formally define optimal actions, giving you a starting point to look for ways to approximately compute such things, etc.”
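To be concrete about “formally define optimal actions”, here is a toy sketch of expected-utility maximization; the actions, outcomes, utilities, and probabilities are all invented for illustration, not anyone’s actual values.

```python
# Toy sketch: once preferences are written down as a utility function,
# "optimal action" gets a formal definition: the argmax over actions of
# expected utility. Everything below is made up for illustration.

p_rain = 0.4

# Utility of each (action, weather) outcome; this encodes not only the
# ordering of outcomes but how strongly each is preferred.
utility = {
    ("take_umbrella", "rain"): 8, ("take_umbrella", "sun"): 6,
    ("leave_umbrella", "rain"): 1, ("leave_umbrella", "sun"): 10,
}

def expected_utility(action):
    return p_rain * utility[(action, "rain")] + (1 - p_rain) * utility[(action, "sun")]

best = max(["take_umbrella", "leave_umbrella"], key=expected_utility)
print(best, expected_utility(best))  # take_umbrella 6.8
```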
Or did I misunderstand your point completely?
The phrase “how can I get the world into the best possible state” is explicitly consequentialist. Non-consequentialists (e.g. “The end does not justify the means”) do not admit that correct behavior is getting the world into the best possible state.
Non-utilitarians probably perceive suggestions of maximizing utility, maximizing expected utility, and (in particular) approximating those two as very dangerous and likely to lead to incorrect behavior.
The original poster implied that there is a difference between seeking to maximize utility and (for example) virtue seeking. I’m trying to explain in what sense the original poster had a real point. Not everyone is a utilitarian, and saying “in principle, I could construct a utility function from your preferences” doesn’t make everyone a utilitarian.
Really, non-consequentialism can be rephrased as a consequentialist philosophy by simply including the means, i.e., the history, as part of the “state”: assigning lower value to reaching a given state by bad methods than by good methods.
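Here is a minimal sketch of that rephrasing, with made-up states, histories, and penalty numbers; the point is only that the means can be folded into what gets valued.

```python
# Minimal sketch: value (end state, history) pairs rather than end states
# alone, so reaching the same state by bad methods scores lower.
# States, histories, and numbers are all invented for illustration.

def outcome_value(state):
    return {"everyone_fed": 10, "status_quo": 0}[state]

def history_penalty(history):
    # Side-constraints ("the end does not justify the means") show up here
    # as penalties on the means themselves.
    return sum(5 for step in history if step == "broke_promise")

def utility(state, history):
    return outcome_value(state) - history_penalty(history)

print(utility("everyone_fed", ["kept_promise"]))                  # 10
print(utility("everyone_fed", ["broke_promise", "kept_promise"])) # 5
```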
Or am I still not getting it?
Yes, it’s possible to encode non-consequentialism or “non-utilitarianism” into the utility function. However, by doing so you make the utility function inconvenient to work with. You can’t simultaneously claim that the utility function is “simply” an encoding of people’s preferences and ALSO that the utility function is convenient or preferable.
Then you go and approximate the (uglified) utility function! Put yourself in the virtue theorist’s or Kantian’s shoes. It certainly sounds to me like you’re planning to discard their concerns regarding moral/ethical/correct behavior.
(Note: I don’t actually understand virtue ethics at all, so I might be getting this entirely wrong.) Imagine the virtue ethicist saying “Your concerns can be encoded into the virtue of ‘achieves a desirable goal’, and will be included in our system along with the other virtues.” Would you want to know WHY the system is being built with virtues at the bottom and consequentialism as an encoding? Would your questions make sense?
It’s “convenient” in the sense of giving us a general way of talking about how to make decisions. It’s “convenient” in that it is set up in such a way as to encode not just what you prefer more than other stuff, but how much more, etc.
It also lets us take advantage of whatever decision-theory theorems have been proven, and so on...
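For instance, here is a toy sketch (all numbers made up) of why the “how much more” part has consequences: two utility functions that agree on the ordering of sure outcomes can still disagree about a gamble.

```python
# Toy sketch: u1 and u2 agree on the ordering of sure outcomes (C > B > A),
# but disagree on *how much* better C is than B, so they recommend different
# choices between a sure B and a coin flip. All numbers are made up.

u1 = {"A": 0, "B": 1, "C": 10}   # B barely better than A
u2 = {"A": 0, "B": 9, "C": 10}   # B almost as good as C

def expected_utility(u, lottery):
    return sum(p * u[outcome] for outcome, p in lottery.items())

sure_b = {"B": 1.0}
gamble = {"A": 0.5, "C": 0.5}    # 50/50 between the worst and best outcomes

for name, u in (("u1", u1), ("u2", u2)):
    pick = "the gamble" if expected_utility(u, gamble) > expected_utility(u, sure_b) else "sure B"
    print(name, "picks", pick)   # u1 picks the gamble, u2 picks sure B
```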
As far as the “virtue of achieving a desirable goal” goes, “desirable”, “virtue”, and “achieving” would be doing all the heavy lifting there. :)
But really, my point was simply that the original comment was stated in such a way as to imply that “maximizing utility” was itself a moral philosophy, i.e., the sort of thing about which you could say “I consider that immoral, and instead care about personal virtue”. I was simply saying “huh? utility stuff is just a way of talking about whatever values you happen to have. It’s not, on its own, a specific set of values. It’s like, I guess, saying ‘what if I don’t believe in math and instead believe in electromagnetism?’”
You’ll have to forgive me: I am an economist by training, and for me mentions of utility refer very specifically to Jeremy Bentham.
Your definition of what the term “maximizing utility” means and Bentham’s definition (he was the originator) are significantly different; if you don’t know his, I will describe it (if you do, sorry for the redundancy).
Jeremy Bentham devised the felicific calculus, a hedonistic philosophy whose defining purpose is to maximize happiness. He was of the opinion that it was possible, in theory, to create a literal formula giving optimized preferences such that happiness is maximized for the individual. This is the foundation for all utilitarian ethics, as each variant seeks to essentially itemize all preferences.
Virtue ethics, for those who do not know, is the Aristotelian philosophy that posits that each sufficiently differentiated organism or object is naturally optimized for at least one specific purpose above all other purposes. Optimized decision making, for a virtue theorist, means doing the things which best express or develop that specific purpose, similar to how specialty tools are best used for their specialty. Happiness is said to spring from this as a consequence, not as its goal.
I just want to know, if it is the case that he came to follow the former (Bentham) philosophy, how he came to that decision (theoretically it is possible to combine the two).
So in this case, while the term may give an approximation of the optimal decision, used in that manner it is not explicitly clear about what the basis for the decision is in the first place; that is, unless, as some have done, it is specified that maximizing happiness is the goal (which I had just assumed people were asserting implicitly anyhow).
Okay, I was talking about utility maximization in the decision-theory sense, i.e., computations of expected utility, etc.
As far as happiness being The One True Virtue, well, that’s been explicitly addressed.
Anyways, “maximize happiness above all else” is explicitly not it. And utility, as discussed on this site, is a reference to the decision-theoretic concept. It is not a specific moral theory at all.
Now, the stuff that we consider morality would include happiness as a term, but certainly not as the only thing.
Virtue ethics, as you describe it, gives me an “eeew” reaction, to be honest. It’s the right thing to do simply because it’s what you were optimized for?
If I somehow bioengineer some sort of sentient living weapon thing, is it actually the proper moral thing for that being to go around committing mass slaughter? After all, that’s what it’s “optimized for”...
As I replied to Tarelton, the “not for the sake of happiness alone” post does not explain how he came to his conclusions via any specific decision-theoretic optimization. He gives very loose, subjective terms for his conclusions:
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
which is why I worded my question as I did the first time. I don’t think he has done the same amount of thinking on his epistemology as he has on his TDT.
Thanks, I followed up below.