You’ll have to forgive me: I am an economist by training, and mentions of utility carry a very specific reference to Jeremy Bentham.
Your definition of the term “maximizing utility” and Bentham’s definition (he was its originator) are significantly different. If you don’t know his version, I will describe it (if you do, sorry for the redundancy).
Jeremy Bentham devised the felicific calculus, a hedonistic philosophy whose defining purpose is to maximize happiness. He held that it was possible, in principle, to construct a literal formula that ranks preferences so as to maximize happiness for the individual. This is the foundation of all utilitarian ethics, since each variant seeks essentially to itemize all preferences.
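To make the “literal formula” idea concrete: Bentham listed seven circumstances of a pleasure or pain (intensity, duration, certainty, propinquity, fecundity, purity, and extent), but gave no actual numeric rule for combining them. The sketch below is my own caricature of such a rule; the scales, the linear combination, and the example numbers are all invented for illustration.

```python
# A toy rendering of Bentham's felicific calculus: score each prospective
# pleasure/pain along his seven circumstances, then sum over everyone
# affected. The combination rule here is invented for illustration;
# Bentham never specified an actual formula.

def hedon_score(intensity, duration, certainty, propinquity,
                fecundity, purity, extent):
    """Crude hedonic value of one prospective pleasure or pain.

    certainty, propinquity, fecundity, purity are on an invented 0-1
    scale; intensity and duration are arbitrary magnitudes; extent is
    the number of people affected.
    """
    base = intensity * duration * certainty * propinquity
    # Fecundity raises, and impurity lowers, the chance of follow-on
    # pleasures/pains (again, an invented way to fold these in).
    tendency = base * (fecundity - (1 - purity))
    return (base + tendency) * extent

# Choosing between two acts by total hedonic score (made-up inputs):
picnic = hedon_score(3, 2, 0.9, 1.0, fecundity=0.2, purity=0.9, extent=4)
overtime = hedon_score(5, 2, 0.6, 0.5, fecundity=0.5, purity=0.4, extent=1)
best = "picnic" if picnic > overtime else "overtime"
```

The point of the sketch is only that, once numbers are assigned, “maximize happiness” becomes a mechanical comparison; everything contentious is hidden in how the numbers are assigned.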
Virtue ethics, for those who do not know it, is the Aristotelian philosophy that posits that each sufficiently differentiated organism or object is naturally optimized for at least one specific purpose above all others. Optimized decision making, for a virtue theorist, means doing the things which best express or develop that specific purpose, much as specialty tools are best used for their specialty. Happiness is said to spring from this as a consequence, not as its goal.
I just want to know, if it is the case that he came to follow the former (Bentham’s) philosophy, how he came to that decision (in theory it is possible to combine the two).
So in this case, while the term may approximate the optimal decision, used in that manner it is not explicit about how the basis for the decision is determined in the first place; that is, unless, as some have done, maximizing happiness is specified as the goal (which I had assumed people were asserting implicitly anyway).
Okay, I was talking about utility maximization in the decision-theoretic sense, i.e., computations of expected utility, etc.
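For anyone unfamiliar, that decision-theoretic usage is just the following: assign a utility to each outcome, weight by probability, and pick the action with the highest expected utility. This is a minimal sketch with made-up lotteries and numbers; note the utility function is an arbitrary assignment over outcomes, not a measure of happiness.

```python
# Decision-theoretic utility maximization: choose the action whose
# probability-weighted average utility is highest. The utilities are
# arbitrary numbers encoding preferences, not happiness scores.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs; probabilities sum to 1."""
    return sum(p * u for p, u in lottery)

# Two hypothetical actions and their outcome lotteries (invented numbers):
actions = {
    "safe":  [(1.0, 50.0)],                # a certain moderate outcome
    "risky": [(0.5, 120.0), (0.5, 0.0)],   # a coin flip
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Nothing in this machinery says what the utilities should track; that is exactly the gap the surrounding discussion is about.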
As far as happiness being The One True Virtue, well, that’s been explicitly addressed.
Anyway, “maximize happiness above all else” is explicitly not it. And utility, as discussed on this site, refers to the decision-theoretic concept. It is not a specific moral theory at all.
Now, the stuff that we consider morality would include happiness as a term, but certainly not as the only thing.
Virtue ethics, as you describe it, gives me an “eeew” reaction, to be honest. It’s the right thing to do simply because it’s what you were optimized for?
If I somehow bioengineer some sort of sentient living weapon thing, is it actually the proper moral thing for that being to go around committing mass slaughter? After all, that’s what it’s “optimized for”...
As I replied to Tarelton, the “Not for the Sake of Happiness (Alone)” post does not address how he came to those conclusions via any specific decision-theoretic optimization. He gives very loose, subjective terms for his conclusions:

The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
That is why I worded my question as I did the first time. I don’t think he has put the same amount of thought into his epistemology as he has into his TDT.