Anyways, “maximize happiness above all else” is explicitly not it. And utility, as discussed on this site, refers to the decision-theoretic concept. It is not a specific moral theory at all.
Now, the stuff that we consider morality would include happiness as a term, but certainly not as the only thing.
Virtue ethics, as you describe it, gives me an “eeew” reaction, to be honest. It’s the right thing to do simply because it’s what you were optimized for?
If I somehow bioengineer some sort of sentient living weapon thing, is it actually the proper moral thing for that being to go around committing mass slaughter? After all, that’s what it’s “optimized for”...
As I replied to Tarleton, the “Not for the Sake of Happiness (Alone)” post does not explain how he reached those conclusions via any specific decision-theoretic optimization. He gives only loose, subjective terms for his conclusions:
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
which is why I worded my question as I did the first time. I don’t think he has done the same amount of thinking on his epistemology as he has on his TDT.
Okay, I was talking about utility maximization in the decision theory sense, i.e., computations of expected utility, etc.
As far as happiness being The One True Virtue, well, that’s been explicitly addressed.