I agree with this in the abstract, but in any particular situation the ‘morality’ is part of the content of the ‘utility function’, and so it is directly relevant to whether something really is a better way of maximizing that utility function.
If you’re talking about behaviors, morality is relevant.
I agree with this in the abstract, but if you adopt the view that morality is already factored into your utility function (as I do), then you probably don’t need to pay attention when other people say your behavior is immoral (as many critics of PUA here do). I think when Alice calls Bob’s behavior immoral, she’s not setting out to help Bob maximize his utility function more effectively; she’s trying to enforce a perceived social contract or just score points.
if you adopt the view that morality is already factored into your utility function
(You are not necessarily able to intuitively feel what your “utility function” specifies, and moral arguments can point out that you are not paying attention, for example, to its terms that refer to the experiences of specific other people.)
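A toy sketch of that parenthetical’s point (my own illustration, not anything from the thread; the decomposition and field names are entirely hypothetical): if the utility function contains terms for specific other people’s experiences, an agent who overlooks those terms mis-ranks actions, and a critic who points at a neglected term is giving instrumentally useful information, not just scoring points.

```python
# Toy model (hypothetical, for illustration only): a utility function whose
# content includes terms that refer to the experiences of specific other people.

def utility(outcome):
    # Self-regarding payoff plus other-regarding terms.
    return outcome["own_payoff"] + sum(outcome["effects_on_others"])

# Bob evaluates two courses of action. In the first he ignores the
# other-people terms entirely; in the second he accounts for two people.
ignoring_others = {"own_payoff": 10, "effects_on_others": []}
accounting_for_others = {"own_payoff": 8, "effects_on_others": [-1, 5]}

# By the full utility function the attentive action wins (12 > 10), so a
# critic who points at the neglected terms is helping Bob maximize.
print(utility(ignoring_others))        # 10
print(utility(accounting_for_others))  # 12
```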
I disagree, especially here on LW! When user-Bob tells user-Alice that her behavior is immoral, he’s probably setting out to help her maximize her utility function more effectively.
Or at least, that’s why I do it. A virtue is a trait of character that is good for the person who has it.
ETA: Otherwise, the argument is fully general. For humanity in general, when Alice says x to Bob, she is trying to enforce a perceived social contract, or score points, or signal tribal affiliation. So, you shouldn’t listen to anybody about anything w.r.t. becoming more instrumentally effective. And that seems obviously wrong, at least here.
I disagree, especially here on LW! When user-Bob tells user-Alice that her behavior is immoral, he’s probably setting out to help her maximize her utility function more effectively.
My historical observations do not support this prediction.
I submit that if I say, “you should x”, and it is not the case that “x is rational”, then I’m doing something wrong. Your putative observations should have been associated with downvotes, and the charitable interpretation remains that comments here are in support of rationality.