Hmm… This is reminiscent of Eliezer’s (and my) metaethics¹. In particular, I would say that “the rules that constitute morality” are, by the definition embedded in my brain, some set whose exact contents I’m not sure of, but which definitely includes {kindness, not murdering, not stealing, allowing freedom, …}. (Well, it may actually be a utility function, but sets are easier to convey in text.)
In that case, “should”, “moral”, “right” and the rest are all just different words for “the object is in the above set (which we call morality)”. And then “being moral” means “following those rules” as a matter of logical necessity, as you’ve said. But all of this depends on what you mean by “the rules constituting morality”, and you haven’t said whether you agree with the definition above.
What do you think?
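A minimal sketch of that reading in Python, assuming the set could be written out at all; the listed contents are placeholders for the real, partly unknown set, not the actual extrapolated rules:

```python
# Toy model: "morality" names one fixed set of rules, and the moral
# vocabulary ("should", "right", "moral") just asks about membership.
# The contents below are placeholders for the real, partly unknown set.
MORALITY = frozenset({
    "kindness",
    "not murdering",
    "not stealing",
    "allowing freedom",
})

def is_moral(action_property: str) -> bool:
    """'X is moral' unpacks to 'X is in the set we call morality'."""
    return action_property in MORALITY

# "Being moral means following those rules" then holds by definition
# of the predicate, i.e. as a matter of logical necessity.
assert is_moral("kindness")
assert not is_moral("sorting pebbles into prime-numbered heaps")
```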
What determines the contents of the set / details of the utility function?
The short answer is: my/our preferences (suitably extrapolated).
The long answer is: it exists as a mathematical object regardless of anyone’s preferences, and one can judge things by it even in an empty universe. The reason we happen to care about this particular object is that it embodies our preferences, and we can find out exactly what object we are talking about by examining our preferences. It really adds up to the same thing, but someone who only heard the short answer might think morality was about preferences, rather than merely described by them.
But anyway, I think I’m mostly trying to summarise the metaethics sequence by this point :/ (probably wrongly :p)
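Continuing the toy model above (with `extrapolate` as a pure placeholder for whatever “suitably extrapolated” means), the short and long answers can be read as two separate steps: preferences are consulted once, to locate the object; the object itself then judges things without taking anyone’s preferences as input.

```python
# Hypothetical two-step reading of the short answer vs. the long answer.

def extrapolate(raw_preferences: frozenset) -> frozenset:
    """Placeholder for the real (hard, unspecified) extrapolation step."""
    return raw_preferences | {"allowing freedom"}

# Step 1, epistemic: our preferences tell us *which* fixed object we mean.
located_rules = extrapolate(frozenset({"kindness", "not murdering", "not stealing"}))

# Step 2, evaluative: judging by that object takes no agent and no
# preferences as input, so the verdicts would be the same in an empty universe.
def judged_right_by(rules: frozenset, action_property: str) -> bool:
    return action_property in rules

assert judged_right_by(located_rules, "kindness")
```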
I see what you mean, and I don’t think I disagree.
I think one more question will clarify. If your / our preferences were different, would the mathematical set / utility function you consider to be morality be different also? Namely, is the set of “rules that constitute morality” contingent upon what an agent already values (suitably extrapolated)?
No. On the other hand, me!pebble-sorter would have no interest in morality at all, and go on instead about how p-great p-morality is. But I wouldn’t mix up p-morality with morality.
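To make the pebble-sorter contrast concrete, one last extension of the same toy model (the p-rules are invented purely for illustration): morality and p-morality are two distinct fixed objects; which one an agent cares about varies with the agent, but that doesn’t change which object either word names.

```python
# Two distinct, fixed objects; neither depends on who is doing the asking.
MORALITY = frozenset({"kindness", "not murdering", "not stealing", "allowing freedom"})
P_MORALITY = frozenset({"heaps of 7 pebbles", "heaps of 13 pebbles"})  # illustrative p-rules

def object_cared_about_by(agent: str) -> frozenset:
    """Which object an agent's (extrapolated) preferences happen to pick out."""
    return P_MORALITY if agent == "me!pebble-sorter" else MORALITY

# me!pebble-sorter cares about p-morality and not at all about morality,
# but that doesn't turn p-morality into morality:
assert object_cared_about_by("me!pebble-sorter") == P_MORALITY
assert object_cared_about_by("me") == MORALITY
assert MORALITY != P_MORALITY
```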
So, you’re defining “morality” as an extrapolation from your preferences as they are now, and if your preferences change in the future, that future person would care about what your present self might call futureYou-morality, even if future-you insists on calling it “morality”?