I’m sorry, but if we had no knowledge of brains, cognition, and the nature of preference, then sure, I’d use my feelings of right or wrong as much as the next guy, but that doesn’t make them objectively true.
Likewise, just because I intuitively feel like I have a time-continuous self, that doesn’t make consciousness fundamental.
As an agent, having knowledge of what I am, and of what causes my experiences, turns my simple reliance on heuristics into a more accurate scientific exploration of the truth.
Just make sure that the particular piece of knowledge you demand is indeed available, and not, say, just the thing you are trying to figure out.
(Nod)
I still think it’s a pretty simple case here. Is there a set of preferences which all intelligent agents are compelled by some force to adopt? Not as far as I can tell.
Morality doesn’t work like physical law either. Nobody is compelled to be rational, but people who do reason can agree about certain things. That includes moral reasoning.
I think we should move this conversation back out of the other post, where it really doesn’t belong.
Can you clarify what you mean by this?
For what X are you saying “All agents that satisfy X must follow morality.”?
If you’re moving it anyway, I would recommend moving it here instead.
I’m saying that in “to be moral you must follow whatever rules constitute morality” the “must” is a matter of logical necessity, as opposed to the two interpretations of compulsion considered by NMJ: physical necessity and edict.
You still haven’t explained, within this framework, how one gets that people “should” be moral any more than they “should” play chess. If morality is just another game, then it loses all the force you associate with it, and it seems clear that you are distinguishing between chess and morality.
The rules of physics have a special quality of unavoidability: you don’t have the option of avoiding them. Likewise, people are held morally accountable under most circumstances and can’t just avoid culpability by saying “oh, I don’t play that game”. I don’t think these are a posteriori facts. I think physics is definitionally the science of the fundamental, and morality is definitionally where the evaluative buck stops.
… but they’re held morally accountable by agents whose preferences have been violated. The way you just described it means that morality is just those rules that the people around you currently care enough about to punish you if you break them.
In which case morality is entirely subjective and contingent on what those around you happen to value, no?
It can make sense to say that the person being punished was actually in the right. Were the British right to imprison Gandhi?
Peter, at this point, you seem very confused. You’ve asserted that morality is just like chess, apparently comparing it to a game with agreed-upon rules. You’ve then tried to assert that morality is somehow different, a more privileged game that people “should” play, but the only evidence you’ve given is that, in societies with a given moral system, people who don’t abide by that system suffer. Yet your comment about Gandhi then endorses naive moral realism.
It is possible that there’s a coherent position here and we’re just failing to understand you. But right now that looks unlikely.
As I have pointed out about three times, the comparison with chess was to make a point about obligation, not to make a point about arbitrariness.
the only evidence you’ve given is that, in societies with a given moral system, people who don’t abide by that system suffer

I never gave that; that was someone else’s characterisation. What I said was that it is an analytical truth that morality is where the evaluative buck stops.
I don’t know what you mean by the “naive” in naive realism. It is a central characteristic of any kind of realism that you can have truth beyond conventional belief. The idea that there is more to morality than what a particular society wants to punish is a coherent one. It is better as morality, because subjectivism is too open to get-out clauses. It is better as an explanation, because it can explain how de facto morality in societies and individuals can be overturned for something better.
Yes, roughly speaking when the person being punished fits into the category ‘us’ rather than ‘them’. Especially ‘me’.
Hmm… This is reminiscent of Eliezer’s (and my) metaethics¹. In particular, I would say that “the rules that constitute morality” are, by the definition embedded in my brain, some set which I’m not exactly sure of the contents of but which definitely includes {kindness, not murdering, not stealing, allowing freedom, …}. (Well, it may actually be a utility function, but sets are easier to convey in text.)
In that case, “should”, “moral”, “right” and the rest are all just different words for “the object is in the above set (which we call morality)”. And then “being moral” means “following those rules” as a matter of logical necessity, as you’ve said. But this depends on what you mean by “the rules constituting morality”, and you haven’t yet said whether you agree with this reading.
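To make the “different words for set membership” reading concrete, here is a minimal sketch; the set contents and names are purely illustrative stand-ins of mine, not anyone’s actual specification of morality:

```python
# Illustrative stand-in for "the rules that constitute morality" as a set.
# (The real object may be a utility function; a set is just easier to write down.)
MORALITY = {"kindness", "not murdering", "not stealing", "allowing freedom"}

def is_moral(rule: str) -> bool:
    """'should', 'moral' and 'right' all reduce to one membership test."""
    return rule in MORALITY

def is_right(rule: str) -> bool:
    return is_moral(rule)  # just another word for the same test

print(is_moral("kindness"))                # True
print(is_right("maximising paperclips"))   # False
```

On this reading, disagreement about what one “should” do is disagreement about the contents of the set, not about what the word “should” does.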
What do you think?
What determines the contents of the set / details of the utility function?
The short answer is: my/our preferences (suitably extrapolated).
The long answer is: it exists as a mathematical object regardless of anyone’s preference, and one can judge things by it even in an empty universe. The reason we happen to care about this particular object is because it embodies our preferences, and we can find out exactly what object we are talking about by examining our preferences. It really adds up to the same thing, but if one only heard the short answer they might think it was about preferences, rather than described by them.
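A hedged sketch of that short-versus-long distinction, again with hypothetical names: our preferences are how we locate the object, but judgments then refer to that fixed object rather than to whatever anyone happens to prefer at the moment:

```python
from typing import FrozenSet, Iterable

def extrapolate(preferences: Iterable[str]) -> FrozenSet[str]:
    """Hypothetical stand-in for 'suitably extrapolated': examining our
    preferences only tells us WHICH fixed object we were talking about."""
    return frozenset(preferences)

HUMAN_PREFERENCES = ["kindness", "not murdering", "not stealing", "allowing freedom"]

# We find the object by way of our preferences...
MORALITY = extrapolate(HUMAN_PREFERENCES)

# ...but evaluation is against that fixed object, so a later change in
# someone's preferences does not change what MORALITY refers to.
def is_moral(rule: str) -> bool:
    return rule in MORALITY

print(is_moral("kindness"))  # True, even in a universe with no one left to care
```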
But anyway, I think I’m mostly trying to summarise the metaethics sequence by this point :/ (probably wrongly :p)
I see what you mean, and I don’t think I disagree.
I think one more question will clarify. If your / our preferences were different, would the mathematical set / utility function you consider to be morality be different also? Namely, is the set of “rules that constitute morality” contingent upon what an agent already values (suitably extrapolated)?
No. On the other hand, me!pebble-sorter would have no interest in morality at all, and go on instead about how p-great p-morality is. But I wouldn’t mix up p-morality with morality.
So, you’re defining “morality” as an extrapolation from your preferences now, and if your preferences change in the future, that future person would care about what your present self might call futureYou-morality, even if future you insists on calling it “morality”?