Saying it’s true-for-me-but-not-for-you conflates two very different things: truth being agent-relative, and descriptive statements about agents being true or false depending on the agent they refer to. “X is 6 feet tall” is true when X is someone who’s 6 feet tall and false when X is someone who’s 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar—“X is the right thing for TheAncientGeek to do” is an objectively true (or false) statement, regardless of who’s evaluating you. Encountering “X is the right thing to do if you’re Person A and the wrong thing to do if you’re Person B” and concluding that morality is subjective is the same sort of mistake as encountering the statement “Person A is 6 feet tall and Person B is not 6 feet tall” and concluding that height is subjective.
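To put the point in concrete terms, here is a toy sketch (all names and data invented for illustration): an agent-indexed predicate is just a deterministic function of its arguments, so its value changes with who X is, while every evaluator computes the same answer.

```python
# Toy sketch with invented names and data: an agent-indexed predicate is
# still an ordinary function of its arguments. Its output varies with who
# X is, but not with who evaluates it; every caller gets the same answer.

HEIGHTS = {"Person A": 72, "Person B": 48}  # heights in inches (made up)

def is_six_feet_tall(x: str) -> bool:
    """Objective, even though the truth-value depends on who x is."""
    return HEIGHTS[x] == 72

# The moral predicate in the analogy has the same shape: indexed to the
# agent, not to the evaluator.
def is_right_for(agent: str, action: str) -> bool:
    """Placeholder for the agent-indexed moral predicate."""
    raise NotImplementedError

print(is_six_feet_tall("Person A"))  # True
print(is_six_feet_tall("Person B"))  # False; neither answer is subjective
```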
It may well be, but that’s a less interesting and contentious claim. It’s fairly widely accepted that the sum total of ethics is inferrable from (supervenes on) the sum total of facts.
Morality is similar—“X is the right thing for TheAncientGeek to do” is an objectively true (or false) statement, regardless of who’s evaluating you.
Not so! Rather, “X is the right thing for TheAncientGeek to do given TheAncientGeek’s values” is an objectively true (or false) statement. But “X is the right thing for TheAncientGeek to do” tout court is not; it depends on a specific value system being implicitly understood.
“X is the right thing for TheAncientGeek to do” is synonymous with “X is the right thing for TheAncientGeek to do according to his (reflectively consistent) values”. You may not want him to act in accordance with his values, but that doesn’t change the fact that he should—much like in the standard analysis of the prisoner’s dilemma, each prisoner wants the other to cooperate, but has to admit that each of them should defect.
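For concreteness, the dominance reasoning in that standard analysis can be checked mechanically. A minimal sketch, assuming the conventional illustrative payoffs (years served, so lower is better):

```python
# Conventional prisoner's dilemma payoffs (illustrative): entries are
# (my_years, their_years) given (my_move, their_move); fewer years is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def my_years(mine: str, theirs: str) -> int:
    return PAYOFFS[(mine, theirs)][0]

# Whatever the other prisoner does, defecting gives me strictly fewer years...
for theirs in ("cooperate", "defect"):
    assert my_years("defect", theirs) < my_years("cooperate", theirs)

# ...and yet I am better off still if the other prisoner cooperates:
assert my_years("defect", "cooperate") < my_years("defect", "defect")

print("Each prisoner should defect, while wanting the other to cooperate.")
```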
Same mistake. Only actions that affect others are morally relevant, from which it follows that rightness cannot be evaluated from one person’s values alone.
Maximizing one’s values solipsistically is hedonism, not morality.
Saying it’s true-for-me-but-not-for-you conflates two very different things: truth being agent-relative, and descriptive statements about agents being true or false depending on the agent they refer to. “X is 6 feet tall” is true when X is someone who’s 6 feet tall and false when X is someone who’s 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar—“X is the right thing for TheAncientGeek to do” is an objectively true (or false) statement, regardless of who’s evaluating you. Encountering “X is the right thing to do if you’re Person A and the wrong thing to do if you’re Person B” and concluding that morality is subjective is the same sort of mistake as encountering the statement “Person A is 6 feet tall and Person B is not 6 feet tall” and concluding that height is subjective.
See my other reply.
Indexing statements about individuals to individuals is harmless. Subjectivity comes in when you index statements about something else to individuals.
Morally relevant actions are actions which potentially affect others.
Your morality machine is subjective because I don’t need to feed in anyone else’s preferences, even though my actions will affect them.
Other people’s preferences are part of states of the world, and states of the world are fed into the machine.
Not part of the original spec!!!
Fair enough. In that case, the machine would tell you something like “Find out expected states of the world. If it’s A, do X. If it’s B, do Y”.
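A minimal sketch of what that amended machine might look like, with all state names and value weights invented: its output is a contingency plan over expected states of the world, and those states can encode other people’s preferences.

```python
from typing import Callable, Dict

def morality_machine(values: Dict[str, float]) -> Callable[[str], str]:
    """Hypothetical: return a plan mapping an expected world-state to an action.

    World-states are taken to include other people's preferences, per the
    exchange above.
    """
    def plan(expected_state: str) -> str:
        if expected_state == "A":    # e.g. the affected people prefer X
            return "do X"
        if expected_state == "B":    # e.g. the affected people prefer Y
            return "do Y"
        return "find out the expected state of the world first"
    return plan

plan = morality_machine({"fairness": 1.0})  # toy value weights
print(plan("A"))  # do X
print(plan("B"))  # do Y
```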
It may well be, but that’s a less interesting and contentious claim. It’s fairly widely accepted that the sum total of ethics is inferrable from (supervenes on) the sum total of facts.
Not so! Rather, “X is the right thing for TheAncientGeek to do given TheAncientGeek’s values” is an objectively true (or false) statement. But “X is the right thing for TheAncientGeek to do” tout court is not; it depends on a specific value system being implicitly understood.
“X is the right thing for TheAncientGeek to do” is synonymous with “X is the right thing for TheAncientGeek to do according to his (reflectively consistent) values”. You may not want him to act in accordance with his values, but that doesn’t change the fact that he should—much like in the standard analysis of the prisoner’s dilemma, each prisoner wants the other to cooperate, but has to admit that each of them should defect.
Same mistake. Only actions that affect others are morally relevant, from which it follows that rightness cannot be evaluated from one person’s values alone.
Maximizing one’s values solipsistically is hedonism, not morality.
Notice I didn’t use the term “morality” in the grandparent. Cf. my other comment.
But the umpteenth grandparent was explicitly about morality.