Note the word objective.
An objective morality machine would tell you the One True Objective Thing TheAncientGeek Should Do, given your values, but this thing need not be the same as The One True Objective Thing Blacktrance Should Do. The calculations it performs are the same in both cases (which is what makes it objective), but the outputs are different.
You are misusing “objective”. How does your usage differ from telling me what I should do subjectively? How can true-for-me-but-not-for-you clauses fail to indicate subjectivity? How can it be coherent to say there is one truth, only it is different for everybody?
A person’s height is objectively measurable; that does not mean all people have the same height.
“True about person P” is objective.
“True for person P about X” is subjective.
Subjectivity is multiple truths about one thing, i.e., multiple claims about one thing, which are indexed to individuals and which would be contradictory without the indexing.
In this discussion, I understand there to be three positions:
1. There is one objectively measurable value system.
2. There is an objectively measurable value system for each agent.
3. There are no objectively measurable value systems.
The ‘objective’ and ‘subjective’ distinction is not particularly useful for this discussion, because it conflates the separation between ‘measurable’ and ‘unmeasurable’ (1+2 vs. 3) with the separation between ‘universal’ and ‘particular’ (1 vs. 2+3).
But even ‘universal’ and ‘particular’ are not quite the right words: Clippy’s particular preference for paperclips is one that Clippy would like to enforce on the entire universe.
No one holds position 3. Position 1 is ambiguous; it depends on whether we’re speaking “in character” or not. If we are, then it follows from position 2 (“there is one objectively measurable value system, namely mine”).
The trouble with Eliezer’s “metaethics” sequence is that it’s written in character (as a human), and something called “metaethics” shouldn’t be.
It is not obvious to me that this is the case.
[edit to expand]: I think that when a cognitivist claims “I’m not a relativist,” they need a position like 3 to point to as relativism. Perhaps it was an overreach to use ‘value system’ instead of ‘morality’ in the description of 3; that choice was driven more by my allergy to the word ‘morality’ than by a desire to be correct or communicative.
One could be certain that God’s morality is correct, but be uncertain what God’s morality is.
I agree with this assessment.
Yes. He has strong intuitions that his own moral intuitions are really true, combined with strong intuitions that morality is this very localized, human thing that doesn’t exist elsewhere. So he defines morality as what humans think morality is... what I don’t know isn’t knowledge.
People always write in character. If you try to use a definition of “morality” different from the normal one when talking about metaethics, you’ll reach the wrong conclusions because, y’know, you’re quite literally not talking about morality any more.
Language is different from metalanguage, even if both are (in) English.
You shouldn’t be using any definition of “morality” when talking about metaethics, because on that level the definition of “morality” isn’t fixed; that’s what makes it meta.
My complaint about the sequence is that it should have been about the orthogonality thesis, but instead ended up being about rigid designation.
You should use a definition, but one that doesn’t beg the question.
I can’t make sense of that. Isn’t the whole point of metaethics to create an account of what this morality stuff is (if it’s anything at all) and how the word “morality” manages to refer to it? If metaethics wasn’t about morality it wouldn’t be called metaethics, it would be called, I dunno, “decision theory” or something.
And if it is about morality, it’s unclear how you’re supposed to refer to the subject matter (morality) without saying “morality”. Or the other subject matter (the word “morality”) to which you fail to refer if you start talking about a made-up word that’s also spelled “m o r a l i t y” but isn’t the word people actually use.
I remember it as being about both. (exhibit 1, exhibit 2. The latter was written before EY had heard of rigid designators, though. It could probably be improved these days.)
Agreed. What I should do is a separate thing from what you should do, even though they’re the same type of thing and may be similar in many ways.
What you morally should do to me has to take me into account, and vice versa. Otherwise you are using morality to mean hedonism.
In one sense, this is trivial. I have to take you into account when I do something to you, just like I have to take rocks into account when I do something to them. You’re part of a state of the world. (It may be that taking rocks into account doesn’t affect my decision in any way, but my decision can still be formulated as taking rocks into account.)
In another sense, whether I should take your well-being into account depends on my values. If I’m Clippy, then I shouldn’t. If I’m me, then I should.
Hedonism makes action-guiding claims about what you should do, so it’s a form of morality, but it doesn’t by itself mean that I shouldn’t take you into account—it only means that I should take your well-being into account instrumentally, to the degree it gives me pleasure. Also, the fulfillment of one’s values is not synonymous with hedonism. A being incapable of experiencing pleasure, such as some form of Clippy, has values but acting to fulfill them would not be hedonism.
Whether or not you morally-should take me into account does not depend on your values; it depends on what the correct theory of morality is. “Should” is not an unambiguous term with a free variable for “to whom”. It is an ambiguous term, and morally-should is not hedonistically-should, is not practically-should, etc.
Unless the correct theory of morality is that morally-should is the same thing as practically-should, in which case it would depend on your values.
A sentence beginning “unless the correct theory is...” does not refute a sentence that includes “depends on what the correct theory is”.
If the correct theory of morality is that morally-should is the same as practically-should, then “whether or not you morally-should take me into account does not depend on your values” is false.
Whether or not morality depends on your values depends on what the correct theory of morality is.
Saying it’s true-for-me-but-not-for-you conflates two very different things: truth being agent-relative and descriptive statements about agents being true or false depending on the agent they’re referring to. “X is 6 feet tall” is true when X is someone who’s 6 feet tall and false when X is someone who’s 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar—“X is the right thing for TheAncientGeek to do” is an objectively true (or false) statement, regardless of who’s evaluating you. Encountering “X is the right thing to do if you’re Person A and the wrong thing to do if you’re Person B” and concluding that morality is subjective is the same sort of mistake as if you encountered the statement “Person A is 6 feet tall and Person B is not 6 feet tall” and concluded that height is subjective.
See my other reply.
Indexing statements about individuals to individuals is harmless. Subjectivity comes in when you index statements about something else to individuals.
Morally relevant actions are actions which potentially affect others.
Your morality machine is subjective because I don’t need to feed in anyone else’s preferences, even though my actions will affect them.
Other people’s preferences are part of states of the world, and states of the world are fed into the machine.
Not part of the original spec!!!
Fair enough. In that case, the machine would tell you something like “Find out expected states of the world. If it’s A, do X. If it’s B, do Y”.
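To make the “machine” talk above concrete, here is a minimal, purely illustrative sketch in Python, assuming the machine is just a fixed expected-value calculation over an agent’s values and the expected states of the world. None of the names below come from the thread; they are hypothetical stand-ins.

```python
# Illustrative sketch only: one fixed decision procedure, applied to
# different agents' values. All names here are hypothetical.

def morality_machine(values, world_states, actions):
    """Return the action that best satisfies `values` over expected world states.

    The procedure is the same for every agent (the sense of 'objective'
    being claimed above); only the `values` argument differs per agent.
    """
    def expected_value(action):
        # Other people's preferences are simply part of each world state,
        # so they enter the calculation whether or not this agent cares.
        return sum(prob * values(action, state) for state, prob in world_states)

    return max(actions, key=expected_value)


# Two agents, same machine, different value inputs, different outputs:
human_values = lambda action, state: state["well_being"] - (action == "harm")
clippy_values = lambda action, state: state["paperclips"] + (action == "make_clips")

states = [({"well_being": 1.0, "paperclips": 0.0}, 0.5),
          ({"well_being": 0.2, "paperclips": 3.0}, 0.5)]
actions = ["help", "harm", "make_clips"]

print(morality_machine(human_values, states, actions))   # e.g. "help"
print(morality_machine(clippy_values, states, actions))  # e.g. "make_clips"
```

On this toy model, the disputed questions above become: is it legitimate for `values` to vary per agent, and is it enough that other people’s preferences enter only through the states of the world?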
It may well, but that is a less interesting and less contentious claim. It’s fairly widely accepted that the sum total of ethics is inferable from (supervenes on) the sum total of facts.
Not so! Rather, “X is the right thing for TheAncientGeek to do given TheAncientGeek’s values” is an objectively true (or false) statement. But “X is the right thing for TheAncientGeek to do” tout court is not; it depends on a specific value system being implicitly understood.
“X is the right thing for TheAncientGeek to do” is synonymous with “X is the right thing for TheAncientGeek to do according to his (reflectively consistent) values”. You may not want him to act in accordance with his values, but that doesn’t change the fact that he should—much like in the standard analysis of the prisoner’s dilemma, each prisoner wants the other to cooperate, but has to admit that each of them should defect.
Same mistake. Only actions that affect others are morally relevant, from which it follows that rightness cannot be evaluated from one person’s values alone.
Maximizing one’s values solipsistically is hedonism, not morality.
Notice I didn’t use the term “morality” in the grandparent. Cf. my other comment.
But the umpteenth grandparent was explicitly about morality.