Different people seem to find different parts of this counterintuitive.
And some people simply disagree with you. Some people say, for example, that ‘they don’t have a universal meaning’. They assert that ‘should’ claims are not claims that have truth value, and allow that the value depends on the person speaking. They find this quite intuitive, and even go so far as to create words such as ‘normative’ and ‘subjective’ to describe these concepts when talking to each other.
It is not likely that aliens, for example, have the concept ‘should’ at all, and so it is likely that other words will be needed. The Babyeaters, as described, seem to be using a concept sufficiently similar as to be within the variability of use among humans. ‘Should’ and ‘good’ would not be particularly poor translations; about the same as using, say, ‘tribe’ or ‘herd’.
Okay, then these are the people I’m arguing against, as a view of morality. I’m arguing that, say, dragging 6-year-olds off the train tracks, as opposed to eating them for lunch, is every bit as much uniquely the right answer as it looks; and that the Space Cannibals are every bit as awful as they look; and that the aliens do not have a different view of the subject, but simply a view of a different subject.
As an intuition pump, it might help to imagine someone saying that “truth” has different values in different places and that we want to parameterize it by true_1 and true_2. If Islam has a sufficiently different criterion for using the word “true”, i.e. “recorded in the Koran”, then we just want to say “recorded in the Koran”, not use the word “true”.
Another way of looking at it is that if we are not allowed to use the word “right” or any of its synonyms, at all, a la Empty Labels and Rationalist Taboo and Replace the Symbol with the Substance, then the new language that we are forced to use will no longer create the illusion that we and the aliens are talking about the same thing. (Like forcing people from two different spiritual traditions to say what they think exists without using the word “God”, thus eliminating the illusion of agreement.) And once you realize that we and the aliens are not talking about the same thing at all, and have no disagreement over the same subject, you are no longer tempted to try to relativize morality.
It’s all very well to tell me that I should stop arguing over definitions, but I seem to be at a loss to make people understand what I am trying to say here. You are, of course, welcome to tell me that this is my fault; but it is somewhat disconcerting to find everyone saying that they agree with me, while continuing to disagree with each other.
I disagree with you about what “should” means, and I’m not even a Space Cannibal. Or do I? Are you committed to saying that I, too, am talking past you if I type “should” to sincerely refer to things?
Are you basically declaring yourself impossible to disagree with?
Do you think we’re asking sufficiently different questions such that they would be expected to have different answers in the first place? How could you know?
Humans, especially humans from an Enlightenment tradition, I presume by default to be talking about the same thing as me—we share a lot of motivations and might share even more in the limit of perfect knowledge and perfect reflection. So when we appear to disagree, I assume by default and as a matter of courtesy that we are disagreeing about the answer to the same question or to questions sufficiently similar that they could normally be expected to have almost the same answer. And so we argue, and try to share thoughts.
With aliens, there might be some overlap—or might not; a starfish is pretty different from a mammal, and that’s just on Earth. With paperclip maximizers, they are simply not asking our question or anything like that question. And so there is no point in arguing, for there is no disagreement to argue about. It would be like arguing with natural selection. Evolution does not work like you do, and it does not choose actions the way you do, and it was not disagreeing with you about anything when it sentenced you to die of old age. It’s not that evolution is a less authoritative source, but that it is not saying anything at all about the morality of aging. Consider how many bioconservatives cannot understand the last sentence; it may help convey why this point is both metaethically important and intuitively difficult.
Do you think we’re asking sufficiently different questions such that they would be expected to have different answers in the first place? How could you know?
I really do not know. Our disagreements on ethics are definitely nontrivial—the structure of consequentialism inspires you to look at a completely different set of sub-questions than the ones I’d use to determine the nature of morality. That might mean that (at least) one of us is taking the wrong tack on a shared question, or that we’re asking different basic questions. We will arrive at superficially similar answers much of the time because “appeal to intuition” is considered a legitimate move in ethics and we have some similar intuitions about the kinds of answers we want to arrive at.
I think you are right that paperclip maximizers would not care at all about ethics. Babyeaters, though, seem like they do, and it’s not even completely obvious to me that the gulf between me and a babyeater (in methodology, not in result) is larger than the gulf between me and you. It looks to me a bit like you and I get to different parts of city A via bicycle and dirigible respectively, and then the babyeaters get to city B via kayak—yes, we humans have more similar destinations to each other than to the Space Cannibals, but the kind of journey undertaken seems at least as significant, and trying to compare a bike and a blimp and a boat is not an obviously approachable task.
Do you also find it suspicious that we could both arrive in the same city using different vehicles? Or that the answer to “how many socks is Alicorn wearing?” and the answer to “what is 6 − 4?” are the same? Or that one could correctly answer “yes” to the question “is there cheese in the fridge?” and the question “is it 4:30?” without meaning to use a completely different, non-yes word in either case?
Do you also find it suspicious that we could both arrive in the same city using different vehicles?
Not at all, if we started out by wanting to arrive in the same city.
And not at all, if I selected you as a point of comparison by looking around the city I was in at the time.
Otherwise, yes, very suspicious. Usually, when two randomly selected people in Earth’s population get into a car and drive somewhere, they arrive in different cities.
Or that the answer to “how many socks is Alicorn wearing?” and the answer to “what is 6 − 4?” are the same?
No, because you selected those two questions to have the same answer.
Or that one could correctly answer “yes” to the question “is there cheese in the fridge?” and the question “is it 4:30?” without meaning to use a completely different, non-yes word in either case?
Yes-or-no questions have a very small answer space so even if you hadn’t selected them to correlate, it would only be 1 bit of coincidence.
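The “1 bit of coincidence” arithmetic can be checked directly: two yes/no answers chosen independently and uniformly at random agree half the time, and a probability-1/2 event carries −log2(1/2) = 1 bit of surprisal. A minimal sketch (the uniform-independence model is my illustrative assumption, not something stated in the comment):

```python
import math
import random

random.seed(0)

# Two independent yes/no answers, each uniform: they agree with probability 1/2.
trials = 100_000
agreements = sum(
    random.choice(["yes", "no"]) == random.choice(["yes", "no"])
    for _ in range(trials)
)
p_agree = agreements / trials    # empirically close to 0.5

# Surprisal of a probability-1/2 event, measured in bits.
surprisal_bits = -math.log2(0.5)  # exactly 1.0

print(p_agree, surprisal_bits)
```

So even unselected yes/no questions give matching answers half the time; an agreement is only as surprising as one fair coin flip.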
The examples in the grandparent do seem to miss the point that Alicorn was originally describing.
I find it a suspicious coincidence that we should arrive at similar answers by asking dissimilar questions.
It is still surprising, but somewhat less so if our question-answering is a matter of finding descriptions for our hardwired intuitions. In that case, people with similar personalities can be expected to formulate question-answer pairs that differ mainly in their respective areas of awkwardness as descriptions of the territory.
Not at all, if we started out by wanting to arrive in the same city.
And we did exactly that (metaphorically speaking). I said:
We will arrive at superficially similar answers much of the time because “appeal to intuition” is considered a legitimate move in ethics and we have some similar intuitions about the kinds of answers we want to arrive at.
It seems to me that you and I ask dissimilar questions and arrive at superficially similar answers. (I say “superficially similar” because I consider the “because” clause in an ethical statement to be important—if you think you should pull the six-year-old off the train tracks because that maximizes your utility function and I think you should do it because the six-year-old is entitled to your protection on account of being a person, those are different answers, even if the six-year-old lives either way.) The babyeaters get more non-matching results in the “does the six-year-old live” department, but their questions—just about as important in comparing theories—are not (it seems to me) so much more different than yours and mine.
Everybody, in seeking a principled ethical theory, has to bite some bullets (or go on an endless Easter-epicycle hunt).
To me, this doesn’t seem like superficial similarity at all. I should sooner call the differences of verbal “because” superficial, and focus on that which actually produces the answer.
I think you should do it because the six-year-old is valuable and precious and irreplaceable, and if I had a utility function it would describe that. I’m not sure how this differs from what you’re doing, but I think it differs from what you think I’m doing.
I think you are right that paperclip maximizers would not care at all about ethics.
Correct. But neither would they ‘care’ about paperclips, under the way Eliezer’s pushing this idea. They would flarb about paperclips, and caring would be as alien to them as flarbing is to you.
Babyeaters, though, seem like they do, and it’s not even completely obvious to me that the gulf between me and a babyeater (in methodology, not in result) is larger than the gulf between me and you.
It’s all very well to tell me that I should stop arguing over definitions
You are arguing over definitions but it is useful. You make many posts that rely on these concepts so the definitions are relevant. That ‘you are just arguing semantics’ call is sometimes an irritating cached response.
but I seem to be at a loss to make people understand what I am trying to say here. You are, of course, welcome to tell me that this is my fault; but
You are making more than one claim here. The different-concept-alien stuff you have explained quite clearly (e.g. from the first semicolon onwards in the parent). This seems to be obviously true. The part before the semicolon is a different concept (probably two). Your posts have not given me the impression that you consider the truth issue distinct from the normativity issue and the subjectivity issue. You also included ‘objective morality’ in with ‘True’, ‘transcendental’ and ‘ultimate’ as things that have no meaning. I believe you are confused, and that your choice of definition for ‘should’ contributes to this.
it is somewhat disconcerting to find everyone saying that they agree with me
I say I disagree with a significant part of your position, although not the most important part.
, while continuing to disagree with each other.
I definitely disagree with Tim. I may agree with some of the others.
I agree with the claim you imply with the intuition pump. I disagree with the claim you imply when you are talking about ‘uniquely the right answer’. Your intuition pump does not describe the same concept that your description does.
; and that the aliens do not have a different view of the subject, but simply a view of a different subject.
This part does match the intuition pump but you are consistently conflating this concept with another (see uniquely-right true-value of girl treatment) in your posts in this thread. You are confused.
Correct. But neither would they ‘care’ about paperclips, under the way Eliezer’s pushing this idea. They would flarb about paperclips, and caring would be as alien to them as flarbing is to you.
I think some subset of paperclip maximizers might be said to care about paperclips. Not, most likely, all possible instances of them.
I had the same thought.