Yes—and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view.
No, it’s not. The naive, common-sense human view is that sneaking into Jane’s tent while she’s not there and stealing her water-gourd is “wrong”. People don’t end up talking about transcendent ineffable stuff until they have pursued bad philosophy for a considerable length of time. And the conclusion—that you can make murder right without changing the murder itself but by changing a sort of ineffable stuff that makes the murder wrong—is one that, once the implications are put baldly, squarely disagrees with naive moralism. It is an attempt to rescue a naive misunderstanding of the subject matter of mind and ontology, at the expense of naive morality.
What makes the theory relativist is simply the fact that it refers explicitly to particular agents—humans.
Why should we do what we prefer rather than what they prefer? The correct answer is, of course, “because that’s what we prefer”.
See above. The correct answer is “Because children shouldn’t die, they should live and be happy and have fun.” Note the lack of any reference to humans—this is the sort of logical fact that humans find compelling, but it is not a logical fact about humans. It is a physical fact that I find that logic compelling, but this physical fact is not, itself, the sort of fact that I find compelling.
This is the part of the problem which I find myself unable to explain well to the LessWrongians who self-identify as moral non-realists. It is, admittedly, more subtle than the point about there not being transcendent ineffable stuff, but still, there is a further point and y’all don’t seem to be getting it...
I agree that this constitutes relativism, and deny that I am a relativist.
It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.
I have the same feeling, from the other direction.
I feel like I completely understand the error you’re warning against in No License To Be Human; if I’m making a mistake, it’s not that one. I totally get that “right”, as you use it, is a rigid designator; if you changed humans, that wouldn’t change what’s right. Fine. The fact remains, however, that “right” is a highly specific, information-theoretically complex computation. You have to look in a specific, narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn’t decide to single out and call “right”, and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, “This is a nice thing we’ve got going here; let’s preserve it.”
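To make concrete exactly what I’m granting, here’s a toy Python sketch (the names and the stub “computation” are entirely my own invention, purely illustrative): “right” rigidly designates one fixed function, while human judgment is a separate function that happens to track it at present; editing the humans changes the latter, never the former.

```python
# Purely illustrative sketch (hypothetical names, toy stub): "right" rigidly
# designates one particular computation; "what humans would judge" is a
# different function that merely coincides with it at present.

def right(world):
    # Stand-in for the highly specific, complex computation the word picks
    # out; the real thing would be enormously larger than this stub.
    return world["children_flourish"]

def judgment_of(humans, world):
    # What a given (possibly modified) population of humans would say.
    return humans(world)

world = {"children_flourish": True}

# Present-day humans happen to compute the same verdict as `right`:
present_humans = right
assert judgment_of(present_humans, world) == right(world)

# Rewire the humans and *their* verdict flips, but `right` is untouched;
# changing the humans does not change what's right:
rewired_humans = lambda w: not right(w)
assert judgment_of(rewired_humans, world) != right(world)
assert right(world) is True
```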
Yes, of course that doesn’t constitute a general license to look at the brains of whatever species you happen to be a member of to decide what’s “right”; if the Babyeaters or Pebblesorters did this, they’d get the wrong answer. But that doesn’t change the fact that there’s no way to convince Babyeaters or Pebblesorters to be interested in “rightness” rather than babyeating or primality. It is this lack of a totally-neutral, agent-independent persuasion route that is responsible for the fundamentally relative nature of morality.
And yes, of course, it’s a mistake to expect to find any argument that would convince every mind, or an ideal philosopher of perfect emptiness—that’s why moral realism is a mistake!
I promise to take it seriously if you need to refer to Löb’s theorem in your response. I once understood your cartoon guide and could again if need be.
If we concede that when people say “wrong”, they’re referring to the output of a particular function to which we don’t have direct access, doesn’t the problem still arise when we ask how to identify what function that is? In order to pin down what it is that we’re looking for, in order to get any information about it, we have to interview human subjects. Out of all the possible judgment-specifying functions out there, what’s special about this one is precisely the relationship humans have with it.
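Here’s a toy sketch of that identification problem (the candidate functions and verdicts are invented for illustration, not anyone’s actual proposal): given many judgment-specifying functions, the only handle we have for singling out “the” wrongness function is agreement with interviewed humans on sample cases.

```python
# Toy sketch (candidates and verdicts are hypothetical): out of a space of
# judgment-specifying functions, the only available procedure for picking
# out "the" wrongness function is matching against human verdicts.

candidates = {
    "rightness":  lambda act: act != "steal_water_gourd",
    "babyeating": lambda act: act == "eat_babies",
    "primality":  lambda act: len(act) in (2, 3, 5, 7),
}

# The only evidence we can get: verdicts elicited from human subjects.
human_verdicts = {"steal_water_gourd": False, "share_water": True}

def pin_down(candidates, verdicts):
    # Select whichever candidate reproduces the human data.
    for name, f in candidates.items():
        if all(f(act) == ok for act, ok in verdicts.items()):
            return name

assert pin_down(candidates, human_verdicts) == "rightness"
# The selection criterion itself refers to humans; that relationship is
# what picks this function out of computation-space.
```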