The Bedrock of Morality: Arbitrary?
Followup to: Is Fairness Arbitrary?, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps
Yesterday, I presented the idea that when only five people are present, having just stumbled across a pie in the woods (a naturally growing pie that just popped out of the ground), then it is fair to give Dennis only 1/5th of this pie, even if Dennis persistently claims that it is fair for him to get the whole thing. Furthermore, it is meta-fair to follow such a symmetrical division procedure, even if Dennis insists that he ought to dictate the division procedure.
Fair, meta-fair, or meta-meta-fair, there is no level of fairness where you’re obliged to concede everything to Dennis, without reciprocation or compensation, just because he demands it.
Which goes to say that fairness has a meaning beyond “that which everyone can be convinced is ‘fair’”. That would be an empty proposition, isomorphic to “Xyblz is that which everyone can be convinced is ‘xyblz’”. There must be some specific thing of which people are being convinced; and once you identify that thing, it has a meaning beyond agreements and convincing.
You’re not introducing something arbitrary, something un-fair, in refusing to concede everything to Dennis. You are being fair, and meta-fair and meta-meta-fair. As far up as you go, there’s no level that calls for unconditional surrender. The stars do not judge between you and Dennis—but it is baked into the very question that is asked, when you ask, “What is fair?” as opposed to “What is xyblz?”
Ah, but why should you be fair, rather than xyblz? Let us concede that Dennis cannot validly persuade us, on any level, that it is fair for him to dictate terms and give himself the whole pie; but perhaps he could argue about whether we should be fair?
The hidden agenda of the whole discussion of fairness, of course, is that good-ness and right-ness and should-ness ground out similarly to fairness.
Natural selection optimizes for inclusive genetic fitness. This is not a disagreement with humans about what is good. It is simply that natural selection does not do what is good: it optimizes for inclusive genetic fitness.
Well, since some optimization processes optimize for inclusive genetic fitness, instead of what is good, which should we do, ourselves?
I know my answer to this question. It has something to do with natural selection being a terribly wasteful and stupid and inefficient process. It has something to do with elephants starving to death in their old age when they wear out their last set of teeth. It has something to do with natural selection never choosing a single act of mercy, of grace, even when it would cost its purpose nothing: not auto-anesthetizing a wounded and dying gazelle, when its pain no longer serves even the adaptive purpose that first created pain. Evolution had to happen sometime in the history of the universe, because that’s the only way that intelligence could first come into being, without brains to make brains; but now that era is over, and good riddance.
But most of all—why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good? What is even the appeal of this, morally or otherwise? At all? I know people who claim to think like this, and I wonder what wrong turn they made in their cognitive history, and I wonder how to get them to snap out of it.
When we take a step back from fairness, and ask if we should be fair, the answer may not always be yes. Maybe sometimes we should be merciful. But if you ask if it is meta-fair to be fair, the answer will generally be yes. Even if someone else wants you to be unfair in their favor, or claims to disagree about what is “fair”, it will still generally be meta-fair to be fair, even if you can’t make the Other agree. By the same token, if you ask if we meta-should do what we should, rather than something else, the answer is yes. Even if some other agent or optimization process does not do what is right, that doesn’t change what is meta-right.
And this is not “arbitrary” in the sense of rolling dice, not “arbitrary” in the sense that justification is expected and then not found. The accusations that I level against evolution are not merely pulled from a hat; they are expressions of morality as I understand it. They are merely moral, and there is nothing mere about that.
In “Arbitrary” I finished by saying:

The upshot is that differently structured minds may well label different propositions with their analogues of the internal label “arbitrary”—though only one of these labels is what you mean when you say “arbitrary”, so you and these other agents do not really have a disagreement.
This was to help shake people loose of the idea that if any two possible minds can say or do different things, then it must all be arbitrary. Different minds may have different ideas of what’s “arbitrary”, so clearly this whole business of “arbitrariness” is arbitrary, and we should ignore it. After all, Sinned (the anti-Dennis) just always says “Morality isn’t arbitrary!” no matter how you try to persuade her otherwise, so clearly you’re just being arbitrary in saying that morality is arbitrary.
From the perspective of a human, saying that one should sort pebbles into prime-numbered heaps is arbitrary—it’s the sort of act you’d expect to come with a justification attached, but there isn’t any justification.
From the perspective of a Pebblesorter, saying that one p-should scatter a heap of 38 pebbles into two heaps of 19 pebbles is not p-arbitrary at all—it’s the most p-important thing in the world, and fully p-justified by the intuitively obvious fact that a heap of 19 pebbles is p-correct and a heap of 38 pebbles is not.
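(If it helps to see the Pebblesorters’ standard with the p-profundity stripped away: p-correctness of a heap is nothing more than primality of its size. Here is a minimal sketch in Python; the function name p_correct is my own hypothetical label, not anything a Pebblesorter would endorse.)

    def p_correct(heap_size):
        """A heap is p-correct iff its size is a prime number."""
        if heap_size < 2:
            return False
        return all(heap_size % d != 0 for d in range(2, int(heap_size ** 0.5) + 1))

    # A heap of 19 pebbles is p-correct; a heap of 38 is not,
    # and p-should be scattered into two heaps of 19.
    assert p_correct(19)
    assert not p_correct(38)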
So which perspective should we adopt? I answer that I see no reason at all why I should start sorting pebble-heaps. It strikes me as a completely pointless activity. Better to engage in art, or music, or science, or heck, better to connive political plots of terrifying dark elegance, than to sort pebbles into prime-numbered heaps. A galaxy transformed into pebbles and sorted into prime-numbered heaps would be just plain boring.
The Pebblesorters, of course, would only reason that music is p-pointless because it doesn’t help you sort pebbles into heaps; the human activity of humor is not only p-pointless but just plain p-bizarre and p-incomprehensible; and most of all, the human vision of a galaxy in which agents are running around experiencing positive reinforcement but not sorting any pebbles, is a vision of an utterly p-arbitrary galaxy devoid of p-purpose. The Pebblesorters would gladly sacrifice their lives to create a P-Friendly AI that sorted the galaxy on their behalf; it would be the most p-profound statement they could make about the p-meaning of their lives.
So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right. I do not know perfectly what is right, but neither can I plead entire ignorance.
And the Pebblesorters, who simply are not built to do what is right, choose the Pebblesorting perspective: not merely because it is theirs, or because they think they can get away with being p-arbitrary, but because that is what is p-right.
And in fact, both we and the Pebblesorters can agree on all these points. We can agree that sorting pebbles into prime-numbered heaps is arbitrary and unjustified, but not p-arbitrary or p-unjustified; that it is the sort of thing an agent p-should do, but not the sort of thing an agent should do.
I fully expect that even if there is other life in the universe only a few trillions of lightyears away (I don’t think it’s local, or we would have seen it by now), we humans are the only creatures for a long, long way indeed who are built to do what is right. That may be a moral miracle, but it is not a causal miracle.
There may be some other evolved races, a sizable fraction perhaps, maybe even a majority, who do some right things. Our executing adaptation of compassion is not so far removed from the game theory that gave it birth; it might be a common adaptation. But laughter, I suspect, may be rarer by far than mercy. What would a galactic civilization be like, if it had sympathy, but never a moment of humor? A little more boring, perhaps, by our standards.
This humanity that we find ourselves in is a great gift. It may not be a great p-gift, but who cares about p-gifts?
So I really must deny the charges of moral relativism: I don’t think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that. We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don’t. Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don’t. Human morality is p-arbitrary, but who cares? P-arbitrariness is arbitrary.
You’ve just got to avoid thinking that the words “better” and “p-better”, or “moral” and “p-moral”, are talking about the same thing—because then you might think that the Pebblesorters were coming to different conclusions than us about the same thing—and then you might be tempted to think that our own morals were arbitrary. Which, of course, they’re not.
Yes, I really truly do believe that humanity is better than the Pebblesorters! I am not being sarcastic; I really do believe that. I am not playing games by redefining “good” or “arbitrary”; I think I mean the same thing by those terms as everyone else. When you understand that I am genuinely sincere about that, you will understand my metaethics. I really don’t consider myself a moral relativist—not even in the slightest!
Part of The Metaethics Sequence
Next post: “You Provably Can’t Trust Yourself”
Previous post: “Is Fairness Arbitrary?”