Another type of scenario involves minorities. Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that). Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%. Who, in this scenario, is moral, and who’s immoral? The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil. After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone. Of course, for much of human history, this is precisely how morality worked, in many people’s minds. But I dare say it’s a result that would make moderns uncomfortable.
There’s a crucial observation that I took for granted in the post but shouldn’t have, so let me now make it explicit. The observation is this:
No system for aggregating preferences whatsoever—neither direct democracy, nor representative democracy, nor eigendemocracy, nor anything else—can possibly deal with the “Nazi Germany problem,” wherein basically an entire society’s value system becomes inverted to the point where evil is good and good evil.
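To make the 98%/2% verdict concrete, here is a minimal sketch of the kind of calculation involved. The recursions below are my own simplified paraphrase of eigenjesus and eigenmoses (being nice to good players makes you good; under eigenmoses, being mean to bad players also counts in your favor), not the exact definitions from the post, and the numbers are just the scenario above.

```python
# Minimal sketch of the 98%/2% scenario, assuming simplified eigenjesus/eigenmoses
# recursions (not the post's exact definitions): your score is a fixed point of
# "how nice you are to players, weighted by their scores".
import numpy as np

n, minority = 100, 2
group = np.array([1] * (n - minority) + [0] * minority)  # 1 = majority, 0 = minority

# players are nice within their own group, mean across groups
C = (group[:, None] == group[None, :]).astype(float)  # 1 = nice, 0 = mean
N = 2 * C - 1                                         # +1 = nice, -1 = mean

def power_iterate(M, iters=1000):
    """Fixed point of score <- M @ score, starting from the uniform prior."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.max(np.abs(v))  # renormalize each step to avoid overflow
    return v

eigenjesus = power_iterate(C)  # credit only for being nice to good players
eigenmoses = power_iterate(N)  # also credit for being mean to bad players

print(eigenjesus[0], eigenmoses[0])    # majority member:  1.0,  1.0
print(eigenjesus[-1], eigenmoses[-1])  # minority member: ~0.0, -1.0
```

Under either score the 2% end up at the bottom, exactly as described.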
By the way, this is also related to the argument in “Well-Kept Gardens Die By Pacifism”. When we design a system for moderating a web community, we are choosing between “order” and “chaos”, not between “good” and “evil”.
We can move the power to a moderator, to some inner circle of users, to the most active users, even to the users with the most sockpuppets, but we can’t just move it to “good”. We can choose which kind of people or which kind of behavior gets the most power, but we can’t make the power magically disappear if they try to abuse it, because any rule designed to prevent abuse can also be abused. The values have to come from outside the voting system, from the humans who use it. So in the end, the only reasonable choice is to design the system to preserve the existing power, whatever it is—allowing change only when it is initiated by the currently existing power—because the only alternative is to let forces from outside the garden optimize for their values, again, whatever they are, not only the “good” ones. And yes, if the web community had horrible values at the beginning, a proper moderating system will preserve them. That’s not a bug; that’s a side-effect of a feature. (Luckily, on the web, you have the easy option of leaving the community.)
In this sense, we have to realize that the eigen-whatever system proposed in the article, if designed correctly (how to do this specifically is still open to discussion), would capture something like “the applause lights of the majority of the influential people”, or something similar. If the “majority of the influential people” are evil, or just plain stupid, the eigen-result can easily contain evil or stupidity. It almost certainly contains religion and other irrationality. At best, this system is a useful tool for seeing what the “majority of influential people” think morality is (as V_V said), which is itself a very nice result for a mathematical equation, but I wouldn’t feel immoral for disagreeing with it on some specific points. Also, it misses the “extrapolated” part of the CEV; for example, if people’s moral opinions are based on incorrect or confused beliefs, the result will contain morality based on incorrect beliefs, so it could recommend doing both X and Y, where X and Y are contradictory.
Well yes, and attempting to group all actual or possible individuals into one tribe is a major mistake, one that I think should be given a name. Well, as it turns out, the name I was already going to give it is at least partially in use: False Universalism.
Ethics ought to include some kind of reasoning for determining when some bit of universalism (some universalization of a maxim, in the Kantian or Timeless sense, or some value cohering, in the CEV sense) has become False Universalism, so that groups or individuals who diverge from each other to the point of incompatibility can be handled as conflicting, rather than simply having the ethical algorithm return the answer that one side is Right, the other is Wrong, and the Wrong shall be corrected until they follow the values of the Right.
“Handled as conflicting” seems to either mean “all-out war” or at best “temporary putting off of all-out war until we’ve used all the atoms on our side of the universe”.
If the two sides shared your desire to be symmetrically peaceful with other sides whose only point of similarity with them was the desire to be symmetrically peaceful with other sides whose… then Universalism isn’t false. That’s its minimal case.
And if it does fail, it seems counterproductive for you to point that out to us, because while we’re happily and deludedly trying to apply it, we’re not genociding each other all over your lawn.
Sorry, when I said “False Universalism”, I meant things like, “one group wants to have kings, and another wants parliamentary democracy”. Or “one group wants chocolate, and the other wants vanilla”. Common moral algorithms seem to simply assume that the majority wins, so if the majority wants chocolate, everyone gets chocolate. Moral constructionism gets around this by saying: values may not be universal, but we can come to game-theoretically sound agreements (even if they’re only Timelessly sound, like Rawls’ Theory of Justice) on how to handle the disagreements productively, thus wasting fewer resources on fighting each other when we could be spending them on Fun.
Basically, I think the correct moral algorithm is: use a constructionist algorithm to cluster people into groups who can then use realist universalisms internally.
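A toy version of that clustering step, just to make the proposal concrete (the similarity measure and threshold are arbitrary choices of mine, not anything from the thread): group agents whose value vectors are compatible, and then run whatever realist aggregation you like inside each group.

```python
# Toy sketch of "cluster first, then aggregate internally": put agents in the same
# group when their value vectors are compatible (cosine similarity >= threshold,
# an arbitrary choice of mine), then run the realist aggregation within each group.
import numpy as np

def compatibility_clusters(values, threshold=0.5):
    """Connected components of the 'compatible values' graph."""
    n = len(values)
    norms = np.linalg.norm(values, axis=1)
    sims = values @ values.T / (norms[:, None] * norms[None, :])
    adjacent = sims >= threshold
    labels, current = [-1] * n, 0
    for start in range(n):
        if labels[start] != -1:
            continue
        stack = [start]  # flood-fill one component
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = current
            stack.extend(j for j in range(n) if adjacent[i, j] and labels[j] == -1)
        current += 1
    return labels

# toy data: a "kings" bloc, a "parliament" bloc, and one incompatible holdout
values = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9], [-1.0, -1.0]])
print(compatibility_clusters(values))  # [0, 0, 1, 1, 2]
```

Within each cluster you could then run something like the eigen-scores above; across clusters you would fall back on negotiated, game-theoretic agreements.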
If we’re aggregating cooperation rather than aggregating values, we certainly can create a system that distinguishes between societies that apply an extreme level of noncooperation (i.e. killing) to larger groups of people than other societies, and that uses our own definition of noncooperation rather than what the Nazi values judge as noncooperation.
That’s not to say you couldn’t still find tricky example societies where the system’s evaluation isn’t doing what we want; I just mean to encourage further improvement to cover moral behaviour towards and from hated minorities, and in actual Nazi Germany.
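One way to cash that out (my construction, purely illustrative, not the article’s scheme): rate each society’s behaviour on a severity scale that we fix from the outside, so that killing counts as extreme noncooperation regardless of how the society in question labels it.

```python
# Illustrative sketch (my construction): rate each society by an externally fixed
# severity scale, so "kill" counts as extreme noncooperation no matter what the
# acting society itself calls it. Acts and weights are made up.

SEVERITY = {"cooperate": 0.0, "shun": 0.3, "imprison": 0.7, "kill": 1.0}  # our scale

def noncooperation_score(society):
    """Severity of a society's behaviour, weighted by how many people each act hits.

    `society` is a list of (act, people_affected) pairs; the labels that society
    itself would use for those acts are deliberately ignored.
    """
    total = sum(count for _, count in society)
    if total == 0:
        return 0.0
    return sum(SEVERITY[act] * count for act, count in society) / total

# toy comparison: shunning a small minority vs. industrial-scale killing
society_a = [("cooperate", 98), ("shun", 2)]
society_b = [("cooperate", 60), ("kill", 40)]
print(noncooperation_score(society_a))  # 0.006
print(noncooperation_score(society_b))  # 0.4
```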
But his own scheme isn’t the aggregation of arbitrary values; it’s based on rewarding cooperation.
Perhaps in-group problems could be fixed with an eigenSinger algorithm that gives extra points to those who cooperate with people they have not cooperated with before, i.e. widening the circle.
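For what it’s worth, here is one way such a bonus could be wired in (the name, the bonus size, and the recursion are all guesses on my part, not a worked-out proposal): first-time cooperation with a partner earns extra credit, and an eigenjesus-style recursion then runs on the bonus-weighted credit matrix.

```python
# Hypothetical "eigenSinger" tweak: cooperation with a partner you have never
# cooperated with before earns a novelty bonus, so widening the circle pays more
# than farming the in-group. Bonus size and recursion are illustrative guesses.
import numpy as np

def eigensinger(history, novelty_bonus=0.5, iters=200):
    """history: list of per-round 0/1 cooperation matrices, each of shape (n, n)."""
    n = history[0].shape[0]
    credit = np.zeros((n, n))
    seen = np.zeros((n, n), dtype=bool)  # has i cooperated with j in an earlier round?
    for C in history:
        credit += C * np.where(seen, 1.0, 1.0 + novelty_bonus)  # bonus for new partners
        seen |= C.astype(bool)
    score = np.ones(n)
    for _ in range(iters):  # eigenjesus-style recursion on the bonus-weighted credit
        score = credit @ score
        score /= np.max(np.abs(score))
    return score

# toy history: player 0 reaches out to a new partner (player 2) in round 2,
# while player 2 only ever repeats its existing cooperation with player 1
rounds = [np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float),
          np.array([[0, 1, 1], [1, 0, 1], [0, 1, 0]], dtype=float)]
print(eigensinger(rounds))  # player 0 ends up scoring above player 2
```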