By the way, this is also related to the argument in “Well-Kept Gardens Die By Pacifism”. When we design a system for moderating a web community, we are choosing between “order” and “chaos”, not between “good” and “evil”.
We can move the power to a moderator, to some inner circle of users, to the most active users, or even to the users with the most sockpuppets, but we can’t simply move it to “good”. We can choose which kind of people or which kind of behavior gets the most power, but we can’t make that power magically disappear when they try to abuse it, because any rule designed to prevent abuse can itself be abused. The values have to come from outside the voting system, from the humans who use it. So in the end, the only reasonable choice is to design the system to preserve the existing power, whatever it is (allowing change only when it is initiated by the currently existing power), because the only alternative is to let forces from outside the garden optimize for their values, again whatever they are, not only the “good” ones. And yes, if the web community had horrible values at the beginning, a proper moderating system will preserve them. That’s not a bug; it’s a side effect of a feature. (Luckily, on the web, you have the easy option of leaving the community.)
In this sense, we have to realize that the eigen-whatever system proposed in the article, if designed correctly (how to do that specifically is still open to discussion), would capture something like “the applause lights of the majority of the influential people”. If the “majority of the influential people” are evil, or just plain stupid, the eigen-result can easily contain evil or stupidity. It almost certainly contains religion and other irrationality. At best, this system is a useful tool for seeing what the “majority of influential people” think morality is (as V_V said), which is in itself a very nice result for a mathematical equation, but I wouldn’t feel immoral for disagreeing with it on some specific points. It also misses the “extrapolated” part of CEV: if people’s moral opinions are based on incorrect or confused beliefs, the result will contain morality based on those incorrect beliefs, so it could recommend doing both X and Y even when X and Y contradict each other.
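To make the “dominant bloc wins” point concrete, here is a toy sketch (my own illustration, not the article’s actual method): treat mutual “moral endorsements” between users as a matrix and extract its principal eigenvector by power iteration. The matrix, bloc sizes, and endorsement pattern are all made up for the example; the point is only that the eigen-result concentrates its weight on the larger mutually-endorsing bloc, whatever that bloc happens to value.

```python
# Toy eigen-aggregation of mutual "moral endorsement" scores.
# Users 0-2 form a majority bloc endorsing only each other;
# users 3-4 form a minority bloc endorsing only each other.
# Power iteration converges to the principal eigenvector, whose
# weight ends up entirely on the larger bloc.

def power_iteration(matrix, steps=100):
    """Return the L1-normalized principal eigenvector of a square matrix."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(steps):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

# Endorsement matrix: entry [i][j] = 1 means user i endorses user j's morals.
M = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
]

weights = power_iteration(M)
print([round(w, 3) for w in weights])  # → [0.333, 0.333, 0.333, 0.0, 0.0]
```

The minority bloc’s influence shrinks by a constant factor each iteration and decays to zero, so whatever the dominant bloc endorses is what the eigen-result reports as “morality”. Nothing in the math checks whether those endorsed values are good.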