That is why it is right to speak of morality in general, and human morality in particular.
I prefer Eliezer’s way because it makes evident, when talking to someone who hasn’t read the Sequence, that there are different sets of self-consistent values; but that is an agreement people should reach before starting to debate, and I personally would have no problem talking about different moralities.
Eliezer believes that human values are intrinsically arbitrary
But does he? Because that would be demonstrably false. Maybe arbitrary in the sense of “occupying a tiny space in the whole set of all possible values”, but since our morality is shaped by evolution, it will surely contain some historical accidents but also a lot of useful heuristics. No human can value drinking poison, for example.
What is “good for us” is not arbitrary, but an objective fact about relationships between human nature and the world
If you were to unpack “good”, would you include other meanings besides “what helps our survival”?
“There are different sets of self-consistent values.” This is true, but I do not agree that all logically possible sets of self-consistent values represent moralities. For example, it would be logically possible for an animal to value nothing but killing itself; but this does not represent a morality, because such an animal cannot exist in reality in a stable manner. It cannot come into existence in a natural way (namely by evolution) at all, even though you might be able to produce one artificially. And if you do produce one artificially, it will just kill itself and then cease to exist.
This is part of what I was saying about how, when people use words differently, they hope to accomplish different things. I speak of morality in general not to mean “any logically consistent set of values,” but a set of values that could reasonably exist in the real world in a real intelligent being. In other words, restricting morality to human values is an indirect way of promoting the position that human values are arbitrary.
As I said, I don’t think Eliezer would accept that characterization of his position, and you give one reason why he would not. But he has a more general view on which the fact that only some sets of values are possible is due to merely accidental reasons, namely that it just happens that things cannot evolve in other ways. I would say the contrary: it is not an accident that the value of killing yourself cannot evolve; it cannot evolve because killing yourself is bad.
And this explains, in part, how “good” has to be unpacked. Good would be what tends to cause tendencies towards itself. Survival is one example, but not the only one, even though everything else will at least have to be consistent with that value. So, e.g., not only is survival valued by intelligent creatures in all realistic conditions, but so is knowledge. So knowledge and survival are both good for all intelligent creatures. But since different creatures will produce their knowledge and survival in different ways, different things will be good for them in relation to these ends.
Any virulently self-reproducing meme would be another.
This would be a long discussion, but there’s some truth in that, and some falsehood.