I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
Having read the Meta-Ethics sequence, this is my belief too. Indeed, Eliezer takes care to index the human-evaluation and pebblesorter-evaluation algorithms, calling the first “morality” and the second “pebblesorting”, but he is careful to avoid talking about Eliezer-morality and MrMind-morality, or even Eliezer-yesterday-morality and Eliezer-today-morality. Of course his aims were different, and compared to differently evolved aliens (or AIs) our morality is truly one of a kind.
But if we magnify our view of morality-space, I think it’s impossible not to recognize that there are differences!
I think this state of affairs can be explained in the following way: while there is a psychological unity of mankind, it concerns only very primitive aspects of our lives: the existence of joy and sadness, the importance of sex, etc. But our innermost, basic evaluation algorithm doesn’t cover every aspect of our lives, mainly because our culture poses problems too new for a genetic solution to have spread through the whole population. Thus ad hoc solutions, derived from culture and circumstances, step in: justice, fairness, laws, and so on. These solutions may very well vary across time and space, and, our brains being what they are, they sometimes overwrite what should have been the most primitive output. When we talk about morality, we are usually already assuming the most primitive, basic facts about the human evaluation algorithm, and we try to argue about the finer points not covered by the genetic wiring of our brains, for example whether murder is always wrong.
In comparison with pebble-sorters or a paperclipping AI, humanity exhibits a very narrow way of evaluating reality, to the point that you can talk about a single human algorithm and call it “morality”. But if you zoom in, it is clear that the bedrock of morality doesn’t cover every problem that cultures naturally throw at people, and that’s why you need to invent “patches” or “add-ons” to the original algorithm, in the form of moral concepts like justice, fairness, the sanctity of life, etc. Obviously, different groups of people will come up with different patches. But some add-ons were invented so long ago, and are now so widespread and ingrained in certain groups’ education, that they feel as if they are part of the original primitive morality, while in fact they are not. There are also new problems that require the (sometimes urgent) invention of new patches (e.g. nuclear proliferation, genetic manipulation, birth control), and these are even more problematic and still in a state of transition today.
Is this view unitary, or even realist? In my opinion, the standard philosophical distinctions are too crude and simplistic to correctly categorize this view of morality as “algorithm + local patches”. Maybe it needs a whole new category of its own, something like “algorithmic theories of morality” (although the category of “synthetic ethical naturalism” comes close to capturing the concept).