Good[1]: The human consensus on morality, the human CEV, the contents of a Friendly AI’s utility function, “sugar is sweet, love is good”. There is one correct definition of Good. “Pebblesorters do not care about good or evil, they care about grouping things into primes. Paperclippers do not care about good or evil, they care about paperclips.”
Good[2]: An individual’s morality, a special subset of an agent’s utility function (especially the subset that pertains to how everyone ought to act). “I feel sugar is yummy, but I don’t mind if you don’t agree. However, I feel love is good, and if you don’t agree we can’t be friends.”… “Pebblesorters think making prime-numbered pebble piles is good. Paperclippers think making paperclips is good.” (A pebblesorter might selfishly prefer to maximize the number of pebble piles that it makes itself, but the same pebblesorter believes everyone ought to act to maximize the total number of pebble piles rather than selfishly maximizing their own. A perfectly good pebblesorter seeks only to maximize prime-numbered piles. Selfish pebblesorters hoard resources to maximize their own personal pebble creation. Evil pebblesorters knowingly make non-prime abominations.)
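To make the two usages concrete, here is a minimal sketch in Python (all names and numbers are my own invention, not anything from Eliezer’s writing): Good[2] is whatever value function a given agent happens to have, while Good[1] rigidly points the word “good” at one privileged value function, here a stand-in for the human CEV.

```python
# Hypothetical sketch: Good[2] is agent-relative; Good[1] fixes "good"
# to one privileged function. All names and scoring rules are invented.

def human_cev(outcome):
    """Stand-in for the human consensus morality (Good[1])."""
    return outcome.get("love", 0) + outcome.get("sugar", 0)

def pebblesorter_values(outcome):
    """A pebblesorter scores an outcome by its prime-numbered piles."""
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
    return sum(1 for pile in outcome.get("piles", []) if is_prime(pile))

def paperclipper_values(outcome):
    """A paperclipper scores an outcome by its paperclip count."""
    return outcome.get("paperclips", 0)

agents = {
    "human": human_cev,
    "pebblesorter": pebblesorter_values,
    "paperclipper": paperclipper_values,
}

outcome = {"love": 1, "piles": [3, 4, 7], "paperclips": 0}

# Good[2]: each agent calls its own value function "good".
for name, values in agents.items():
    print(f"{name} rates this outcome {values(outcome)} (its Good[2])")

# Good[1]: "good" rigidly designates the human function; other agents
# don't disvalue good, they simply optimize something else.
good = human_cev
print("Good[1] score:", good(outcome))
```

On this toy model the pebblesorter never disagrees with the human about Good[1]; it just optimizes a different function, which is the point of the “Pebblesorters do not care about good or evil” line.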
> so I don’t see where the confusion is.
Do you see what I mean by “semantic” confusion now? Eliezer (like most moral realists, universalists, etc.) is using Good[1]. Those confused by his writing (who are accustomed to descriptive moral relativism, nihilism, etc.) are using Good[2]. The two maps are nearly identical in meaning, but because they are written in different languages, it’s difficult to see that.
I’m suggesting that Good[1] and Good[2] are sufficiently different that people who frequently discuss morality ought to have different words for them. This is one of those “if a tree falls in the forest, does it make a sound?” debates, which are utterly useless because they center entirely on the definition of “sound”.
> Eliezer views ethics the same way just about everyone intuitively views aesthetics—as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation—facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.
Yup, I agree completely; that’s exactly the right way to think about it. The fact that you can define ethics while tabooing words like “good”, “bad”, and “moral” is the reason you can simultaneously uphold Good[2] with your gustatory analogy and still understand that Eliezer doesn’t disagree with you, even though he uses Good[1].
Most people’s thinking is too attached to words to do that, so they get confused. Being able to think about what things are without referencing any semantic labels is a skill.