One project is the descriptive one of moral psychology and moral anthropology. Because Coherent Extrapolated Volition begins with data from moral psychology and moral anthropology, that descriptive project is important for Eliezer’s design of Friendly AI. Certainly, I agree with Eliezer that human values are too complex to easily formalize, because our terminal values are the product of millions of years of messy biological and cultural evolution.
“Morality” is a term usually used in speech acts to refer to a set of normative questions about what we ought to do, or what we ought to value. Even if you’re an ethical reductionist, as I am, and reduce ‘ought’ so that it is a particular species of ‘is’, there are many ways to do that, and I’m not clear on how Eliezer does it.
Moral psychology and anthropology are pretty useless, because morality is too complex for humans to manually capture with accuracy, and too fragile to allow capturing without accuracy. We need better tools.
Your first claim doesn’t follow from the (correct) supporting evidence.
In actually implementing a CEV or other such object, it’s true that one daren’t program in specific object-level moral truths derived from human study. The implementation should be much more meta-level, so as not to get locked into bad assumptions.
However, you and I can think of classes of possible implementation failure that might be missed if we had too naive a theory of moral psychology. Maybe a researcher who didn’t know about the conscious/unconscious divide at all would arrive at the same implementation algorithm as one who did, but it’s not out of the question that our limited knowledge could be relevant.
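To make the object-level versus meta-level contrast above concrete, here is a deliberately toy sketch in Python. It is nothing like a real CEV implementation; every name in it (HARDCODED_RULES, PersonModel, endorsement, extrapolate) is hypothetical and invented only for illustration. The point is just that the first function bakes the designer’s own moral guesses into the system, while the second bakes in a procedure that defers to (extrapolated) models of the people whose volition is at stake.

```python
# Toy illustration only -- not a proposal for how CEV would actually be built.
from dataclasses import dataclass
from typing import Callable, Dict, List

# Object-level approach: the designer writes the moral content in directly.
# Any error or omission in the hand-written table is locked in.
HARDCODED_RULES: Dict[str, float] = {
    "keep_promise": 1.0,
    "break_promise": -1.0,
    # ...every morally relevant action must be anticipated and scored by hand
}

def object_level_value(action: str) -> float:
    # Silently assigns 0.0 to anything the designer failed to anticipate.
    return HARDCODED_RULES.get(action, 0.0)

# Meta-level approach: the designer specifies a procedure over models of
# people, not the moral answers themselves.
@dataclass
class PersonModel:
    preferences: Dict[str, float]  # how strongly this modeled person endorses each action

    def endorsement(self, action: str) -> float:
        return self.preferences.get(action, 0.0)

def meta_level_value(
    action: str,
    people: List[PersonModel],
    extrapolate: Callable[[PersonModel], PersonModel],  # e.g. "as if they knew more"
) -> float:
    # Average what the idealized models would endorse, rather than what the
    # designer personally guessed in advance.
    idealized = [extrapolate(p) for p in people]
    return sum(p.endorsement(action) for p in idealized) / len(idealized)

# Usage with a placeholder identity "extrapolation":
people = [PersonModel({"keep_promise": 0.9}), PersonModel({"keep_promise": 0.3})]
print(meta_level_value("keep_promise", people, extrapolate=lambda p: p))
```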