Eliezer said: “But there is always a framework, every time you are moved to change your morals—the question is whether it will be invisible to you or not. That framework is always implemented in some particular brain, so that the same argument would fail to compel a differently constructed brain—though this does not imply that the framework makes any mention of brains at all.”
And the above statement, Eliezer’s meta-framework for ethical reasoning, guarantees that he will remain a relativist. The implicit assumption is that the acid test of a particular ethical theory is whether it will persuade all possible minds (presumably he is talking about Turing machines here). Since there exists no ethical argument that will persuade all possible minds, there is no “objectively best” ethical theory.
In fact, if you boil down Eliezer’s argument against moral realism to its essence, you get the following (using standard definitions for words like “right” and “objective”):
Defn: Theory X is objectively morally right if and only if for all Turing machines Z, Z(X) = “yes I agree”
Fact: There exists a Turing machine which implements the constant function “I disagree”
Therefore: No ethical theory is objectively morally right
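To make the structure of that argument concrete, here is a minimal sketch in Python (the function names are hypothetical, purely for illustration, not anyone’s actual proposal). It shows why the conclusion follows trivially once the definition quantifies over all possible minds: a single constant “disagree” machine defeats every theory, whatever the theory’s content.

def constant_disagree(argument: str) -> str:
    # A trivially constructible "mind" that ignores its input entirely.
    return "I disagree"

def objectively_right(theory: str, minds) -> bool:
    # Per the definition above: right iff every mind assents to the theory.
    return all(mind(theory) == "yes I agree" for mind in minds)

# Whatever the theory says, including constant_disagree among the minds
# makes the universal quantifier fail.
print(objectively_right("maximize flourishing", [constant_disagree]))  # prints False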
Now I reject the above definition: I think there are other useful criteria, rooted in reality itself, which pick out certain axiologies as special. Perhaps I should be more careful about what I call such frameworks: from the previous comment threads on Overcoming Bias, I have discovered that it is very easy to start abusing the ethical vocabulary, so I should call objective axiologies (such as UIVs) “objectively canonical” rather than “objectively right”.
I should add that I don’t regard the very limited amount of work I have done on UIVs and objective axiologies as a finished product, so I am somewhat surprised to find it being critiqued. All constructive criticism is appreciated, though.