they will no longer trust you about anything except when they have a special reason to do so.
IIUC, you’re saying they would think that because I understand the evolutionary reasons for my instinct not to murder people, and I understand (and accept) the game-theoretical and expected-utility reasons for not murdering people, I am more likely to consciously override these reasons if I find a particular case where they don’t apply. A deontologist, by contrast, commits to not murdering even if it would create a net benefit or save the whole world (e.g. would you have murdered Hitler if that were the only way of stopping WW2?).
That seems like it should generalize into an argument that utilitarians and/or rationalists will not be trusted by ‘ordinary’ people, and perhaps not even by other rationalists; it may be related to the reasons why our kind can’t cooperate. I haven’t observed anything like that in practice, though; have you?
I don’t think most utilitarians and rationalists accept error theory, or at least most of them say that they don’t, and consequently there won’t be the same reason for distrusting them. For example, Eliezer calls himself a utilitarian but he still believes that “murder is wrong” is an objectively true statement about the relationship between murder and the abstract pattern which we call “right”. And he agrees that it means neither “we don’t like murder” nor “game theory doesn’t recommend murder.”
It may well be true that some people do accept error theory but don’t admit it; in that way they can advance their goals by getting others to trust them. I would guess that you behave that way in ordinary life as well (in your previous comment you said that you can talk and act as if it is objectively wrong to murder).
For example, Eliezer calls himself a utilitarian but he still believes that “murder is wrong” is an objectively true statement about the relationship between murder and the abstract pattern which we call “right”.
(Emphasis mine.) That word we plays a crucial role. It’s the same as my saying “wrong according to us”. You might believe the sentiment “murder is wrong” is shared by all of humanity (although I would disagree, empirically), but that’s not the same as saying it’s “objective” in the same sense as logic or physics. Eliezer would agree that wrong!Human is not the same as wrong!Babyeater or wrong!Superhappies. I merely go one step further and point out that humans (across time and space and different cultures) don’t really agree on morality nearly as much as we like to pretend.
in your previous comment you said that you can talk and act as if it is objectively wrong to murder.
It’s not as if I’m pretending to anything I don’t believe. It really is wrong for me, according to me, to murder; this is objectively true (for me!) and I behave accordingly. If anything, saying there are no universal laws that everyone actually follows should imply that I should trust others less, not that others should trust me less.
Put another way, my behavior is the same as that of an objective moralist who also happens to believe most people other than him follow partial or corrupted versions of the objectively true morality, or don’t follow it at all. He and I will behave identically and make identical predictions; I merely remove the extra logical concept of ‘objective morality’ which is empirically undetectable and does no useful predictive work, just like a God who causes no miracles and is impossible to detect.
I’m not sure if “error theory” is the correct term (it may be); I used to describe my position as “moral anti-realist”, but let’s not get hung up on words.
If I say “2 and 2 make 4,” that can’t be true apart from the meanings of those words, but that doesn’t make it subjective.
Eliezer may be right or he may be wrong, but it is not obvious (even if it turns out to be true) that he is talking about something different from ordinary people. He thinks that he is simply developing what ordinary people mean, and maybe he is. But what you are saying clearly contrasts with what other people mean.
I do think what Eliezer is developing is different from what ordinary people mean. Ordinary people are, for the most part, moral objectivists in the strong sense—they think objectively true morals exist “out there” independently of humankind. This is usually tied into their religious or spiritual beliefs (which most ‘ordinary’ people have).
Eliezer spends a lot of time in the sequences saying things like “there is not a grain of mercy or justice in the universe, it is cold and uncaring, morals are found in us, humans”. This is exactly what most ‘ordinary’ people don’t accept.
Unfortunately, the issue is confused because Eliezer insists on using non-standard terminology. The whole ethics sequence can be seen as shoehorning the phrase “morals are objective” into actually meaning “human!morals are objective”. He claims this is how we should unpack these words, but I don’t believe ‘ordinary’ people would agree if asked. I also don’t think the universal subset of human!morals is nontrivially large or useful.
Eliezer says that what is signified by moral claims is something that would be true even if human beings did not exist, since he says it is basically like a mathematical statement. It is true that no one would make the statement in that situation, but no one would say that “2 and 2 make 4” in that situation either, and it would still be true.
He doesn’t think that true morals exist “out there” any more than he thinks that mathematics exists “out there”. That is probably pretty similar to what most people think.
Also, people I know who believe in angels do not think that angels have the same morality as human beings, and those are pretty ordinary people. So that lines up quite closely with what Eliezer thinks as well.
Most people do not think “murder is wrong, period”; they allow a few exceptions to that rule.
This is probably true as you meant it, but most people don’t call it murder in those circumstances.