It seems to me that there’s no difference in kind between moral intuitions and religious beliefs, except that the former are more deeply held. (I guess that makes me a kind of error theorist.)
If that’s true, then FAI designers shouldn’t work on approaches like “extrapolation” that can convert a religious person into an atheist, because the same procedure might convert you into a moral nihilist. The task of FAI designers is more subtle: devise an algorithm that, when applied to religious belief, would encode it “faithfully” as a utility function, despite the absence of God.
Does that sound right? I’ve never seen it spelled out as strongly, but logically it seems inevitable.
It seems to me that there’s no difference in kind between moral intuitions and religious beliefs,
That just doesn’t seem true to me. I agree that there’s often a difference between religious beliefs and ordinary factual beliefs, but I don’t think that religious beliefs are the same sort of thing as moral intuitions. They just feel different to me.
For one thing, religious beliefs are often a matter of “belief in belief”, whereas I don’t think moral beliefs are like that.
Also, moral beliefs seem more instinctual, whereas religious beliefs are taught.
For one thing, religious beliefs are often a matter of “belief in belief”, whereas I don’t think moral beliefs are like that.
I think moral beliefs are very often like that, at least for some people. See the comment here and JM’s response.
Stephen Diamond makes a related argument: people will not give up their moral beliefs because, according to those very same beliefs, doing so would be obviously wicked, just as a religious person will not give up their religious beliefs because those beliefs say it would be wicked to do so.
Every emotion connected with moral intuitions, e.g. recoiling from a bad act, can also be produced by religious beliefs.