society is failing to deal with the error-proneness of its own moral reasoning
I am still not sure that there is such a thing as an “error” in moral reasoning. Suppose I decide that the rule “equality before the law” should be replaced with, say, “equality before the law, except for blacks”. In what sense have I made an error?
It seems to me that there is something fishy going on with both Eliezer’s and Robin’s uses of moral words in the above debate. Both speak of moral errors, of better approximations to the correct moral rules, etc., whilst I presume both would deny that, in a case of moral disagreement about who has made the “error” and who has got it right, there is any objective truth of the matter.
We can probably get further if we talk about the moral truth according to one particular individual; in this case it is more plausible to argue that when Tim Tyler, aged 14, decided that the one objective moral truth was “don’t hurt other people”, he was simply wrong, even with respect to his own moral views.
If we talk about inferring the moral truth behind noisy moral intuitions, then if people’s intuitions or models of those intuitions differ, the errors in their intuitions or models differ. One person can be more mistaken than another. If you reject moral realism you can recast this conversation in terms of commonly shared “moral” components of what we want.
This seems reasonable.
I don’t understand. How can you never be wrong about what is right, yet still be wrong about what is a shared component of what is right?
Well, I interpreted Robin to mean “we’re going to use this algorithm to aggregate preferences”. You would have to drop the language of “errors” though.
Okay. In a form, this view can even be equivalent, if you stick to the same data: a kind of nonparametric view that only recognizes observations. You see this discussion as about summarization of people’s behavior (e.g. to implement a policy to which most people would agree), while I see it as about inference of people’s hidden wishes behind visible behavior or stated wishes, and maybe as summarization of people’s hidden wishes (e.g. to implement a policy that most people would appreciate as it unfolds, but which they won’t necessarily agree on at the time).
Note that e.g. signaling can seriously distort the picture of wants seen in behavior.
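To make the two readings concrete, here is a minimal toy sketch; it is my own framing rather than a proposal anyone above has made, and the numbers and the additive signaling-plus-noise model are invented purely for illustration.

```python
import statistics

# A toy sketch, not anyone's actual proposal. Each number is one person's
# stated support for some policy on a -1..1 scale, and we *assume*
#   stated = hidden_wish + signaling_bias + noise.
stated = [0.9, 0.8, 0.7, -0.2, 0.6]   # hypothetical data
signaling_bias = 0.3                  # assumed distortion from signaling

# Summarization of behavior: implement whatever people say they agree to.
would_agree_now = sum(s > 0 for s in stated)

# Inference of hidden wishes: correct for the assumed signaling distortion and
# ask what people would appreciate as the policy unfolds.
estimated_wishes = [s - signaling_bias for s in stated]
would_appreciate_later = sum(w > 0 for w in estimated_wishes)

print("agree when asked:", would_agree_now, "of", len(stated))
print("estimated to appreciate in hindsight:", would_appreciate_later, "of", len(stated))
print("mean stated vs. mean inferred:",
      round(statistics.mean(stated), 2), round(statistics.mean(estimated_wishes), 2))
```

On the summarization reading, the stated numbers are the whole story; on the inference reading, they are only noisy evidence about the hidden wishes.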
while I see it as about inference of people’s hidden wishes behind visible behavior or stated wishes, and maybe as summarization of people’s hidden wishes (e.g. to implement a policy that most people would appreciate as it unfolds, but which they won’t necessarily agree on at the time).
I would agree that this is sometimes sensible. However, just because a policy pleases people as it unfolds, we should not infer that that policy constituted the people’s unique hidden preference.
Events and situations can influence preferences—change what we think of as our values.
Furthermore, it isn’t clear where the line lies between exercising your free will to suppress certain desires and being deluded about your true preferences.
Basically, this thing is a big mess, philosophically and computationally.
The best summation of the topic I’ve yet come across.
Yes, you’re very intelligent. Please expand.
whilst I presume both [Eliezer and Robin] would deny that, in a case of moral disagreement about who has made the “error” and who has got it right, there is any objective truth of the matter.
I expect otherwise. There is a difference between who has got their own preference right, and whose preference is right. The former is meaningful, the latter is not. Two people may prefer different solutions and both be right, or they may give the same solution and only one of them be wrong, and they can both agree on who is right or wrong in each of these cases. There is no objective truth about what is “objectively preferable”, but there is objective truth about what is preferable for a given person, and that person may have an incorrect belief about what that is.
“Preferable for a person” is here a two-place word, while “wrong” is a one-place word about what is preferable for that person.
(At least, approximately so, since you’d still need to interpret the two-place function of what is preferable for a given person yourself, adding your own preferences into the mix; but where different people are concerned, that influence is much smaller than the differences contributed by the person whose morality is being considered. Still, technically, it warrants the opposition to the idea of my-morality vs. your-morality.)
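As a rough illustration of the two-place/one-place point only: the people, options, and preference table below are invented, and nothing above depends on them.

```python
# Illustrative sketch of the two-place/one-place distinction; the people,
# options, and preference table are invented for the example.
true_preferences = {          # what is in fact preferable *for* each person
    ("alice", "tea"): True,
    ("alice", "coffee"): False,
    ("bob", "tea"): False,
    ("bob", "coffee"): True,
}
beliefs = {                   # what each person *believes* they prefer
    "alice": "tea",
    "bob": "tea",             # Bob has an incorrect belief about his own preference
}

def preferable(option: str, person: str) -> bool:
    """Two-place: 'preferable' only makes sense relative to a person."""
    return true_preferences[(person, option)]

def mistaken(person: str) -> bool:
    """One-place: is this person wrong about what is preferable *for them*?
    Obtained by fixing the person argument of the two-place relation."""
    return not preferable(beliefs[person], person)

print(mistaken("alice"))  # False: Alice has her own preference right
print(mistaken("bob"))    # True: Bob is wrong about what he prefers
```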
And then there is the shared moral truth, on which most people’s preferences agree, but which is not necessarily what most people will agree on if you ask them. This is the way in which moral truth is seen through noisy observations.
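A last toy sketch of seeing a shared value through noisy observations, under an assumed additive model that nobody above has committed to; the specific numbers and the median estimator are just for illustration.

```python
import random
import statistics

random.seed(0)

# Assumed generative story (an illustration, not a claim about real intuitions):
# each reported intuition = shared_value + personal_offset + noise.
shared_value = 0.6
people = [shared_value + random.gauss(0.0, 0.4) for _ in range(25)]   # personal offsets folded in
reports = [p + random.gauss(0.0, 0.3) for p in people]                # noisy reports per person

# "What most people agree on if you ask them": the fraction of noisy reports
# above an endorsement threshold.
endorse_when_asked = sum(r > 0.5 for r in reports) / len(reports)

# "Shared moral truth" in this toy model: the common component behind the
# reports, here estimated with a robust summary (the median) of the observations.
estimated_shared = statistics.median(reports)

print(f"fraction endorsing when asked: {endorse_when_asked:.2f}")
print(f"estimated shared component:    {estimated_shared:.2f} (true value in this toy model: {shared_value})")
```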