The existence of moral disagreement is not an argument against CEV unless all disagreeing parties know everything there is to know about their desires and are perfect Bayesians. Otherwise, people can be mistaken about what they really want, or about what the facts prescribe (given their values).
‘Objective ethics’? ‘Merely points… at where you wish you were’? “Merely”!?
Take your most innate desires. Not ‘I like chocolate’ or ‘I ought to condemn murder’, but the most basic levels (go to a neuroscientist to figure those out). Then take the facts of the world. If you had a sufficiently powerful computer, and you could input the values and plug in the facts, the output would be what you would most want to do.
That doesn’t mean acting on whichever urge is strongest; the calculation takes into account the desires that make up your conscience, and the bit of you saying ‘but that’s not what’s right’. If you could perform this calculation in your head, you’d get the feeling of ‘Yes, that’s what is right. What else could it possibly be? What else could possibly matter?’ This isn’t ‘merely’ where you wish you were. This is the ‘right’ place to be.
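To make that concrete, here is a toy sketch in Python of the kind of calculation I mean. Every desire, weight, and fact in it is invented purely for illustration; this is nowhere near an actual extrapolation algorithm:

```python
# Toy sketch of the 'plug in values and facts' idea described above.
# All desires, weights, and outcomes are made-up illustrative stand-ins.

# Innate desires, each scoring an outcome from the agent's own standpoint.
base_desires = {
    "comfort":    lambda outcome: outcome["comfort"],
    "conscience": lambda outcome: -outcome["harm_to_others"],
}
# Conscience counts too -- the answer isn't just whichever urge is strongest.
weights = {"comfort": 1.0, "conscience": 3.0}

# Candidate actions and the facts about what each one leads to.
facts = {
    "take_the_money": {"comfort": 5, "harm_to_others": 4},
    "leave_it":       {"comfort": 1, "harm_to_others": 0},
}

def extrapolated_choice(desires, weights, facts):
    """Return the action that best satisfies the weighted base desires,
    given full knowledge of the facts."""
    def score(action):
        outcome = facts[action]
        return sum(weights[name] * desire(outcome)
                   for name, desire in desires.items())
    return max(facts, key=score)

print(extrapolated_choice(base_desires, weights, facts))  # -> 'leave_it'
```

Note that the raw urge for comfort favours taking the money; once the conscience term is weighed in, the calculation comes out the other way, which is the point of the paragraph above.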
This reply is more about the meta-ethics, but for interpersonal ethics, please see my response to peter_hurford’s comment above.
Otherwise, people can be mistaken about what they really want, or about what the facts prescribe (given their values).
The fact that people can be mistaken about what they really want is vanishingly small evidence that if they were not mistaken, they would find out they all want the same things.
A very common desire is to be more prosperous than one’s peers. It’s not clear to me that there is some “real” goal this serves (for an individual); it could literally be a primary goal. If so, we already have a problem: two people in a peer group cannot both get all they want if each wants to have more than the other. I can’t think of any satisfactory solution to this. One might say, “well, if they’d grown up farther together this would be solvable”, but I don’t see any reason that should be true. People don’t necessarily grow more altruistic as they “grow up”, so there might well be no CEV to arrive at. In fact, I think a weaker version of the UFAI problem exists here: humans are more similar to each other than UFAIs need be to each other, but they still seem fundamentally different in their goal systems and ethical views, in many respects.
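A minimal sketch of the impossibility, with arbitrary made-up wealth levels: if both desires are purely positional, no allocation satisfies both people.

```python
# Check that two purely positional desires cannot both be satisfied.
# The wealth levels are arbitrary illustrative numbers.
from itertools import product

levels = range(5)  # possible wealth levels for each person

def a_satisfied(a, b):
    return a > b   # A wants strictly more than B

def b_satisfied(a, b):
    return b > a   # B wants strictly more than A

both = [(a, b) for a, b in product(levels, levels)
        if a_satisfied(a, b) and b_satisfied(a, b)]
print(both)  # -> [] : no allocation satisfies both, whatever the levels
```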