How can anyone sincerely want to build an AI that fulfills anything except their own current, personal volition?
That’s exactly my objection to CEV. No one acts on anything but their personal desires and values, by definition. Eliezer’s personal desire might be to implement CEV of humanity (whatever it turns out to be). I believe, however, that for well over 99% of humans this would not be the best possible outcome they might desire. At best it might be a reasonable compromise, but that would depend entirely on what the CEV actually ended up being.
Eliezer’s personal desire might be to implement CEV of humanity (whatever it turns out to be). I believe, however, that for well over 99% of humans this would not be the best possible outcome they might desire.
I’m not clear on what you could mean by this. Do you mean that you think the process just doesn’t work as advertised, so that 99% of human beings end up definitely unhappy, and there exists some compromise state that they would all have preferred to CEV? Or that 99% of human beings all have different maxima so that their superposition is not the maximum of any one of them, but there is no single state that a supermajority prefers to CEV?
Or that 99% of human beings all have different maxima so that their superposition is not the maximum of any one of them, but there is no single state that a supermajority prefers to CEV?
Yes. I expect CEV, if it works as advertised, to lead to a state that almost all humans (as they are today, with no major cognitive changes) would see as an acceptable compromise, an improvement over things today, but far worse than their personal desires implemented at the expense of the rest of humankind.
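To make that concrete, here is a toy sketch with made-up numbers: a single "value axis" stands in for real human preferences, and the compromise is just the mean of everyone's ideal points, which is not a claim about how CEV would actually aggregate anything. The shape of the result is the point: every agent prefers the compromise to the status quo, yet no agent gets anywhere near their own optimum.

```python
# Toy illustration (not a model of CEV itself): each agent's values are an
# "ideal point" on one axis, and utility falls off with squared distance
# from it. The "compromise" is the mean of the ideal points, standing in
# for an aggregated outcome. All numbers are invented for the example.

import statistics

ideal_points = [0.0, 2.0, 3.5, 7.0, 9.0]    # each agent's personal optimum
status_quo = -5.0                           # the world as it is today
compromise = statistics.mean(ideal_points)  # the aggregated outcome

def utility(agent_ideal, outcome):
    """Higher is better; peaks when the outcome equals the agent's ideal."""
    return -(agent_ideal - outcome) ** 2

for ideal in ideal_points:
    print(
        f"agent@{ideal:4.1f}: "
        f"status quo {utility(ideal, status_quo):7.1f}, "
        f"compromise {utility(ideal, compromise):7.1f}, "
        f"personal optimum {utility(ideal, ideal):5.1f}"
    )
# Every agent ranks the compromise above the status quo, but for no agent
# is the compromise close to what they would pick for themselves.
```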
Therefore, while working on the CEV of humanity might be a good compromise and a form of cooperation, I expect any group working on it to prefer implementing that group’s own CEV instead.
You say that you (and all people on this project) really prefer to take the CEV of all humanity. Please explain to me why; I honestly don’t understand. How did you end up with a preference, rare among humans, that says “satisfy all humans even though their desires might be hateful to me”?
“but far worse than their personal desires implemented at the expense of the rest of humankind.”
Uh... I thought this was sort of the point. Also, given holodecks (or experience machines of any sort), I disagree.
EDIT: never mind, conversational context mismatch.
If that’s the point, then why does EY prefer it over implementing the CEV of himself and a small group of other people?
As for holodecks (and simulations), as long as people are aware they are in a simulation, I think many would care no less about the state of the external world. (At a minimum they must care somewhat, to make sure their simulation continues to run.)
Um, I think a miscommunication occurred. I am not commenting on what Eliezer wants or why. I am commenting on my understanding of CEV as a (timeless) utilitarian satisfaction of preferences.
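A minimal sketch of that reading, with made-up numbers and no claim about the actual CEV procedure: score each candidate outcome by total preference satisfaction summed over all agents (on the "timeless" reading, including agents who do not exist yet), and pick the outcome with the highest total.

```python
# Toy reading of "utilitarian satisfaction of preference" (illustrative only):
# choose the outcome with the highest total satisfaction summed over agents,
# including a hypothetical "future" agent to gesture at the timeless reading.

# satisfaction[agent][outcome] on a 0..1 scale; numbers are invented.
satisfaction = {
    "alice":  {"status_quo": 0.2, "compromise": 0.7, "alice_optimum": 1.0},
    "bob":    {"status_quo": 0.3, "compromise": 0.6, "alice_optimum": 0.1},
    "future": {"status_quo": 0.1, "compromise": 0.8, "alice_optimum": 0.2},
}

outcomes = ["status_quo", "compromise", "alice_optimum"]
totals = {o: sum(prefs[o] for prefs in satisfaction.values()) for o in outcomes}
print(max(totals, key=totals.get))  # -> "compromise"
```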