If you do allow compromise, so that the FAI would modify preferences that contradict each other, then we might be on our way to wire-heading.
First, I'll just note that this is full-blown speculation about Friendliness content, which should only be done while wearing a gas mask or a clown suit, or after donating to SIAI.
Quoting CEV:
“In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.”
Also:
“Do we want our coherent extrapolated volition to satisfice, or maximize? My guess is that we want our coherent extrapolated volition to satisfice—to apply emergency first aid to human civilization, but not do humanity’s work on our behalf, or decide our futures for us. If so, rather than trying to guess the optimal decision of a specific individual, the CEV would pick a solution that satisficed the spread of possibilities for the extrapolated statistical aggregate of humankind.”
This should address your question: CEV would not typically modify humans to resolve contradictions. But I repeat, this is all speculation.
It’s not clear to me from your recent posts whether you’ve read the metaethics sequence and/or CEV; if you haven’t, I recommend them whole-heartedly, as they are the most detailed discussion of morality available. Regarding your obsession, I’m aware of it, and I think I’m able to understand the history and vantage point that enable such distress to arise, although my current self finds the topic utterly trivial and essentially a non-problem.