Nobody wants to be oppressed, nobody wants to die, nobody wants to be hurt or sick, everybody wants more good friends, everybody wants more love, everybody wants to be more autonomous, etc. etc.
Some people want to oppress, some people want to kill, some people want to hurt others, everybody wants to take status from others for themselves, everyone wants others to be hated, everyone wants others to be subservient to them, etc. etc.
(Note: I reflected nyan’s assertions to make the point that there’s conflict in values; I am not supporting any of the assertions.)
Conflicting terminal values are very much possible. I don’t think they exist to a relevant degree among humans.
Conflicting learned values do exist (just look at radical Islam, for example). I don’t think those differences would hold up under reflective value extrapolation.
Selfishness exists and would hold up under value extrapolation. However, that simple value difference is mostly symmetrical, and does not warrant cutting up humanity into groups of people with differing aggregate values.
This isn’t negotiating with babyeaters, it’s plain old economics of cooperation among humans.
Conflicting terminal values are very much possible. I don’t think they exist to a relevant degree among humans.
What exactly do you think is happening in disputes about legal interpretation or legal change?
I’m not saying that every such dispute is caused by value differences, but a substantial number are—and there is a strong social taboo against articulating disputes in the language of value conflict. The socially preferred method of argument is to assert that “common values” support the result that one prefers.
Conflicting terminal values are very much possible. I don’t think they exist to a relevant degree among humans.
Why?
Conflicting learned values do exist (just look at radical islam for example). I don’t think those differences would hold up under reflective value extrapolation.
Why?
However, that simple value difference is mostly symmetrical, and does not warrant cutting up humanity into groups of people with differing aggregate values.
Assuming a community of people is operating with reflectively extrapolated values and yet its members are still selfish, why then is bargaining not optimal for resolving differences in values? (They have values that apply to themselves and values that apply to others, and selfishness presumably would make them weight the former more heavily than the latter.)
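To make “bargaining” concrete: one standard formalization is the Nash bargaining solution, which picks the feasible outcome maximizing the product of each party’s gain over its disagreement payoff. Here is a minimal sketch in Python; the utility functions and the zero disagreement point are made up purely for illustration, not taken from anyone’s actual values.

```python
def nash_bargain(u0, u1, d0, d1, grid=10_001):
    """Grid-search the split x in [0, 1] maximizing the Nash product."""
    best_x, best = 0.0, float("-inf")
    for k in range(grid):
        x = k / (grid - 1)
        g0, g1 = u0(x) - d0, u1(x) - d1   # gains over the disagreement point
        if g0 >= 0 and g1 >= 0 and g0 * g1 > best:
            best, best_x = g0 * g1, x
    return best_x

# Hypothetical toy conflict: x is agent 0's share of a contested resource.
# Agent 0 values its share linearly; agent 1 has diminishing returns.
u0 = lambda x: x
u1 = lambda x: (1 - x) ** 0.5

x_star = nash_bargain(u0, u1, d0=0.0, d1=0.0)
print(f"x* = {x_star:.3f}, payoffs = ({u0(x_star):.3f}, {u1(x_star):.3f})")
# Analytic optimum of x * sqrt(1 - x) is x* = 2/3.
```

The point of the toy: even purely selfish agents with conflicting values have a well-defined, mutually improving deal available whenever cooperation beats the disagreement point, which is what makes “why not just bargain?” a fair question.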
Extortionate strategies in the Prisoner’s Dilemma don’t create value as well as nice strategies do, nor do they do as well against one another as nice strategies do; but they beat nice strategies individually.
Some sorts of oppression seem to follow the pattern of extortionate strategies — “I will take advantage of you, and will make it so that you are better off if you let me do so, than if you fight back.”
(Real-world examples are probably unhelpful here; I expect that everyone can think of one or two.)
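That claim is easy to check numerically. Below is a minimal sketch in Python, assuming the standard payoffs (T, R, P, S) = (5, 3, 1, 0) and Press and Dyson’s published extortionate memory-one strategy for chi = 3, which enforces that the extortioner’s gain over P is triple the opponent’s; the strategy names and the match() helper are mine.

```python
import random

# Memory-one strategies: probability of cooperating, conditioned on the
# previous round's outcome from the player's own point of view, in the
# order (CC, CD, DC, DD); the first letter is the player's own last move.
ALLC   = (1.0, 1.0, 1.0, 1.0)      # unconditional cooperator ("nice")
TFT    = (1.0, 0.0, 1.0, 0.0)      # tit-for-tat ("nice" but retaliatory)
EXTORT = (11/13, 1/2, 7/26, 0.0)   # Press-Dyson extortioner, chi = 3

# Standard Prisoner's Dilemma payoffs (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
IDX = {('C', 'C'): 0, ('C', 'D'): 1, ('D', 'C'): 2, ('D', 'D'): 3}

def match(p, q, rounds=200_000, seed=0):
    """Average per-round payoffs for memory-one strategies p and q."""
    rng = random.Random(seed)
    x, y = 'C', 'C'                # both open with cooperation
    sx = sy = 0
    for _ in range(rounds):
        ax, ay = PAYOFF[(x, y)]
        sx, sy = sx + ax, sy + ay
        x, y = ('C' if rng.random() < p[IDX[(x, y)]] else 'D',
                'C' if rng.random() < q[IDX[(y, x)]] else 'D')
    return sx / rounds, sy / rounds

for label, pair in [("extortioner vs ALLC", (EXTORT, ALLC)),
                    ("ALLC vs ALLC       ", (ALLC, ALLC)),
                    ("extortioner vs self", (EXTORT, EXTORT)),
                    ("extortioner vs TFT ", (EXTORT, TFT))]:
    print(label, match(*pair))
```

Against an unconditional cooperator the extortioner averages about 3.7 per round to the victim’s 1.9; two cooperators get 3.0 each; two extortioners, or an extortioner against tit-for-tat, grind down to about 1.0 each. That is exactly the pattern above: extortion wins the individual matchup but destroys value.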
What if the problem is “I want to oppress you, but I know individually being nicer would get me more of what I want, so instead I’m going to recruit allies that will help me oppress you because I think that will get me even more of what I want.”
You think conflicting terminal values don’t exist to a relevant degree because of a blog post making the point that we’re mostly identical in obvious ways? (ASPD and autism would seem to count against the claim that conflicting terminal values don’t exist to a relevant degree, though I’m not sure what you mean by “relevant”. Then there’s enculturation, learning, and environment, all of which affect the brain. Human universals are really cultural universals; to see what is universal to humans as such, look to feral children. This makes your claim about learned values suspect.)
So, assuming you’re right, I take your conclusion to be that it’s more productive to work towards uncovering what our reflectively extrapolated values would be than it is to bargain; but that’s non-obvious, given how political even LWers are. But OTOH I don’t think we have anything to explicitly bargain with.
Conflicting terminal values are very much possible. I don’t think they exist to a relevant degree among humans.
Conflicting learned values do exist (just look at radical Islam, for example). I don’t think those differences would hold up under reflective value extrapolation.
If we’re talking about humans, I’m not sure that the distinction between terminal and learned values is very meaningful.
Thinking this over, I was leaning towards that.
Why?
roughly this
That argument doesn’t address the problem of “I want to oppress you”, “you want to oppress me”.
Nobody wants to be oppressed, nobody wants to die, nobody wants to be hurt or sick, everybody wants more good friends, everybody wants more love, everybody wants to be more autonomous, etc. etc.
Speak for yourself.
I don’t think everyone wants to be more autonomous, either (subs in BDSM communities, for example).
That’s what happens when I comment at 4 a.m.
Better go to bed now.