I think people overrate the prevalence of fundamental value differences as a way of protecting a bad map of human values.
I don’t think we are anywhere near the point where fundamental value differences between men and women (or any other groups) are relevant.
Nobody wants to be oppressed, nobody wants to die, nobody wants to be hurt or sick, everybody wants more good friends, everybody wants more love, everybody wants to be more autonomous, etc., etc.
To me, the point where fundamental value differences matter is when you’ve solved all of that and the biggest thing left to argue about is whose aesthetic sense the architecture should be optimized for.
Suppose Alice doesn’t want Alice to die, Bob doesn’t want Bob to die, and these are the only people and values in the world. Do you think these are not “different” values? (Note that I explicitly mentioned selfish values in the OP as an example of what I meant by “different values”.) More importantly, wouldn’t such values lead to the necessity of bargaining over how to solve problems that affect both of them?
This kind of situation is usually called a “conflict of interest”. I think using “value differences” for it is confusing terminology; to me, at least, it suggests some more fundamental difference, such as sacredness vs. avoiding harm.
Ah, that makes sense. (I was wondering why nyan_sandwich’s comment was being upvoted so much when I already mentioned selfish values in the OP.) To be clear, I’m using “value differences” to mean both selfish-but-symmetric values and “more fundamental difference such as sacredness vs avoiding harm”. (ETA: It makes sense to me because I tend to think of values in terms of utility functions that take world states as inputs.) I guess we could argue about which kind of difference is more important but that doesn’t seem relevant to the point I wanted to make.
It seems like a relevant distinction in the FAI/CEV theory context, and indirectly relevant in the gender conflicts question. That is, it isn’t first-order relevant in the latter case, but seems likely to become so in a thread that is attempting to go meta. Like, say, this one.
good point on selfishness.
What I was getting at is that humans have mostly symmetric values such that they should not disagree over what type of society they want to live in, if they don’t get to choose the good end of the stick.
Even if people have symmetric values, the relevant facts are not symmetric. For example, everyone values things that money can buy, but some people have a much greater ability to earn money in a free-market economy, so there will be conflict over how much market competition to allow or what kind of redistributive policies to have.
I’m not sure what you mean by this. Are you saying something like, “if they were under a Rawlsian veil of ignorance”? But we are in fact not under a Rawlsian veil of ignorance, and any conclusions we make of the form “If I were under a Rawlsian veil of ignorance, I would prefer society to be organized thus: …” are likely to be biased by the knowledge of our actual circumstances.
This seems wrong, except for extremely weak definitions of “mostly”. People should definitely disagree about what type of society they want to live in, just a whole lot less than if they were disagreeing with something non-human.
Some people want to oppress, some people want to kill, some people want to hurt others, everybody wants to take status from others for themselves, everyone wants others to be hated, everyone wants others to be subservient to them, etc., etc.
(Note: I reflected nyan’s assertions to make the point that there’s conflict in values; I am not supporting any of the assertions.)
Conflicting terminal values are very much possible. I don’t think they exist to a relevant degree among humans.
Conflicting learned values do exist (just look at radical Islam, for example). I don’t think those differences would hold up under reflective value extrapolation.
Selfishness exists and would hold up under value extrapolation. However, that simple value difference is mostly symmetrical, and does not warrant cutting up humanity into groups of people with differing aggregate values.
This isn’t negotiating with babyeaters, it’s plain old economics of cooperation among humans.
If we’re talking about humans, I’m not sure that the distinction between terminal and learned values is very meaningful.
Thinking this over, I was leaning towards that.
What exactly do you think is happening in disputes about legal interpretation or legal change?
I’m not saying that every such dispute is caused by value differences, but a substantial number are—and there is a strong social taboo against articulating disputes in the language of value conflict. The socially preferred method of argument is to assert that “common values” support the result that one prefers.
Why?
Why?
Assuming a community of people operating with reflectively extrapolated values who are nonetheless still selfish, why is bargaining not optimal for resolving their value differences? (They have values that apply to themselves and values that apply to others, and selfishness presumably makes them weight the former more heavily than the latter.)
roughly this
That argument doesn’t address the problem of “I want to oppress you”, “you want to oppress me”.
Extortionate strategies in the Prisoner’s Dilemma don’t create value as well as nice strategies do, nor do they do as well against one another as nice strategies do; but they beat nice strategies individually.
Some sorts of oppression seem to follow the pattern of extortionate strategies — “I will take advantage of you, and will make it so that you are better off if you let me do so, than if you fight back.”
(Real-world examples are probably unhelpful here; I expect that everyone can think of one or two.)
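The pattern described above can be illustrated with a toy simulation. This is only a sketch: always-defect here is a crude stand-in for a genuinely extortionate (zero-determinant) strategy in the Press and Dyson sense, and the strategy and payoff choices are mine, not anything from the thread. Still, it shows the same qualitative shape: the exploiter beats the nice strategy head-to-head, but pairs of nice strategies create far more total value than pairs of exploiters.

```python
# Toy iterated Prisoner's Dilemma with the standard payoffs
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_history, their_history):
    """A 'nice' strategy: cooperate first, then copy the opponent's last move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    """Crude stand-in for an extortionate strategy: exploit unconditionally."""
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Return the two players' total payoffs over an iterated game."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

nice_vs_nice = play(tit_for_tat, tit_for_tat)      # (300, 300): most total value
mean_vs_nice = play(always_defect, tit_for_tat)    # (104, 99): exploiter wins head-to-head
mean_vs_mean = play(always_defect, always_defect)  # (100, 100): little value created
```

Over 100 rounds, mutual niceness produces 600 total points, mutual exploitation only 200, yet the exploiter still comes out ahead in any single pairing with a nice player, which is exactly the structure of the extortion claim above.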
What if the problem is “I want to oppress you, but I know individually being nicer would get me more of what I want, so instead I’m going to recruit allies that will help me oppress you because I think that will get me even more of what I want.”
You think conflicting terminal values don’t exist to a relevant degree because of a blog post making the point that we’re mostly identical in obvious ways? ASPD and autism would seem to count against that (though I’m not sure what you mean by “relevant”), and then there’s enculturation, learning, and environment, all of which affect the brain. Human universals are really cultural universals; to see what’s universal to humans as such, look at feral children. This makes your claim about learned values suspicious.
So, assuming you’re right, I think your conclusion is that it’s more productive to work toward uncovering what the reflectively extrapolated values would be than it is to bargain; but that’s non-obvious, given how political even LWers are. On the other hand, I don’t think we have anything to explicitly bargain with.
Speak for yourself.
I don’t think everyone wants to be more autonomous, either (see submissives in BDSM communities, for example).
That’s what happens when I comment at 4 a.m.
Better go to bed, now.
At a high level of generality, people with different values will use the same words to articulate them. But at that level, the assertions are merely applause lights. The ambiguity serves to hide real disputes.
When the discussion gets down to object-level disputes, the different meanings very quickly devolve into different choices. For example, the discussion about “creepy” behavior was in part a discussion about which behaviors were, and were not, oppressive. And who gets to make that judgment.
Not sure what you are getting at.
Another way of looking at the “Don’t be creepy” discussion is that some folks were saying “XYZ behavior is oppressive,” while other groups were saying “No it isn’t.”
As you say, everyone thinks oppressive behavior should stop. My point was that one’s definition of oppressive relies on one’s terminal values.
In other words, you said:
“I don’t think we are anywhere near the point where fundamental value differences between [different groups] are relevant.”
I think that assertion is empirically false.
Either you have some evidence that I don’t, or we are using “fundamental” differently.
My “fundamental” may be a bad concept, but what’s your reason for thinking humans have irreconcilable value differences more significant than stuff like details of aesthetic taste?
In brief, the universality of the politics-is-the-mindkiller phenomenon. If some ideologies or political topics were more likely to reach agreement than others, that would be evidence that some terminal value differences are not “fundamental”.
And there aren’t any areas of universal agreement: it’s pretty easy to find a viable society that supported just about any terminal value one could suggest.
People seem universally drawn to status debates and politics. This is evidence against uniform fundamental values?
How does this work? Why would some topics reaching agreement more easily than others show that the remaining value differences are not “fundamental”? Can you expand?
On more thought, I retract the “people don’t ‘fundamentally’ disagree” thing. It seems awfully strong now, especially when a good chunk of who we are is memetic and not just genetic. Also, “fundamentally” is a confused concept among humans.
Still, I hold that people leap to “fundamental value differences” as an explanation far too easily. It seems too convenient (“it’s okay, a peaceful solution will never work and we have to kill them, because Fundamental Value Differences”) and comes to mind too easily for self-serving and confused reasons (reifying an unproductive argument as a Fundamental Value Difference is a nice, comfortable resolution that has no reason to be correct).