Suppose Alice doesn’t want Alice to die, Bob doesn’t want Bob to die, and these are the only people and values in the world. Do you think these are not “different” values? (Note that I explicitly mentioned selfish values in the OP as an example of what I meant by “different values”.) More importantly, wouldn’t such values lead to the necessity of bargaining over how to solve problems that affect both of them?
This kind of situation is usually called “conflict of interest”. I think using “value differences” is confusing terminology; to me, at least, it suggests some more fundamental difference, such as sacredness vs. avoiding harm.
Ah, that makes sense. (I was wondering why nyan_sandwich’s comment was being upvoted so much when I already mentioned selfish values in the OP.) To be clear, I’m using “value differences” to mean both selfish-but-symmetric values and a “more fundamental difference such as sacredness vs avoiding harm”. (ETA: It makes sense to me because I tend to think of values in terms of utility functions that take world states as inputs.) I guess we could argue about which kind of difference is more important, but that doesn’t seem relevant to the point I wanted to make.
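To make the utility-function framing concrete, here is a minimal sketch (the specific functions are only illustrative assumptions, not anything stated in the thread). Alice’s and Bob’s selfish values could be written as

$$U_{\text{Alice}}(w) = \mathbb{1}[\text{Alice is alive in world state } w], \qquad U_{\text{Bob}}(w) = \mathbb{1}[\text{Bob is alive in world state } w].$$

The two functions are symmetric in form, but as functions of the same world state $w$ they are distinct, which is why selfish-but-symmetric values count as “different values” on this framing, and why any choice that trades Alice’s survival against Bob’s has to be settled by bargaining.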
It seems like a relevant distinction in the FAI/CEV theory context, and indirectly relevant in the gender conflicts question. That is, it isn’t first-order relevant in the latter case, but seems likely to become so in a thread that is attempting to go meta. Like, say, this one.
Good point on selfishness. What I was getting at is that humans have mostly symmetric values such that they should not disagree over what type of society they want to live in, if they don’t get to choose the good end of the stick.
Even if people have symmetric values, the relevant facts are not symmetric. For example, everyone values things that money can buy, but some people have a much greater ability to earn money in a free-market economy, so there will be conflict over how much market competition to allow or what kinds of redistributive policies to have.
if they don’t get to choose the good end of the stick
I’m not sure what you mean by this. Are you saying something like, “if they were under a Rawlsian veil of ignorance”? But we are in fact not under a Rawlsian veil of ignorance, and any conclusions we draw of the form “If I were under a Rawlsian veil of ignorance, I would prefer society to be organized thus: …” are likely to be biased by the knowledge of our actual circumstances.
What I was getting at is that humans have mostly symmetric values such that they should not disagree over what type of society they want to live in, if they don’t get to choose the good end of the stick.
This seems wrong, except for extremely weak definitions of “mostly”. People should definitely disagree about what type of society they want to live in, just a whole lot less than if they were disagreeing with something non-human.