I know my rationality isn’t that fragile and I doubt yours is either.
What troubles me is this: your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I’m pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I’d be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different “positions”. There is only one truth.
We have had some big successes already. (For example, most people here know better than to be confused by talk of “free will”.) I don’t think the anti-PC issue can be resolved by the drawn-out positional war we’re waging, because it isn’t actually making anyone change their opinions. It’s just a barrage of rationalizations from all sides. We need more insight. We need a breakthrough, or maybe several, that would point out the obviously correct way to think about anti-PC issues.
I don’t think using this name is a good idea. It has strong political connotations. And while I’m sure many here aren’t aware of them or are willing to ignore them, I fear this may not be true for potential new readers and posters, or once the “camps” are firmly established.
I think it actually is a value difference, just like Blueberry said.
I do not want to participate in nastiness (loosely defined). It’s related to my inclination not to engage in malicious gossip. (Folks who know me personally consider it almost weird how uncomfortable I am with bashing people, singly or in groups.) It’s not my business to stop other people from doing it, but I just don’t want it as part of my life, because it’s corrosive and makes me unhappy.
To refine my own position a little: I’m happy to consider anti-PC issues as matters of fact, but I don’t like their connotations, because I don’t like speaking ill of people when I can help it. For example, in a conversation with a friend: he says, “Don’t you know blacks have a higher crime rate than whites?” I say, “Sure, that’s true. But what do you want from me? You want me to say how much I hate my black neighbors? What do you want me to say?”
I don’t think that’s something argument can dissuade me from; it’s my own preference.
This discussion prompted a connection in my mind that startled me a lot. Let’s put it in the open.
We’ve been discussing the moral status of identical copies. I gave a partial reductio some time ago, but wasn’t really satisfied. Now consider this: what about the welfare of your imperfect copies? Do UDT-like considerations make it provably rational to care more about creatures that share random features with you? Note that I say UDT-like considerations, not evolutionary considerations. Evolution doesn’t explain professional solidarity or feminism, because neither relies on heritable traits. Ganging up looks more like a Schelling coordination game, where you benefit from seeking allies based on some random quality as long as they also get the idea of allying with you based on the same quality. And it might work better if the quality is hard to change, like sex or race. Anyone willing to work out the math is welcome to do so...
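As a starting point, here is a toy version of that coordination game, under my own simplifying assumptions (pairwise meetings, a fixed surplus S for each member of an alliance that both sides offer, a small cost EPS for an offer that gets refused, and a population that already offers alliances only to same-tag partners). This is only a sketch of the best-response argument, not a definitive model.

```python
# Toy model of the "ally on a shared, arbitrary trait" coordination game.
# Assumptions (mine, not from the thread): pairwise meetings, an alliance
# pays S to each side only if BOTH offer it, and a refused offer wastes EPS.

import random

S, EPS = 2.0, 0.2        # surplus per successful alliance / cost of a refused offer
ROUNDS = 100_000
random.seed(0)

def convention(my_tag, partner_tag):
    """What the rest of the population does: offer alliance iff tags match."""
    return my_tag == partner_tag

# Candidate rules for a single focal agent deciding whether to offer.
candidates = {
    "offer only to same tag": lambda me, other: me == other,
    "offer to everyone":      lambda me, other: True,
    "offer to no one":        lambda me, other: False,
}

def average_payoff(rule):
    """Focal agent's average payoff against convention-following partners."""
    total = 0.0
    for _ in range(ROUNDS):
        me, other = random.randint(0, 1), random.randint(0, 1)  # random binary tags
        i_offer = rule(me, other)
        they_offer = convention(other, me)
        if i_offer and they_offer:
            total += S          # alliance forms, both benefit
        elif i_offer and not they_offer:
            total -= EPS        # wasted overture
        # offering nothing earns nothing
    return total / ROUNDS

for name, rule in candidates.items():
    print(f"{name:>24}: {average_payoff(rule):+.3f}")

# Analytically: same-tag-only earns about +1.0, offer-to-everyone about +0.9,
# offer-to-no-one 0.0. Once others coordinate on the tag, matching their rule
# is a best response, which is the Schelling-point story; a tag that is hard
# to fake or change just makes the coordination harder to invade.
```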
your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I’m pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I’d be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different “positions”. There is only one truth.
But there are many different values. If we can’t sway each other’s positions, that points to a value difference.
“Value difference” is often used as a cop-out. How did our terminal values come to be so different, anyway? If I’m extremely selfish and you’re extremely selfish, we will likely have very different values, but if we are both altruistic, our values are combinations of values of all the other people in the world, so they should be pretty similar. For example, if I think society should be organized like an anthill and you think it should be organized like a pool of sharks (to borrow Ken Binmore’s example), this is a factual disagreement about what would make everyone better off, not a value disagreement.
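One way to make the middle step precise (my own notation and simplification, nothing here is Binmore’s formalism): suppose each of us ranks a social arrangement $x$ by a weighted sum of everyone’s welfare. Then

$$V_A(x) = \sum_i w_i^A\, u_i(x), \qquad V_B(x) = \sum_i w_i^B\, u_i(x),$$

$$\lvert V_A(x) - V_B(x)\rvert \;\le\; \Big(\max_i \lvert u_i(x)\rvert\Big)\sum_i \lvert w_i^A - w_i^B\rvert .$$

If we are both roughly impartial, the weight vectors are close and the bound is small, so any real disagreement about the anthill versus the shark pool has to come from differing estimates of the $u_i(x)$, that is, from beliefs about facts rather than from terminal values.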
Anti-PC? Good name, I will use it.
Asserting group inequalities means speaking more ill of one group of people but less ill of another, so doesn’t that cancel out?
I’m not talking about empirical claims, I’m talking about affect. I have zero problem with talking about group inequalities, in themselves.
If only a failure to sway each other always pointed to a value difference. Values are hard to see, and so easy to rationalize.