If there’s a discussion about whether or not we should seek truth—at a site about rationality—that’s a discussion worth having. It’s not a side issue.
Like whpearson, I think we’re not all on one side or another. I’m pro-truth. I’m anti-PUA. I don’t know if I’m pro or anti status—there’s something about this community’s focus on it that unsettles me, but I certainly don’t disapprove of people choosing to do something high-status like become a millionaire.
You’re basically talking about the anti-PC cluster. It’s an interesting phenomenon. We’ve got instinctively and vehemently anti-PC people; we’ve got people trying to edge in the direction of “Hey, maybe we shouldn’t just do whatever we want”; and we’ve got people like me who are sort of on the dividing line, anti-PC in theory but willing to walk away and withdraw association from people who actually spew a lot of hate.
I think it’s an interesting issue because it deals with how we ought best to react to controversy. In the spirit of the comments I made to WrongBot, I don’t think we should fear to go there; I know my rationality isn’t that fragile and I doubt yours is either. (I’ve gotten my knee-jerk emotional responses burned out of me by people much ruder than anyone here.)
Anti-PC? Good name, I will use it.
I know my rationality isn’t that fragile and I doubt yours is either.
What troubles me is this: your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I’m pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I’d be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different “positions”. There is only one truth.
We have had some big successes already. (For example, most people here know better than to be confused by talk of “free will”.) I don’t think the anti-PC issue can be resolved by the drawn-out positional war we’re waging, because it isn’t actually making anyone change their opinions. It’s just a barrage of rationalizations from all sides. We need more insight. We need a breakthrough, or maybe several, that would point out the obviously correct way to think about anti-PC issues.
I don’t think using this name is a good idea. It has strong political connotations. And while I’m sure many here aren’t aware of them or are willing to ignore them, I fear this may not be true for potential new readers and posters, once the “camps” are firmly established.
I think it actually is a value difference, just like Blueberry said.
I do not want to participate in nastiness (loosely defined). It’s related to my inclination not to engage in malicious gossip. (Folks who know me personally consider it almost weird how uncomfortable I am with bashing people, singly or in groups.) It’s not my business to stop other people from doing it, but I just don’t want it as part of my life, because it’s corrosive and makes me unhappy.
To refine my own position a little bit—I’m happy to consider anti-PC issues as matters of fact, but I don’t like them connotationally, because I don’t like speaking ill of people when I can help it. For example, in a conversation with a friend: he says, “Don’t you know blacks have a higher crime rate than whites?” I say, “Sure, that’s true. But what do you want from me? You want me to say how much I hate my black neighbors? What do you want me to say?”
I don’t think that’s an issue that argument can dissuade me from; it’s my own preference.
This discussion prompted a connection in my mind that startled me a lot. Let’s put it in the open.
We’ve been discussing the moral status of identical copies. I gave a partial reductio some time ago, but wasn’t really satisfied. Now consider this: what about the welfare of your imperfect copies? Do UDT-like considerations make it provably rational to care more about creatures that share random features with you? Note that I say UDT-like considerations, not evolutionary considerations. Evolution doesn’t explain professional solidarity or feminism, because neither relies on heritable traits. Ganging up looks more like a Schelling coordination game, where you benefit from seeking allies based on some random quality as long as they also get the idea of allying with you based on the same quality. And it might work better if the quality is hard to change, like sex or race. Anyone willing to work out the math is welcome to do so...
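Taking up that invitation, here is a minimal Monte Carlo sketch of one possible formalization. The contest rules, the strength function, and every parameter below are my own assumptions for illustration, not anything established in this thread: agents carry a random binary feature; some play “ally with whoever shares my feature” while the rest go it alone; random pairs contest a unit of resource, and the side with the bigger coalition wins.

```python
import random

def run(n_agents=100, p_ally=0.5, n_contests=100_000, seed=0):
    """Compare mean payoffs of feature-based alliers vs. loners."""
    rng = random.Random(seed)
    feature = [rng.randrange(2) for _ in range(n_agents)]      # hard-to-change trait
    allies = [rng.random() < p_ally for _ in range(n_agents)]  # who plays the ally strategy
    payoff = [0.0] * n_agents

    def strength(i):
        # A loner fights alone; an allier is backed by every other
        # allier who happens to share the feature.
        if not allies[i]:
            return 1
        return 1 + sum(1 for j in range(n_agents)
                       if j != i and allies[j] and feature[j] == feature[i])

    # Coalition memberships are static, so compute each agent's strength once.
    power = [strength(i) for i in range(n_agents)]

    for _ in range(n_contests):
        a, b = rng.sample(range(n_agents), 2)  # two agents contest one unit of resource
        if power[a] > power[b]:
            payoff[a] += 1.0
        elif power[b] > power[a]:
            payoff[b] += 1.0
        else:
            payoff[a] += 0.5
            payoff[b] += 0.5

    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    return (mean([p for p, al in zip(payoff, allies) if al]),
            mean([p for p, al in zip(payoff, allies) if not al]))

for p_ally in (0.1, 0.5, 0.9):
    allier, loner = run(p_ally=p_ally)
    print(f"fraction allying={p_ally}: mean payoff allier={allier:.1f}, loner={loner:.1f}")
```

In this toy setup even a small fraction of feature-based alliers outscores the loners, and the advantage grows as more agents adopt the strategy, which is the self-reinforcing, Schelling-like structure the comment gestures at. It says nothing yet about whether UDT makes the strategy provably rational; that part of the math remains open.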
Asserting group inequalities means speaking more ill of one group of people but less ill of another, so doesn’t that cancel out?
I’m not talking about empirical claims; I’m talking about affect. I have zero problem with talking about group inequalities in themselves.
your position on the divisive issues is not exactly identical to mine, but I very much doubt that I could sway your position or you could sway mine. Therefore, I’m pretty confident that at least one of us fails at rationality when thinking about these issues. On the other hand, if we were talking about math or computing, I’d be pretty confident that a correct argument would actually be recognized as correct and there would be no room for different “positions”. There is only one truth.
But there are many different values. If we can’t sway each other’s positions, that points to a value difference.
If only it were always so. Value is hard to see, so easy to rationalize.
“Value difference” is often used as a cop-out. How did our terminal values come to be so different, anyway? If I’m extremely selfish and you’re extremely selfish, we will likely have very different values, but if we are both altruistic, our values are combinations of the values of all the other people in the world, so they should be pretty similar. For example, if I think society should be organized like an anthill and you think it should be organized like a pool of sharks (to borrow Ken Binmore’s example), this is a factual disagreement about what would make everyone better off, not a value disagreement.
Maybe it’s a political correctness principal component, but it seems to me that ideas about status should not be aligned with that component. If PUA had not been mentioned, and we were just discussing Johnstone, then I think those who are ignorant of PUA, whether pro- or anti-PC, would have less extreme reactions and often completely different ones.
If people’s opinions on one issue are polarizing their opinions on another, without agreement that they’re logically related, something is probably going wrong, and this is a cost to discussing the first issue. Also, cousin_it talked about the issues creating “camps.” That’s probably the mediating problem.
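To make the “principal component” metaphor concrete, here is a toy numpy sketch with entirely made-up data (nothing below is measured from real posters): five correlated hot-button opinions are driven by one latent factor, while a sixth “status” opinion is independent of it. The first principal component picks up the five and ignores the sixth, which is what it would mean for ideas about status not to be aligned with the component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)                       # one hidden "PC axis" score per person
culture = np.outer(latent, np.ones(5)) + 0.5 * rng.normal(size=(n, 5))
status = rng.normal(size=(n, 1))                  # opinions independent of that axis
opinions = np.hstack([culture, status])

centered = opinions - opinions.mean(axis=0)
# The first principal component is the top right-singular vector of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
print("PC1 loadings:", np.round(vt[0], 2))
# Expect roughly equal weights on the five correlated issues, near zero on "status".
```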
Like whpearson, I think we’re not all on one side or another. I’m pro-truth. I’m anti-PUA. I don’t know if I’m pro or anti status
I am presently amused by imagining forum members declaring themselves “anti-truth”.
Though I guess there is a spectrum: from sticking to discovering and exposing widely applicable truths no matter what, through some kind of Straussian stance where only the enlightened elites can be allowed access to dangerous truths and the general populace is to be fed noble lies, and on to even less coherent spheres of willful obscurantism and outright anti-intellectualism, where it seems that nobody is encouraged to pursue some topics.
For some reason, though, people who either explicitly believe that noble lies are necessary, or who have internalized a culture where they are built in, never seem to claim to be anti-truth.