Have you noticed that the people who are remembered as making the most accurate and useful observations about PUA are the same people who didn’t cause the disgust reaction?
Also, remember that every post we promote is another possible first impression. I wouldn’t want to join a community of people, mostly men, describing Women as an undifferentiated mass distinguished by an array of mental flaws to be exploited for personal gain—that’s a sign of some hardcore irrationality, to make claims which are that self-aggrandizing and that easily refuted. (Easily refuted because they are hasty generalizations, I hastily add.)
Edit: I’ll admit that I’m exaggerating the degree of bad rhetoric displayed here during the whole PUA flamewar, but the point about the hasty generalizations shouldn’t be ignored—I know too many people who don’t fit the stereotypes promoted in those discussions to view these stereotypes sympathetically.
I wouldn’t want to join a community of people, mostly men, describing Women as an undifferentiated mass distinguished by an array of mental flaws to be exploited for personal gain—that’s a sign of some hardcore irrationality, to make claims which are that self-aggrandizing and that easily refuted.
I wouldn’t want to join a community that did those things, or which uncritically praised a community that did. Still, I think that even if the seduction community were an undifferentiated mass of irrationality, it would be worth discussing here for the same reasons that we talk about religion and astrology.
Personally, when I see people being successful in a certain domain (or believing that they are successful), yet holding some obviously irrational beliefs, my interest is piqued. If these people are successful, is that despite their irrational beliefs, or could it be because of those beliefs? Could it be that some of the beliefs of PUAs work even though they are not true?
I don’t understand why other rationalists wouldn’t be wondering the same things, even when confronted with the negative aspects of pickup. As I’ve argued in the past here and here, pickup relates to many rationality topics:
Instrumental rationality (how to succeed according to one’s criteria for success)
The availability heuristic (the theories of PUAs are based on the women they most commonly encounter, and the most salient experiences with those women; the opinions of outsiders on the seduction community are also subject to the availability heuristic)
Underdetermination of theory by evidence, and the problem of induction; how much ad hoc support we should allow a theory about social interaction before we trash it
Self-fulfilling prophecies (to what extent believing certain notions about oneself makes them come true in social interaction; how believing certain PUA theories and acting on them might produce experiences that appear to confirm those theories)
Empiricism (PUAs advocate “field testing” ideas about how to interact with women)
Kuhnian paradigms (the theories of PUAs have gone through several Kuhnian revolutions, and PUAs tend to interpret their experiences within the reigning paradigms in the community)
Lakatos’ notions of progressive vs. degenerative research programs (to what extent do the theories of PUAs allow them to make predictions of novel facts? How progressive is the research program of PUAs?)
Demarcation criterion (some PUAs claim that their teachings are based on “science”… to what extent is pickup scientific?)
Naive realism vs. instrumentalism (many practices of PUAs work, but to what extent are the theories behind them actually true?)
Heuristics and problem-solving with limited information (how does one solve the problem of a lack of social knowledge, given only one’s own anecdotal observations and those of others? What threshold of evidence should you accept for a certain piece of advice before you act on it?)
The psychology of influence and persuasion, status, and signalling (revealing biases in how people perceive each other)
Perhaps I’ve been committing the “typical mind fallacy” by assuming that just because these links between pickup and rationality are obvious to me, that they are also obvious to others.
We appear to have a topic that has a lot of connections to rationality, some of which have been discussed here with a lot of approval, judging by upvotes. There are also people who discuss this topic in a non-rigorous way that causes feelings of repugnance in many observers. In my view, the relevance of pickup to rationality and the philosophy of science is so great that we would be throwing the baby out with the bathwater to discourage discussion of the topic. The solution is to discuss this topic in a rigorous way, with the connections to rationality made clear. When the topic is discussed in a non-rigorous and repugnance-causing way, the appropriate recourse is the reply button and the downvote button.
(Building on this earlier comment of mine.)
I appreciate your list of connections between PUA and rationality, because it’s gotten me closer to working out why I don’t see PUA as having a special connection to rationality.
I think it’s because I find the connections you suggest generic. Most of them, I reckon, would hold for any subculture with a sufficiently active truth-seeking element, such as (picking a few examples out of thin air, so they may not be good examples, but I hope they communicate my point) poker, art valuation, or trading card gaming. Though I’d guess that each of these topics has links to rationality like those you mention, in-depth discussion of them on LW would tend to feel off-topic to me.
This doesn’t really relate to the more typical complaints about PUA that I see upthread—i.e. that some of the discussion of it grosses people out, and that it’s inaccurately reductive—but I thought I’d add my two cents to convey my mental context for my last reply.
Thanks for giving additional context. I think you are correct that we have a difference of opinion. Personally, I would be absolutely thrilled to see a discussion on LessWrong of how poker, art valuation, or trading card gaming relate to rationality. Would these subjects not interest you, or is your worry that discussion of them would stray off-topic to a harmful degree?
I suppose delving very deep into those subjects could also feel off-topic to me if the connection to rationality were lost, yet I would be comfortable with whatever level of depth people more knowledgeable than me on those subjects felt was necessary to elucidate the links to rationality. (And if other people were making truth-claims about the content of those disciplines, and those people often displayed bias or misunderstanding in either a laudatory or critical direction, I would be comfortable seeing those truth-claims evaluated. Even if debate about the merits or nature of a subject gets away from the direct relationship of that subject to rationality, that debate itself may demonstrate applications of rationality to a controversial subject, which I like to see.)
Your mileage may vary, but I find that I learn in a “hands on” way, and attempting to apply rationality to a practical problem helps me attain a more abstract understanding. See the notion of Contract to Expand, where sometimes solving a specific sub-problem can be helpful for solving a larger, more general problem.
I would consider any subculture or discipline with a “sufficiently active truth-seeking element” to be excellent LessWrong fodder, as long as the discussion (a) was connected to rationality, or (b) addressed the nature of the subcultures and disciplines so that readers can know how they work well enough to evaluate their potential relationship to rationality (particularly if there is disagreement on that nature or relationship). Anyone else have feelings either way?
Would these subjects not interest you, or is your worry that discussion of them would stray off-topic to a harmful degree?
The second I think. (I feel about the same for topics in which I have shown interest, so it’s not about my level of interest.)
If I wanted to force a conversation about a particular subculture or hot-button topic not obviously related to rationality, and I were called out on it, I could probably contrive a defensible list of ways my desired subject relates to rationality. For example, I took your list of bullet points for PUA and adapted most of them to race and IQ (a subject I’m more familiar with):
Instrumental rationality (IQ relates to indicators of life success, so one can argue about the degree to which IQ is a measure of instrumental rationality)
The availability heuristic (use of convenience sampling when testing psychological subjects; availability bias as a source of racial stereotypes about IQ)
Underdetermination of theory by evidence, and the problem of induction; how much ad hoc support we should allow a theory about race differences in IQ before we trash it
Self-fulfilling prophecies (stereotype threat and other situations where a white or black person’s beliefs influence performance on IQ tests; how the social impact of race and IQ theories might perpetuate the IQ gaps those theories try to explain)
Empiricism (psychologists involved in the argument do their best to present themselves as grounded in the facts, and the extent to which they succeed is a possible jumping-off point for discussion)
Kuhnian paradigms (historical shift of the IQ argument from ‘it’s in the genes’ to ‘it’s all the environment’ to an uncomfortable, hedging mixture of the two)
Lakatos’ notions of progressive vs. degenerative research programs (Nuff said)
Demarcation criterion (is the argument about race and IQ even a scientific one? Which contributions to it should be considered scientific?)
Naive realism vs. instrumentalism (psychologists’ obsession with defining ‘validity,’ in all its forms, often touches on this)
Heuristics and problem-solving with limited information (this is the kind of thing IQ tests try to test, but to what extent do they successfully do so? Do they do so without bias?)
In spite of the connections to rationality just listed, I’d expect a discussion of race and IQ to flirt with the failure modes of (1) adversarial nitpicking of minutiae and/or (2) arguing about the politics surrounding the topic and not the topic itself. The first time I walked into this argument on Less Wrong, I felt I ended up in the first failure mode. When it came up again in this month’s Open Thread, the poster starting the discussion seemed to want to discuss the politics of it, and I didn’t see the resulting subthread as casting new light on rationality.
I say this even though threads like that do often have people making and evaluating truth-claims; I just don’t count that kind of thing as ‘real’ rationality unless it could plausibly make a rationality lightbulb go off in my head (‘Ooooohhh, I never got Eliezer’s exposition of causal screening before, but this example totally makes it obvious to me’ - stuff like that). I can find intelligent arguments about various subcultures and issues elsewhere on the internet—I expect something else, or maybe something more specific, from LW.
This doesn’t mean I don’t/can’t/won’t learn about rationality in a hands on way—applying what you learn is how you know you’ve learned it. Still, on LW I expect discussions presented as ‘here is a general point about rationality, demonstrated with a few little examples from my pet issue’ to stay on topic more effectively than if they’re presented as ‘here is my pet issue with a side serving of rationality,’ and I expect that whether or not I can draw abstract connections between my pet topic and rationality.
Hmmm. I’ve written a lot here because I don’t feel like I’m adequately communicating what I mean. I suppose what I’m thinking is something like a generalization of ‘Politics is the Mind-Killer’ - even things tangentially related to rationality can mind-kill, so I’m wary about what I label on-topic. Quite likely more wary than whoever’s reading this.
On a side note, I tried profiling (albeit crudely) a thread about a hot topic to find out how well it focused on relevant data and the elements of rationality discussed on LW. I picked this month’s Open Thread’s subthread about race and IQ because it wasn’t very long and I posted in it, so I had some idea how it progressed. On each comment I ticked off whether it
talked about actual evidence about race and/or IQ
made a testable prediction about race and IQ
referred to specific Less Wrongian heuristics or concepts that I recognized, like ‘applause lights’ or ‘privileging the hypothesis’ (I didn’t count generic pro-truth statements like ‘freedom to look for the truth is sacrosanct’)
with the rationale that comments that did any of these were more likely to be rationality-relevant than those that didn’t. (I also tried ticking off which comments were mostly focused on politics and which weren’t, but I couldn’t do that quickly and fairly, so I didn’t bother.) Here’s my data for anyone who wants to check my work.
The subthread has 74 comments: 13 mentioned evidence, 3 made a testable prediction, 10 explicitly made connections to LWish heuristics and catchphrases, and 50 did none of these. Those 50 comments had a mean score of 2.7; the 24 comments that mentioned data/predictions/rationality tropes had a mean score of 2.4.
That suggests that not only were the overtly rationality-ish comments outnumbered, but they scored more poorly. I wouldn’t want to generalize from this quick little survey, but I do wonder whether the same trend would show up in arguments about feminism, PUA, global warming, 9/11, or other subjects that can be controversial here.
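For concreteness, here is a minimal sketch of the tally described above. The comment records are hypothetical stand-ins (the actual per-comment data is behind the link above), and the box names are just labels for the three checkboxes.

```python
# Minimal sketch of the comment-profiling tally described above.
# The records are hypothetical stand-ins, not the actual subthread data.
from statistics import mean

# Each comment: (karma score, set of boxes it ticked).
# Boxes: "evidence", "prediction", "heuristic"; an empty set means "none".
comments = [
    (17, set()),                      # e.g. the question that opened the thread
    (3, {"evidence"}),
    (1, {"evidence", "prediction"}),  # one comment can tick several boxes
    (2, {"heuristic"}),
    (0, set()),
]

none_scores = [score for score, boxes in comments if not boxes]
other_scores = [score for score, boxes in comments if boxes]

print(f"{len(none_scores)} 'none' comments, mean score {mean(none_scores):.1f}")
print(f"{len(other_scores)} categorized comments, mean score {mean(other_scores):.1f}")
```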
Regarding the ratios of comment types, have you compared that at all to subthreads about other topics, possibly less controversial ones? Without some idea of the usual level for an equivalent LW conversation about a less controversial topic, it is very hard to evaluate this data.
I’m not sure, incidentally, that I agree with your breakdown of comments. For example, you include the comment that started off the conversation as in none of the categories. Even just asking a worthwhile question should be worth something. And since this comment was at +17, even just by removing it we already substantially alter the average score of the 50 nones. The score goes from 2.7 to 2.4. This also illustrates another issue which is that if even a single comment can cause that sort of change then it doesn’t seem like this sort of data is statistically significant. Frankly, after realizing that, I’m not that inclined to check the rest of your data, since that already puts both groups at an average of 2.4.
The fact that it seems like this comment itself would be put into the none category, when I’ve made criticisms of the interpretation of evidence, suggests that your breakdown isn’t great. (Please forgive the mild amount of self-reference.)
Regarding the ratios of comment types, have you compared that at all to subthreads about other topics, possibly less controversial ones? Without some idea of the usual level for an equivalent LW conversation about a less controversial topic, it is very hard to evaluate this data.
It would be interesting to see what the patterns would be like in other subthreads. I sampled only the one subthread because I was curious about variation among comments within the single subthread and not variation between subthreads, so I figured one subthread would be enough.
I’m not sure, incidentally, that I agree with your breakdown of comments.
It’s certainly not perfect! I would have liked to use a finer and more sensitive breakdown, but it would have become difficult to apply. I tried to invent the simplest breakdown I could think of that wouldn’t need much subjective judgment, and could approximate the types of discussion HughRistik had in mind.
For example, you include the comment that started off the conversation as in none of the categories. Even just asking a worthwhile question should be worth something.
That’s true—my list of categories is conservative, so some well-regarded comments that didn’t discuss data, predictions, or heuristics nonetheless didn’t end up in a category. That said, although my category list wasn’t exhaustive, I did still expect about as many comments to fit a category as there were comments that fitted none—I was genuinely surprised to get a 2⁄3 to 1⁄3 split.
This also illustrates another issue which is that if even a single comment can cause that sort of change then it doesn’t seem like this sort of data is statistically significant.
Fair point. The distribution of comment scores in that subthread is very skewed, with a few outliers.
If I drop the four high scorers on the far tail I can recalculate the averages for the ‘nones’ versus the non-‘none’ comments without the influence of those outliers. The 47 remaining nones’ scores have mean 2.0 and the 23 remaining non-nones have a mean score of 1.8; the gap shrinks, but it’s still there.
If I did a statistical test of the difference, it likely would be statistically insignificant (and it’d likely have been insignificant even before dropping the outliers) - but that’s OK, because I don’t mean to generalize from that one subthread’s comments to the population of all comments.
The fact that it seems like this comment itself would be put into the none category, when I’ve made criticisms of the interpretation of evidence, suggests that your breakdown isn’t great.
Yes—if I planned to apply the breakdown to other subthreads, I’d add a category for comments that criticize or discuss evidence mentioned by someone else. Fortunately, it shouldn’t make much difference for the particular subthread I picked—I don’t remember any of the comments making detailed criticisms of other people’s evidence.
Piling on to this excellent comment, I have a more specific interest in “how scientific is NLP”.
That is indeed a good question that I don’t know the answer to, though it has been my impression that some of the ideas in NLP are parasitic on mainstream psychology. For example, “anchoring” seems related to classical conditioning.
I think it is a species of logical rudeness to judge an idea by its worst advocates. I’m sure any atheists who have been reminded that Hitler was an atheist can sympathize.
Neil Strauss (author of The Game) recently made some good points:
When I wrote The Game and went on to do the press, I told myself that I would neither DEFEND nor ATTACK the seduction community. I’d simply present the truth as it was, the good and the bad.
However, the more interviews I did, the more I realized I was going to have to defend something: The right of guys to learn this.
Anyone who’s ever seen the front page of Cosmopolitan or Sex in the City knows that self-help, sexual improvement, dating advice, and attraction skills is an accepted rite of passage for women.
There is no equivalent for men: We are simply shown images of women we are supposed to desire in the pages of Maxim and Playboy, then not told what to do about it.
People get tutored for everything else in life. If you can’t do math, you get a tutor. Sex in the City was women getting tutored in what to do with different types of men. I think the coolest thing someone could do is recognize their weakness and work to improve it.
When guys ask me questions, it’s usually not about what to do to trick a woman into bed — it’s about how to get over heartbreak, whether Alexander Technique will improve their posture, whether improv classes will make them more spontaneous, what to do about “this one special girl,” how to dress, and so on.
Though some of the “gurus” may have their issues, 99.9 percent of the guys I met learning this are the NICE GUYS. They are the guys women always say they are looking for, yet at the same time are never attracted to.
Usually, the true assholes, jerks, and misogynists are too cocky and arrogant to even consider that they might need to “learn” how to interact with women.
So anyone who’s going to get on a bully pulpit and demonize men for trying to improve themselves is not a friend of mine.
And any pundit who’s going to criticize men for manipulation when that’s exactly what their show producers regularly do to their guests is not a friend of mine.
I think we’re talking past each other. I’m not talking about judging the ideas, I’m talking about judging the worst advocates. Those people are the ones who cause the revulsion, and we as a community need to deny them the spotlight when they act up until they learn better. Otherwise the community comes off as not being a rationalist community, and aspiring rationalists who might be interested walk away.
I don’t even think we’ve been doing a bad job overall. But it’s a job we’re doing, not something that happens automatically.
And this is where differing perceptions are probably causing issues. I haven’t seen any posts here from anyone who comes anywhere near the worst advocates, but then I’ve hung around places where these topics are discussed much more confrontationally. I’ve seen nothing I deem worthy of censorship from the advocates, even the ‘worst’, but I have seen examples of what I view as completely unacceptable over-reaction, revulsion and guilt-tripping from a small but vocal minority who claim offense.
I am very unsympathetic by nature to people who claim the right to block any conversation that they personally find offensive. My natural reaction to such people is to become more offensive, which, while it has some merit from a game-theoretic standpoint, is generally not conducive to social decorum, so I make an effort to restrain such impulses. So for me, those people are the ones who cause revulsion, and we as a community need to deny them the spotlight when they act up until they learn better. Otherwise the community comes off as not being a rationalist community, and aspiring rationalists who might be interested walk away. So far people who share your perceptions seem to carry the support of the majority, but I think there is a significant minority who share my perceptions.
I am very unsympathetic by nature to people who claim the right to block any conversation that they personally find offensive.
Despite some of the rhetoric flying around at the time, I don’t think anyone involved made that sort of claim. It was rather more like “I find this sort of thing offensive” and “Maybe we should listen to him, since lots of people probably would be turned away by that sort of thing, and the offensive bits aren’t really necessary.”
See Eliezer’s contribution, Of Exclusionary Speech and Gender Politics. Nutshell: We should avoid doing things that make people feel excluded, and that includes being sensitive and not being all feministy. So basically we want both of the potentially-interested groups you’ve identified to stay.
ETA: Surely I’ve overstated my case. Eliezer did suggest that he didn’t think it would be a problem to ban PUA if it bothered people; the main idea is that PUA isn’t that important of a topic in the grand scheme of things, so whatever.
It’s not an either-or proposition, I think. I’ll freely concede that I haven’t been particularly sensitive to those sharing your revulsion for political correctness*, but it would be a mistake to offend either group to flatter the other. It’s possible—it’s even been done here—to hold these discussions in a way which is fair to both sides.
It’s just hard. Which is why it’s usually a bad idea to go there.
* I apologize if my terminology is incorrect.
It’s just hard. Which is why it’s usually a bad idea to go there.
Agree with first quoted sentence. Disagree with second one.
In my view, LessWrong should be a place where we rationally attempt to discuss subjects that would be too controversial to discuss anywhere else. On LessWrong, we can hold arguments in such discussions to higher standards of scrutiny than anywhere else.
I don’t agree with the “it’s hard, so we should give up” approach to discussing controversial subjects on LessWrong. Controversial, mind-killing subjects are exactly where rationalist scrutiny is most needed.
I don’t agree with the “it’s hard, so we should give up” approach to discussing controversial subjects on LessWrong. Controversial, mind-killing subjects are exactly where rationalist scrutiny is most needed.
Here’s a potential conflict in our views of LW’s purpose. I think of it as being about discussing rationality, and things that touch directly on rationality and being rational. In that case discussing controversial, mind-killing subjects is only relevant inasmuch as they cast light on rationality—they’re not inherently interesting.
I’ve posted here before about race/IQ and global warming, and for both of those I’ve felt as if I was covering territory that’s basically off-topic. This didn’t stop me from posting about them, or make me feel bad about it, but I did feel that if I had picked arguments about those topics just because I could, that wouldn’t have suited LW’s purpose. I would avoid writing a top-level post about subjects like that unless I thought it was a good way to make a compelling, more general point about rationality—otherwise I’d likely just be axe-grinding.
To me, it seems obvious that there are a lot of links between pickup and rationality (both positive and negative). It’s occurred to me that perhaps I’ve been over-estimating the obviousness of those links to others who don’t have the same background in the subject matter, so I’ve tried to sketch out a bunch of them in my reply to RobinZ.
I’m down with a “one does not simply walk into PUA” attitude. I apologize for not saying so.
We may need a category of “this is too hard for us now”, with the possibility left open that as more of us get better at rationality, more difficult topics can be addressed well.
Your terminology is fine. The asymmetry that disturbs me is that while ‘political correctness’ annoys the hell out of me, I’m not demanding that it be a banned topic of conversation to avoid offending my delicate sensibilities. I don’t consider the causing of offense by particular views or topics to be a valid reason to avoid them. Note that this is different from discussing them in a deliberately offensive manner. I generally dislike an unnecessarily impolite or aggressive tone in discussions, but objecting to an entire topic is going too far in my opinion.
You’re correct. My “usually” was an attempt to acknowledge this—in retrospect, not a competent one.
If we are still having this discussion, could you link to a couple examples of the posts that you object to so much? I’m trying to figure out whether I missed something, or how similar my perceptions are to yours.
I can’t point to any specific examples.