I wouldn’t want to join a community of people, mostly men, describing women as an undifferentiated mass distinguished by an array of mental flaws to be exploited for personal gain—that’s a sign of some hardcore irrationality, to make claims which are that self-aggrandizing and that easily refuted.
I wouldn’t want to join a community that did those things, or which uncritically praised a community that did. Still, I think that even if the seduction community were an undifferentiated mass of irrationality, it would be worth discussing here for the same reasons that we talk about religion and astrology.
Personally, when I see people being successful in a certain domain (or believing that they are successful), yet holding some obviously irrational beliefs, my interest is piqued. If these people are successful, is that despite their irrational beliefs, or could it be because of those beliefs? Could it be that some of the beliefs of PUAs work even though they are not true?
I don’t understand why other rationalists wouldn’t be wondering the same things, even when confronted with the negative aspects of pickup. As I’ve argued in the past here and here, pickup relates to many rationality topics:
Instrumental rationality (how to succeed according to one’s criteria for success)
The availability heuristic (the theories of PUAs are based on the women they most commonly encounter, and the most salient experiences with those women; the opinions of outsiders on the seduction community are also subject to the availability heuristic)
Underdetermination of theory by evidence, and the problem of induction; how much ad hoc support we should allow a theory about social interaction before we trash it
Self-fulfilling prophecies (to what extent believing certain notions about oneself makes them come true in social interaction; how believing certain PUA theories and acting on them might produce experiences that appear to confirm those theories)
Empiricism (PUAs advocate “field testing” ideas about how to interact with women)
Kuhnian paradigms (the theories of PUAs have gone through several Kuhnian revolutions, and PUAs tend to interpret their experiences within the reigning paradigms in the community)
Lakatos’ notions of progressive vs. degenerative research programs (to what extent do the theories of PUAs allow them to make predictions of novel facts? How progressive is the research program of PUAs?)
Demarcation criterion (some PUAs claim that their teachings are based on “science”… to what extent is pickup scientific?)
Naive realism vs. instrumentalism (many practices of PUAs work, but to what extent are the theories behind them actually true?)
Heuristics and problem-solving with limited information (how does one solve the problem of a lack of social knowledge, given only one’s own anecdotal observations and those of others? What threshold of evidence should you accept for a certain piece of advice before you act on it?)
The psychology of influence and persuasion, status, and signalling (revealing biases in how people perceive each other)
Perhaps I’ve been committing the “typical mind fallacy” by assuming that because these links between pickup and rationality are obvious to me, they are also obvious to others.
We appear to have a topic that has a lot of connections to rationality, some of which have been discussed here with a lot of approval, judging by upvotes. There are also people who discuss this topic in a non-rigorous way that causes feelings of repugnance in many observers. In my view, the relevance of pickup to rationality and the philosophy of science is so great that we would be throwing the baby out with the bathwater to discourage discussion of the topic. The solution is to discuss this topic in a rigorous way, with the connections to rationality made clear. When the topic is discussed in a non-rigorous and repugnance-causing way, the appropriate recourse is the reply button and the downvote button.
(Building on this earlier comment of mine.)
I appreciate your list of connections between PUA and rationality, because it’s gotten me closer to working out why I don’t see PUA as having a special connection to rationality.
I think it’s because I find the connections you suggest generic. Most of them, I reckon, would hold for any subculture with a sufficiently active truth-seeking element, such as (picking a few examples out of thin air, so they may not be good examples, but I hope they communicate my point) poker, art valuation, or trading card gaming. Though I’d guess that each of these topics has links to rationality like those you mention, in-depth discussion of them on LW would tend to feel off-topic to me.
This doesn’t really relate to the more typical complaints about PUA that I see upthread—i.e. that some of the discussion of it grosses people out, and that it’s inaccurately reductive—but I thought I’d add my two cents to convey my mental context for my last reply.
Thanks for giving additional context. I think you are correct that we have a difference of opinion. Personally, I would be absolutely thrilled to see a discussion on LessWrong of how poker, art valuation, or trading card gaming relate to rationality. Would these subjects not interest you, or is your worry that discussion of them would stray too far off-topic?
I suppose delving very deep into those subjects could also feel off-topic to me if the connection to rationality was lost, yet I would be comfortable with whatever level of depth people more knowledgeable than me on those subjects felt was necessary to elucidate the links to rationality. (And if other people were making truth-claims about the content of those disciplines, and those people often displayed bias or misunderstanding in either a laudatory or critical direction, I would be comfortable seeing those truth-claims evaluated. Even if debate about the merits or nature of a subject gets away from the direct relationship of that subject to rationality, that debate itself may demonstrate applications of rationality to a controversial subject, which I like to see.)
Your mileage may vary, but I find that I learn in a “hands on” way, and attempting to apply rationality to a practical problem helps me attain a more abstract understanding. See the notion of Contract to Expand, where sometimes solving a specific sub-problem can be helpful for solving a larger, more general problem.
I would consider any subculture or discipline with a “sufficiently active truth-seeking element” to be excellent LessWrong fodder, as long as the discussion (a) was connected to rationality, or (b) addressed the nature of the subcultures and disciplines so that readers can know how they work well enough to evaluate their potential relationship to rationality (particularly if there is disagreement on that nature or relationship). Anyone else have feelings either way?
Would these subjects not interest you, or is your worry that discussion of them would stray too far off-topic?
The second, I think. (I feel about the same for topics in which I have shown interest, so it’s not about my level of interest.)
If I wanted to force a conversation about a particular subculture or hot-button topic not obviously related to rationality, and I were called out on it, I could probably contrive a defensible list of ways my desired subject relates to rationality. For example, I took your list of bullet points for PUA and adapted most of them to race and IQ (a subject I’m more familiar with):
Instrumental rationality (IQ relates to indicators of life success, so one can argue about the degree to which IQ is a measure of instrumental rationality)
The availability heuristic (use of convenience sampling when testing psychological subjects; availability bias as a source of racial stereotypes about IQ)
Underdetermination of theory by evidence, and the problem of induction; how much ad hoc support we should allow a theory about race differences in IQ before we trash it
Self-fulfilling prophecies (stereotype threat and other situations where a white or black person’s beliefs influence performance on IQ tests; how the social impact of race and IQ theories might perpetuate the IQ gaps those theories try to explain)
Empiricism (psychologists involved in the argument do their best to present themselves as grounded in the facts, and the extent to which they succeed is a possible jumping-off point for discussion)
Kuhnian paradigms (historical shift of the IQ argument from ‘it’s in the genes’ to ‘it’s all the environment’ to an uncomfortable, hedging mixture of the two)
Lakatos’ notions of progressive vs. degenerative research programs (Nuff said)
Demarcation criterion (is the argument about race and IQ even a scientific one? Which contributions to it should be considered scientific?)
Naive realism vs. instrumentalism (psychologists’ obsession with defining ‘validity,’ in all its forms, often touches on this)
Heuristics and problem-solving with limited information (this is the kind of thing IQ tests try to test, but to what extent do they successfully do so? Do they do so without bias?)
In spite of the connections to rationality just listed, I’d expect a discussion of race and IQ to flirt with the failure modes of (1) adversarial nitpicking of minutiae and/or (2) arguing about the politics surrounding the topic and not the topic itself. The first time I walked into this argument on Less Wrong, I felt I ended up in the first failure mode. When it came up again in this month’s Open Thread, the poster starting the discussion seemed to want to discuss the politics of it, and I didn’t see the resulting subthread as casting new light on rationality.
I say this even though threads like that do often have people making and evaluating truth-claims; I just don’t count that kind of thing as ‘real’ rationality unless it could plausibly make a rationality lightbulb go off in my head (‘Ooooohhh, I never got Eliezer’s exposition of causal screening before, but this example totally makes it obvious to me’ - stuff like that). I can find intelligent arguments about various subcultures and issues elsewhere on the internet—I expect something else, or maybe something more specific, from LW.
This doesn’t mean I don’t/can’t/won’t learn about rationality in a hands-on way—applying what you learn is how you know you’ve learned it. Still, on LW I expect discussions presented as ‘here is a general point about rationality, demonstrated with a few little examples from my pet issue’ to stay on topic more effectively than if they’re presented as ‘here is my pet issue with a side serving of rationality,’ and I expect that whether or not I can draw abstract connections between my pet topic and rationality.
Hmmm. I’ve written a lot here because I don’t feel like I’m adequately communicating what I mean. I suppose what I’m thinking is something like a generalization of ‘Politics is the Mind-Killer’ - even things tangentially related to rationality can mind-kill, so I’m wary about what I label on-topic. Quite likely more wary than whoever’s reading this.
On a side note, I tried profiling (albeit crudely) a thread about a hot topic to find out how well it focused on relevant data and the elements of rationality discussed on LW. I picked this month’s Open Thread’s subthread about race and IQ because it wasn’t very long and I posted in it, so I had some idea how it progressed. On each comment I ticked off whether it
talked about actual evidence about race and/or IQ
made a testable prediction about race and IQ
referred to specific Less Wrongian heuristics or concepts that I recognized, like ‘applause lights’ or ‘privileging the hypothesis’ (I didn’t count generic pro-truth statements like ‘freedom to look for the truth is sacrosanct’)
with the rationale that comments that did any of these were more likely to be rationality-relevant than those that didn’t. (I also tried ticking off which comments were mostly focused on politics and which weren’t, but I couldn’t do that quickly and fairly, so I didn’t bother.) Here’s my data for anyone who wants to check my work.
The subthread has 74 comments: 13 mentioned evidence, 3 made a testable prediction, and 10 explicitly made connections to LWish heuristics and catchphrases; 50 did none of these. (The first three counts sum to 26 rather than 24 because some comments fell into more than one category.) Those 50 comments had a mean score of 2.7; the 24 comments that mentioned data/predictions/rationality tropes had a mean score of 2.4.
That suggests that not only were the overtly rationality-ish comments outnumbered, but they scored more poorly. I wouldn’t want to generalize from this quick little survey, but I do wonder whether the same trend would show up in arguments about feminism, PUA, global warming, 9/11, or other subjects that can be controversial here.
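In case it helps anyone checking my work, here’s a minimal sketch in Python of the bookkeeping involved (the comment data below is made up for illustration; the real tallying was done by hand against the subthread):

```python
# Crude thread profile: each comment gets a karma score and a (possibly
# empty) set of categories; comments with no category are the 'nones'.
# Illustrative data only, not the actual subthread.
comments = [
    (17, set()),                      # e.g. a question that starts the thread
    (3, {"evidence"}),
    (1, {"evidence", "prediction"}),  # one comment can tick several boxes
    (2, {"lw_heuristic"}),
    (0, set()),
]

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

nones = [score for score, cats in comments if not cats]
categorized = [score for score, cats in comments if cats]

print(f"{len(nones)} 'none' comments, mean score {mean(nones):.1f}")
print(f"{len(categorized)} categorized comments, mean score {mean(categorized):.1f}")
```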
Regarding the ratios of comment types, have you compared them at all to subthreads about other topics, possibly less controversial ones? Without some idea of the usual level for an equivalent LW conversation about a less controversial topic, it is very hard to evaluate this data.
I’m not sure, incidentally, that I agree with your breakdown of comments. For example, you include the comment that started off the conversation as falling in none of the categories. Even just asking a worthwhile question should be worth something. And since this comment was at +17, even just removing it substantially alters the average score of the 50 nones: it goes from 2.7 to about 2.4 ((50 × 2.7 − 17) / 49 ≈ 2.4). This also illustrates another issue: if a single comment can cause that sort of change, then this sort of data doesn’t seem statistically significant. Frankly, after realizing that, I’m not that inclined to check the rest of your data, since that one removal already puts both groups at an average of 2.4.
The fact that this comment itself would seemingly be put into the ‘none’ category, even though I’ve made criticisms of the interpretation of evidence, suggests that your breakdown isn’t great. (Please forgive the mild amount of self-reference.)
Regarding the ratios of comment types, have you compared them at all to subthreads about other topics, possibly less controversial ones? Without some idea of the usual level for an equivalent LW conversation about a less controversial topic, it is very hard to evaluate this data.
It would be interesting to see what the patterns would be like in other subthreads. I sampled only the one subthread because I was curious about variation among comments within the single subthread and not variation between subthreads, so I figured one subthread would be enough.
I’m not sure, incidentally, that I agree with your breakdown of comments.
It’s certainly not perfect! I would have liked to use a finer, more sensitive breakdown, but it would have been difficult to apply. I tried to invent the simplest breakdown I could think of that wouldn’t need much subjective judgment and could still approximate the types of discussion HughRistik had in mind.
For example, you include the comment that started off the conversation as falling in none of the categories. Even just asking a worthwhile question should be worth something.
That’s true—my list of categories is conservative, so some well-regarded comments ended up in no category simply because they didn’t discuss data, predictions, or heuristics. That said, although my category list wasn’t exhaustive, I still expected about as many comments to fit a category as to fit none—I was genuinely surprised to get a 2⁄3 to 1⁄3 split.
This also illustrates another issue: if a single comment can cause that sort of change, then this sort of data doesn’t seem statistically significant.
Fair point. The distribution of comment scores in that subthread is very skewed, with a few outliers.
If I drop the four high scorers on the far tail I can recalculate the averages for the ‘nones’ versus the non-‘none’ comments without the influence of those outliers. The 47 remaining nones’ scores have mean 2.0 and the 23 remaining non-nones have a mean score of 1.8; the gap shrinks, but it’s still there.
If I did a statistical test of the difference, it likely would be statistically insignificant (and it’d likely have been insignificant even before dropping the outliers), but that’s OK, because I don’t mean to generalize from that one subthread’s comments to the population of all comments.
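If I did want to run such a test, it might look something like this sketch (hypothetical score lists stand in for the real karma values, which would have to be transcribed from the subthread; I’d reach for a Mann-Whitney U test rather than a t-test, given how skewed the scores are):

```python
from scipy.stats import mannwhitneyu

# Hypothetical karma scores for the two groups, for illustration only.
none_scores = [17, 9, 5, 3, 2, 2, 1, 1, 0]
non_none_scores = [16, 4, 3, 2, 1, 1, 0, 0]

def drop_top(scores, k):
    """Drop the k highest scores, mimicking the outlier trimming above."""
    return sorted(scores)[: len(scores) - k]

# Trim the far tail before testing, as in the recalculation above.
trimmed_nones = drop_top(none_scores, 1)
trimmed_non_nones = drop_top(non_none_scores, 1)

# A rank-based test is more defensible than a t-test on skewed karma data.
stat, p = mannwhitneyu(trimmed_nones, trimmed_non_nones, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.2f}")
```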
The fact that this comment itself would seemingly be put into the ‘none’ category, even though I’ve made criticisms of the interpretation of evidence, suggests that your breakdown isn’t great.
Yes—if I planned to apply the breakdown to other subthreads, I’d add a category for comments that criticize or discuss evidence mentioned by someone else. Fortunately, it shouldn’t make much difference for the particular subthread I picked—I don’t remember any of the comments making detailed criticisms of other people’s evidence.
Piling on to this excellent comment, I have a more specific interest in “how scientific is NLP”.
That is indeed a good question that I don’t know the answer to. Though it has been my impression that some of the ideas in NLP are parasitic on mainstream psychology. For example, “anchoring” seems related to classical conditioning.