The problem is not only that the topic runs afoul of moralistic biases, but also that it triggers failure in high-quality anti-bullshit heuristics commonly used by math/tech/science-savvy people. When you first hear about it, it’s exactly the kind of thing that will set off a well-calibrated bullshit detector. It promises impossible-seeming results that sound tailored to appeal to naive wishful thinking, and stories about its success sound like they just must be explicable by selection effects, self-delusions, false boasting, etc. So I definitely don’t blame people for excessive skepticism either.
A personal anecdote: I remember when I first came across ASF long ago, when I was around 20. I quickly dismissed it as bullshit, and it didn’t catch my attention again until several years later. In retrospect, this miscalculation should probably count among my major regrets in life, and not just for the failures with women that could have been prevented; it would likely have broadened my perspective on many other issues too, as it in fact did the next time around.
The problem is not only that the topic runs afoul of moralistic biases, but also that it triggers failure in high-quality anti-bullshit heuristics commonly used by math/tech/science-savvy people. When you first hear about it, it’s exactly the kind of thing that will set off a well-calibrated bullshit detector.
Very true. To me (and my bullshit detector), it sounds strikingly similar to any number of other self-help programs offered through the ages. In fact, it sounds to me a lot like Scientology—or at least the elevator-pitch version that they give to lower-level people before they start introducing them to the really strange stuff. And the endorsement you give it in your second paragraph sounds a lot like the way adherents of these kinds of absolutely-for-legal-reasons-definitely-not-a-cults will breathlessly talk about them to outsiders.
Now of course I realize that superficial similarity to snake oil doesn’t actually count as valid evidence. But I do think it’s fair to put PUA into the same reference class with them, and base my priors on that. Would you not agree?
Now of course I realize that superficial similarity to snake oil doesn’t actually count as valid evidence. But I do think it’s fair to put PUA into the same reference class with them, and base my priors on that. Would you not agree?
If you see PUA-like techniques being marketed without any additional knowledge about the matter, then yes, your snake oil/bullshit detector should hit the red end of the scale, and stay that way until some very strong evidence is presented otherwise. Thing is, when it comes to a certain subset of such techniques that pjeby, HughRistik, I, and various others have been discussing, there is actually such strong evidence. You just have to delve into the matter without any fatally blinding biases and see it.
That’s pretty much the point I’ve been hammering on. The problem is not that your prior is low, which it should be. The problem is that an accurate estimate of posteriors is obscured by very severe biases that push them downward.
What evidence? PUAs may use a lot of trial and error in developing their techniques, but do their tests count as valid experimental evidence, or just anecdotes? Where are their control groups? What is their null hypothesis? Was subject selection randomized? Were the data gathered and analyzed by independent parties?
Would you accept this kind of evidence if we were talking about physics? Would you accept this kind of evidence if we were evaluating someone who claimed to have psychic powers?
One of the reasons this topic is of interest to rationalists is that it is an example of an area where rational evidence is available but scientific evidence is in short supply. It is not in general rational to postpone judgment until scientific evidence is available. Learning how to make maximal use of rational evidence without succumbing to the pitfalls of cognitive biases is a topic of much interest to many LWers.
Yes, that’s true. I’ve been phrasing my more recent comments in terms of scientific evidence, because several people I’ve been butting heads with have made assertions about PUA that seemed to imply it had a scientific-level base of supporting evidence.
I’m still not sure, though, what the rational evidence is that I’m supposed to be updating on. Numerous other self-improvement programs make similar claims, based on similar reasoning, and offer similar anecdotal evidence. So I consider such evidence to be equally likely to appear regardless of whether PUA’s claims are true or false, leaving me with nothing but my priors.
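The Bayesian point here can be made concrete. In odds form, Bayes’ rule says posterior odds = prior odds × likelihood ratio; if the anecdotal evidence is equally likely whether or not the claims are true, its likelihood ratio is 1 and observing it leaves the odds exactly where they started. A minimal sketch (the prior of 0.1 and the likelihood ratios are illustrative numbers, not measurements of anything):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Illustrative prior: 1:9 odds (probability 0.1) that the claims are true.
prior = 1 / 9

# Evidence equally likely under both hypotheses has a likelihood ratio of 1,
# so observing it changes nothing.
print(posterior_odds(prior, 1.0))  # same 1:9 odds as before

# Evidence three times likelier if the claims are true would actually move us.
print(posterior_odds(prior, 3.0))  # odds rise to 1:3
```

This is the precise sense in which "equally likely either way" means the anecdotes carry zero evidential weight, however numerous they are.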
What evidence? PUAs may use a lot of trial and error in developing their techniques, but do their tests count as valid experimental evidence, or just anecdotes? Where are their control groups? What is their null hypothesis? Was subject selection randomized? Were the data gathered and analyzed by independent parties?
Well, as I said, if you study the discourse in the PUA community at its best in a non-biased and detached way, desensitized to the language and attitudes you might find instinctively off-putting, you’ll actually find the epistemological standards surprisingly high. But you just have to see that for yourself.
A good comparison for the PUA milieu would be a high-quality community of hobbyist amateurs who engage in some technical work with passion and enthusiasm. In their discussions, they probably won’t apply the same formal standards of discourse and evidence that are used in academic research and corporate R&D, but it’s nevertheless likely that they know what they’re talking about and their body of established knowledge is as reliable as any other—and even though there are no formal qualifications for joining, those bringing bullshit rather than insight will soon be identified and ostracized.
Now, if you don’t know at first sight whether you’re dealing with such an epistemologically healthy community, the first test would be to see how its main body of established knowledge conforms to your own experiences and observations. (In a non-biased way, of course, which is harder when it comes to the PUA stuff than some ordinary technical skill.) In my case, and not just mine, the result was a definite pass. The further test is to observe the actual manner of discourse practiced and its epistemological quality. Again, it’s harder to do when biased reactions to various signals of disrespectability are standing in the way.
Would you accept this kind of evidence if we were talking about physics?
Even in physics, not all evidence comes from reproducible experiments. Sometimes you just have to make the best out of observations gathered at random opportune moments, for example when it comes to unusual astronomical or geophysical events.
Would you accept this kind of evidence if we were evaluating someone who claimed to have psychic powers?
You’re biasing your skepticism way upward now. The correct level of initial skepticism with which to meet the PUA stuff is the skepticism you apply to people claiming to have solved difficult problems in a way consistent with the existing well-established scientific knowledge—not the much higher level appropriate for those whose claims contradict it.
The correct level of initial skepticism with which to meet the PUA stuff is the skepticism you apply to people claiming to have solved difficult problems in a way consistent with the existing well-established scientific knowledge—not the much higher level appropriate for those whose claims contradict it.
That’s a good point—the priors for PUA, though low, are nowhere near as low as for psychic phenomena. But that just means that you need a smaller amount of evidence to overcome those priors—it doesn’t lower the bar for what qualifies as valid evidence.
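The trade-off between prior and evidence can be put in numbers. On a log-odds scale, each independent piece of valid evidence adds its log likelihood ratio, so a higher prior means fewer pieces are needed to reach a given posterior; but a piece with likelihood ratio 1 contributes nothing no matter what the prior is. A hypothetical illustration (all numbers are made up for the sake of the arithmetic):

```python
import math

def pieces_needed(prior_p, target_p, likelihood_ratio):
    """Number of independent evidence pieces, each with the given likelihood
    ratio, needed to raise the prior probability to at least the target."""
    prior_odds = prior_p / (1 - prior_p)
    target_odds = target_p / (1 - target_p)
    return math.ceil(math.log(target_odds / prior_odds) / math.log(likelihood_ratio))

# Same quality of evidence (likelihood ratio 10 per piece), different priors:
print(pieces_needed(1e-2, 0.95, 10))   # modest prior (self-help claims) -> 4
print(pieces_needed(1e-20, 0.95, 10))  # extreme prior (psychic powers)  -> 22
```

Both hypotheses demand evidence of the same *quality*; the extreme one just demands far more of it.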