PUA mythology seems to me to have built-in safeguards against falsifiability. … As long as the PUAs are obtaining sex some of the time, or are claiming they are doing so, their theories aren’t falsifiable.
Note that this may be a feature, not a bug: a PUA with unwavering belief in their method will likely exude more confidence, regardless of the method employed.
I remember one pickup guru describing how when he was younger, he’d found this poem online that was supposed to be the perfect pickup line… and the first few times he used it, it was, because he utterly believed it would work. Later, he had to find other methods that allowed him to have a similar level of belief.
As has been mentioned elsewhere on LW, belief causes people to act differently—often in ways that would be difficult or impossible to convincingly fake if you lacked the belief. (e.g. microexpressions, muscle tension, and similar cues)
To put it another way, even the falsifiability of PUA theory is subject to testing: i.e., do falsifiable PUA theories work better or worse than unfalsifiable ones? If unfalsifiable ones produce better results, then it’s a feature, not a bug. ;-)
Only in the same sense that the placebo effect is a “feature” of evidence-based medicine.
It’s okay if evidence-based medicine gets a tiny, tiny additional boost from the placebo effect. It’s good, in fact.
However, when we are trying to figure out whether or not a treatment works, we have to be absolutely sure we have ruled out the placebo effect as the causative factor. If we don’t, we can never find out which are the good treatments that have a real effect plus a placebo effect, and which are the fake treatments that have only a placebo effect.
Only if it turned out that method absolutely, totally did not matter and only confidence in the method mattered would it be rational to abandon the search for the truth and settle for belief in an unfalsifiable confidence-booster. It seems far more likely to me that there will in fact be approach methods that work better than others, and that only by disentangling the confounding factor of confidence from the real effect could you figure out what the real effect was and how strong it was.
It seems far more likely to me that there will in fact be approach methods that work better than others, and that only by disentangling the confounding factor of confidence from the real effect could you figure out what the real effect was and how strong it was.
This really, really underestimates the number of confounding factors. For any given man, the useful piece of information is what method will work for him, for women that:
Would be happy with him, and
He would be happy with
(Where “with” is defined as whatever sort of relationship both are happy with.)
This is a lot of confounding factors, and it’s pretty central to the tradeoff described in this post: do you go for something that’s inoffensive to lots of people, but not very attractive to anyone, or something that’s actually offensive to most people, but very attractive to your target audience?
You can’t do group randomized controls with something where individuality actually does count.
This is especially true of PUA advice like, “be in the moment” and “say something that amuses you”. How would you test these bits of advice, for example, while holding all other variables unchanged? By their very definition, they’re going to produce different behavior virtually every time you act on them.
There are two classes of claim here we need to divide up, but they share a common problem. First the classes, then the problem.
The first class is claims that are simply unfalsifiable. If there is no way even in theory that a proposition could be confirmed or falsified then that proposition is simply vacuous. There is nothing to say about it except that rational agents should discard the claim as meaningless and move on. If any element of PUA doctrine falls into this category then for LW purposes we should simply flag it as unfalsifiable and move on.
The second class is claims that are hard to prove or disprove because there are multiple confounding factors, but which with proper controls and a sufficiently large sample size we could in theory confirm or disconfirm. If a moderate amount of cologne works better than none at all or a large amount of cologne, for example, then if we got enough men to approach enough women then eventually if there’s a real effect we should be able to get a data pool that shows statistical significance despite those confounding effects.
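A minimal sketch of what that could look like in practice, with every number invented for illustration (the 3% cologne effect, the spread of individual baselines, and the arm sizes are assumptions, not data):

```python
# Toy simulation: a small real effect (+3 percentage points on approach
# success) hidden under large per-individual confounders. With enough
# approaches per arm, a plain two-proportion z-test recovers it anyway.
import math
import random

random.seed(0)

def approach_succeeds(moderate_cologne: bool) -> bool:
    base = random.uniform(0.02, 0.30)   # looks, venue, mood, delivery...
    p = base + (0.03 if moderate_cologne else 0.0)
    return random.random() < p

def z_statistic(n_per_arm: int) -> float:
    treated = sum(approach_succeeds(True) for _ in range(n_per_arm))
    control = sum(approach_succeeds(False) for _ in range(n_per_arm))
    pooled = (treated + control) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    return (treated - control) / n_per_arm / se

for n in (50, 500, 5000):
    print(f"n per arm = {n:5d}, z = {z_statistic(n):+.2f}")
# Typical output: |z| well under 2 at n = 50 (the effect is lost in the
# noise), and comfortably above 2 by n = 5000 (statistically significant).
```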
The common problem both classes of claims have is that a rationalist is immediately going to ask someone who proposes such a claim “How do you think you know this?”. If a given claim is terribly difficult to confirm or disconfirm, and nobody has yet done the arduous legwork to check it, it’s very hard to see how a rational agent could think it is true or false. The same goes except more strongly for unfalsifiable claims.
For a PUA to argue that X is true, but that X is impossible to prove, is to open themselves up to the response “How do you know that, if it’s impossible to prove?”.
If there is no way even in theory that a proposition could be confirmed or falsified then that proposition is simply vacuous. There is nothing to say about it except that rational agents should discard the claim as meaningless and move on. If any element of PUA doctrine falls into this category then for LW purposes we should simply flag it as unfalsifiable and move on.
Sure… as long as you separate predictions from theory. When you reduce a PUA theory to what behaviors you expect someone believing that theory would produce, or what behaviors, if successful, would result in people believing such theories, you then have something suitable for testing, even if the theory is nonsensical on its face.
Lots of people believe in “The Secret” because it appears to produce results, despite the theory being utter garbage. But then, it turns out that some of what’s said is consistent with what actually makes people “luckier”… so there was a falsifiable prediction after all, buried under the nonsense.
If a group of people claim to produce results, then reduce their theory to more concrete predictions first, then test that. After all, if you discard alchemy because the theory is bunk, you miss the chance to discover chemistry.
Or, in more LW-ish speak: theories are not evidence, but even biased reports of actual experience are evidence of something. A Bayesian reductionist should be able to reduce even the craziest “woo” into some sort of useful probabilistic information… and there’s a substantial body of PUA material that’s considerably less “woo” than the average self-help book.
In the simplest form, this reduction could be just: person A claims that they were unsuccessful with women prior to adopting some set of PUA-trained behaviors. If the individual has numbers (even if somewhat imprecise) and there are a large number of people similar to person A, then this represents usable Bayesian evidence for that set of behaviors (or the training itself) being useful to persons with similar needs and desires as person A.
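As a toy numeric sketch of that update (the probabilities of hearing such reports under each hypothesis are invented assumptions, chosen only to illustrate the mechanics):

```python
# Odds-form Bayes: posterior odds = prior odds x likelihood ratio.
# The anecdotes shift belief exactly as much as they are more probable
# in a "training helps people like A" world than in a placebo world.
p_reports_if_helpful = 0.30   # P(many before/after reports | helps)
p_reports_if_placebo = 0.20   # P(same reports | placebo + selection)

prior_odds = 1 / 4            # prior odds that the training helps
lr = p_reports_if_helpful / p_reports_if_placebo
posterior_odds = prior_odds * lr
posterior = posterior_odds / (1 + posterior_odds)

print(f"likelihood ratio = {lr:.2f}")          # 1.50
print(f"P(helps | reports) = {posterior:.2f}") # ~0.27, up from 0.20
```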
This is perfectly usable evidence that doesn’t require us to address the theory or its falsifiability at all.
Now, it is not necessarily evidence for the validity of person A’s favorite PUA theory!
Rather, it is evidence that something person A did differently was helpful for person A… and it remains an open question to determine what actually caused the improvement. For example, could it simply be that receiving PUA training somehow changes people? That it motivates them to approach women repeatedly, resulting in more confidence and familiarity with approaching women? Or any number of other possible factors?
In other words, the actual theory put forth by the PUAs doing the teaching shouldn’t necessarily be at the top of the list of possibilities to investigate, even if the teaching clearly produces results...
And using theory-validity as a screening method for practical advice is pretty much useless, if you have “something to protect” (in LW speak). That is, if you need a method that works in an area where science is not yet settled, you cannot afford to discard practical advice on the basis of questionable theory: you will throw out way too much of the available information. (This applies to the self-help field as much as PUA.)
I’m perfectly happy to engage with PUA theories on that level, but the methodological obstacles to collecting good data are still the same. So the vital question is still the same, which is “How do these people think they know these things?”.
The only difference is that instead of addressing the question to the PUA who believes specific techniques A, B and C bring about certain outcomes, we address it to the meta-PUA who believes that although specific techniques A, B and C are placebos, belief in the efficacy of those techniques has measurable effects.
However, PUA devotees might not want to go down this argumentative path, because the likely outcome is admitting that much of the content on PUA sites is superstition, and that the outcomes of the combined arsenal of PUA tips and techniques cannot currently be distinguished from the outcomes of a change of clothes, a little personal grooming and asking a bunch of women to go out with you.
PUA devotees like to position themselves as gurus with secret knowledge. If it turns out that the entire edifice is indistinguishable from superstition then they would be repositioned as people with poor social skills and misogynist world-views who reinvented a very old wheel and then constructed non-evidence-based folk beliefs around it.
So depending on the thesis you are arguing for, it might be safer to argue that PUA techniques do have non-placebo effects.

For what it is worth, the majority are positioned as ‘acolytes’.
that the outcomes of the combined arsenal of PUA tips and techniques cannot currently be distinguished from the outcomes of a change of clothes, a little personal grooming and asking a bunch of women to go out with you.
Even if that were true (and I don’t think that’s anywhere near the case), you keep dropping out the critical meta-level for actual human beings to achieve instrumental results: i.e., motivation.
That is, even if “a change of clothes, a little grooming, and asking a bunch of women out” were actually the best possible approach, it’s kind of useless to just leave it at that, because quite a lot of actual human beings are incapable of motivating themselves to actually DO the necessary steps, using mere logical knowledge without an emotional component. (On LW, people generally use the term “akrasia” to describe this normal characteristic of human behavior as if it were some sort of strange and unexpected disease. ;-) )
To put it another way, the critical function of any kind of personal development training is to transmit a mental model to a human brain in a way such that the attached human will act in accordance with the model so transmitted.
After all, if this were not the case, then self-help books of any stripe could consist simply of short instruction sheets!
If it turns out that the entire edifice is indistinguishable from superstition … it might be safer to argue that PUA techniques do have non-placebo effects.
“Placebo” and “superstition” are not interchangeable concepts. A placebo is a real effect, a superstition is an imaginary one.
That is, if I think my baseball batting performance is improved when I wear a red scarf, and it is, that’s a placebo effect. (Belief creating a real result.) If I think that it’s improved, but it actually isn’t, then that’s a superstition.
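A toy way to make that distinction operational (invented numbers; the point is only that the two hypotheses predict different *measured* data even though the believer’s reports are identical):

```python
# Placebo vs. superstition, operationally: both worlds predict the batter
# *believes* the scarf helps; only the placebo world predicts the measured
# hit rate actually rises when the scarf is worn.
import random

random.seed(1)

def season(hit_rate: float, at_bats: int = 2000) -> float:
    """Observed hit rate over a season of at-bats."""
    return sum(random.random() < hit_rate for _ in range(at_bats)) / at_bats

# Placebo world: belief genuinely lifts performance.
print("placebo world:     ", season(0.30), "with scarf vs", season(0.25), "without")
# Superstition world: belief exists, performance is unchanged.
print("superstition world:", season(0.25), "with scarf vs", season(0.25), "without")
# The believer reports improvement in both worlds; the scorecard differs.
```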
This means that placebo effects are instrumentally more useful than superstitions… unless of course the superstition gets you to do something that itself has a beneficial effect.
To the extent that PUA uses placebo effects on the performer of a technique, the usefulness of the effect is in the resulting non-placebo response of the recipient of the technique.
Meanwhile, there are tons of specific pieces of PUA advice that are easily testable in miniature that needn’t rely on either sort of effect.
For example, if PUAs of the Mystery school predict that “a set will open more frequently if you body-rock away from the group before establishing a false time constraint”, that prediction should be easily testable to determine its truth or falsehood, given objectively reducible definitions of “set”, “open”, “body rock”, and “false time constraint”. (All of which terms the Mystery method does quite objectively reduce.)
So, you could teach a bunch of people to do these things, send them out and videotape ’em, and then get a bunch of grad students to grade the sets as to whether they opened and how quickly (without seeing the PUA’s behavior), and voila… testable prediction.
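A minimal sketch of how the graded tapes might then be scored, assuming hypothetical counts and blind opened/didn’t-open labels from the graders (the Wilson interval is just a standard way to put error bars on a proportion):

```python
# Compare "open" rates with and without the predicted routine, with
# 95% Wilson score intervals. All counts below are made up.
import math

def wilson_interval(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

graded = {
    "body-rock + false time constraint": (63, 100),  # (opened, approaches)
    "straight approach": (48, 100),
}

for label, (k, n) in graded.items():
    lo, hi = wilson_interval(k, n)
    print(f"{label}: {k}/{n} opened, 95% CI [{lo:.2f}, {hi:.2f}]")
# If the intervals separate cleanly as the sample grows, the prediction
# holds up; if they keep overlapping, the ritual is doing nothing.
```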
On the level of such specific, immediately-responded-to actions and events, ISTM that PUAs have strong motivation to eliminate non-working or negatively-reinforced behaviors from their repertoire, especially when in the process of inventing them.
Of course, a given PUA guru is unlikely to notice superstitious “extras” in his own method; in my observation it is students of those gurus, or new, competing gurus, who push back with, “I haven’t seen any need to body rock”, or “Opinion openers are unnecessary”, or “indirect game is pointless”, etc. So, even though individual schools don’t often move in the direction of discarding old techniques, the field as a whole seems to evolve towards simplification where possible.
Indeed, there is at least one PU guru who says that nearly all of Mystery method is pointless superstition in the sense that guys who jump through all its hoops are succeeding not because of what they’re doing in the process, so much as what they’re not doing.
That, in essence, women either find you attractive or they don’t, and all that your “game” needs to do is not blow the attraction by saying or doing something stupid. ;-) His specific advice seems to focus more on figuring out how to tell whether a particular woman is attracted to you, and how to move as quickly as possible from that to doing something about it.
Note: I don’t believe this guru is saying that Mystery’s advice about social skills is wrong, merely that the use of those skills can be completely superfluous to a goal of having sex with attractive women, vs. a goal of being friends with groups of people and hanging out with them before having sex with some of the women, or getting into social circles containing high-status women. And I think he’s largely correct in this stance, especially if your objective isn’t to have sex with the highest-status beautiful woman present (which is Mystery method’s raison d’être).
If your objective is to meet, say, the kinkiest girl with the dirtiest mind, or the sweetest, friendliest one, or the most adventurous one, or really almost any other criteria, Mystery’s elaborate refinements are superfluous, as they were developed to help him rapidly social-climb his way into his target’s circle of friends and disarm their ready defenses against guys coming to hit on her.
To put it another way: Mystery is using a narrow, red-line strategy specifically tuned to women who are the most attractive to a broad, blue-line spectrum of guys… because they were also his personal red line. If your red line is not those women, then Mystery method is not the tool you should use.
PUA style, in short, is very individual. Once you add back in the context of a given guru’s personality, physique, goals, and other personal characteristics, you find that it’s nowhere near as broad-spectrum/universal as the guru’s declarations appear. Once, I watched some videos online from a conference of PUA gurus who often had what sounded like contradictory advice… but which was intended for people with different personalities and different goals.
For example, one guy focused on making lots of female friends and going out with them a lot—he enjoys it, and then they play matchmaker for him. Another emphasized a lone-wolf strategy of “forced IOIs”, which is PUA code for acting in a way that forces a woman to very quickly indicate (nonverbally) whether she has any interest in him. Just looking at these two guys, you could tell that each had chosen a method that was a better match for their personality, and that neither would be happy using the other’s method, nor would they each be meeting the kind of women they wanted to meet!
So that’s why I keep saying that you’re ignoring the fact that PUA is not a single uniform thing, any more than, say, weight loss is. In theory, everybody can eat less and move more and this will make them lose weight. In practice, it ain’t nearly that simple: different people have different nutritional needs, for example, so the diet that’s healthy for one person can be very bad for another.
Thus, if you want, say, “honest, equal, supportive” PUA, then by all means, look for it. But don’t expect to find One True PUA Theory that will make all women do your bidding. It doesn’t exist. What exists in PUA is a vast assortment of vaguely related theories aimed at very individual goals and personality types.
(And, of more direct relevance to this particular sub-thread, far too many confounding factors to be of much use to group studies, unless you plan to run a lot of experiments.)
Speaking broadly, if the goal is Rational Romantic Relationships, then any advice which doesn’t have actual existing evidence to back it up is not advice rational people should be taking.
If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence, just as we dismiss the alt-med gurus who flog different forms of alternative medicine without evidence. Without evidence PUA is no more the elephant in the Rationalist Romantic Relationship room than ayurveda is an elephant in the medical science room.
As far as the superstition/placebo distinction you are making, I think you are simply wrong, linguistically speaking. Nothing stops a superstition from being a placebo, and in fact almost all of alternative medicine could legitimately be described as both placebo and superstition.
Superstitions arise because of faulty cause/effect reasoning and may indeed have a placebo effect, like the red scarf you mention. I suspect but cannot prove that some parts of PUA doctrine arise in exactly the same way that belief in a lucky scarf arises. Someone tries it, they get lucky that time, and so from then on they try it every time and believe it helps.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships. If they aren’t testable, then they’re unfalsifiable beliefs and rationalists should be committed to discarding unfalsifiable beliefs. PUA looks to me more like folklore than science, at this stage.
If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships
You’re using a very non-LW definition of “rational” here, since the principles of Something To Protect, and avoiding the Failures Of Eld Science would say that it’s your job to find something and test it, not to demand that people bring you only advice that’s already vetted.
If you wait for Richard Wiseman to turn “The Secret” into “Luck Theory”, and you actually needed the instrumental result, then you lost.
That is, you lost the utility you could have had by doing the testing yourself.
For medical outcomes, doing the testing yourself is a bad idea because the worst-case scenario isn’t that you don’t get your goal, it’s that you do damage to yourself or die.
But for testing PUA or anything in personal development, your personal testing costs are ridiculously low, and the worst case is just that you don’t get the goal you were after.
This means that if the goal is actually important, and whatever scientifically-validated information you have isn’t getting you the goal, then you don’t just sit on your ass and wait for someone to hand you the research on a platter.
Anything else isn’t rational, where rational is defined (as on LW) as “winning”.
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem. Bayes is not a magic wand, and GIGO still applies. Anecdotal evidence counts but you have to correctly estimate the probability that you would hear that anecdote in a world where PUA methods were just placebos sold to the sex-starved and nerdy, as opposed to the probability that you would hear that anecdote in a world where PUA methods have some objectively measurable effect. I think most of the time the correct estimate is that those probabilities are barely distinguishable at best.
A rationalist should have a clear distinction between Things That Are Probably True, and Things That Might Be True and Would Be Interesting To Try. The goal of the OP was to sum up the state of human knowledge with regard to Things That Are Probably True, which is the standard scholarly starting point in research.
It seemed to me that PUA techniques, lacking any objective evidence to back them up, should be filed under Things That Might Be True and Would Be Interesting To Try but that their devotees were claiming that they were the elephant in the OP’s room and that they had been unjustly excluded from the set of Things That Are Probably True.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them). They might work, and even if they are just placebos you might get lucky anyway. However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment, or to try to squeeze them into a list of Things That Are Probably True.

Also, better placebo than nothing at all.
I mean Bayesian reductionist evidence. Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem.
That a comment opening with this quote-reply pair is voted above zero troubles me. It is a direct contradiction of one of the most basic premises of this site.
I would have voted it down were it not for the rest of the paragraph cited, which basically comes down to “anecdotes are Bayesian evidence, but with caveats related to the base rate, and not always positive evidence”. Which is, as best I can tell, correct. In isolation, the opening sentence does seem to incorrectly imply that anecdotes don’t count at all, and so I’d have phrased it differently if I was trying to make the same point, but a false start isn’t enough for a downvote if the full post is well-argued and not obviously wrong.
In context, I interpreted pjeby to be saying that anecdotes counted as evidence which should lead a Bayesian rationalist to believe the truth of PUA claims. If that was not their intention, I got them totally wrong.
However if I interpreted them correctly they were indeed applying Bayes incorrectly, since we should expect a base rate of PUA-affirming anecdotes even if PUA techniques are placebos, and even in the total absence of any real effects whatsoever. It’s not evidence until the rate of observation exceeds the base rate of false claims we should expect to hear in the absence of a non-placebo effect, and if you don’t know what the base rate is you don’t have enough information to carry out a Bayesian update. You can’t update without P(B).
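A minimal numeric sketch of that point (all probabilities invented): if anecdotes are nearly as common in a placebo-only world as in a real-effect world, P(B) is dominated by the base rate and the posterior barely moves.

```python
# Bayes' theorem spelled out with an explicit P(B).
def posterior(prior: float, p_given_real: float, p_given_placebo: float) -> float:
    p_b = prior * p_given_real + (1 - prior) * p_given_placebo  # P(B)
    return prior * p_given_real / p_b

print(posterior(0.5, 0.90, 0.85))  # ~0.51: anecdotes expected either way
print(posterior(0.5, 0.90, 0.10))  # 0.90: anecdotes rare unless the effect is real
# Without an estimate of p_given_placebo (the base rate of success stories
# in a world of pure placebos), P(B) is unknown and no update is licensed.
```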
However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment
The truth of this statement depends heavily on how you unpack “believe”. Brains have more than one way of “believing” things, after all. A person can fail to “believe” in ghosts, and yet feel scared in a “haunted” house. Or, more relevant to the current thread, a person can “believe” they are attractive and worthy and have every right to go up to someone and say “hi”, yet still be unable to do it.
IOW, epistemic and instrumental beliefs are compartmentalized in humans by default… which makes a mockery of the idea that manipulating your instrumental beliefs will somehow stain your epistemic purity.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them).
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course, I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al; whose research and meta-research of a dozen different types of change goals is summarized in the book “Changing For Good”.)
[Originally, I was going to include a bunch of information about my work with personal development clients that reflects the pattern described in the above-mentioned research, but since you appear to prefer research to experience, I’ve decided to skip it.]
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course, I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al; whose research and meta-research of a dozen different types of change goals is summarized in the book “Changing For Good”.)
I place a high value on not financially encouraging bad behaviour, and selling non-evidence-based interventions to people who may be desperate, irrational or ill-informed but who don’t deserve to be robbed counts as bad behaviour to me.
There’s a loss of utility beyond the mere loss of cash to myself if I give cash to a scammer, because it feeds the scammer and potentially encourages other scammers to join the market. This is the flip side of the coin that there is a gain in utility when I give cash to a worthwhile charity.
People willing to spend money on attracting a mate have a wide variety of options as to how they spend it, after all. If they are willing to actually change it’s not as if the only way to demonstrate this is to spend money on PUA training rather than clothes, transportation, food, drink, taxi fares and so on.
As I mentioned in the other sub-thread, it’s really tiring to have you continually reframing what I say to make attackable arguments out of it. Unless your sole interest in LessWrong is to score rhetorical points (i.e., trolling), it’s a rather bad idea to keep doing that to people.
Note that the text you quoted from my comment has nothing to do with PUA. It is a portion of my evidence that your professed approach to personal development (i.e., trying things only if they cost nothing) is Not Winning.
On LessWrong, rationality equals winning, not pretending to avoid losing. (Or more bluntly: attempting to signal your intelligence and status by avoiding the low-status work of actually trying things and possibly being mistaken.)
It is better to do something wrong—even repeatedly—and eventually succeed, than to sit on your ass and do nothing. Otherwise, you are less instrumentally rational than any random person who tries things at random until something works.
Meanwhile, any time that you do not spend winning, is time spent losing, no matter how you spin it as some sort of intellectual superiority.
So, on that note, I will now return to activities with a better ROI than continuing this discussion. ;-)
(And sometimes hearing them [anecdotes] counts as evidence against the phenomenon!)
Indeed. Careful reading is required when investigating any kind of “folklore”, as occasionally self-help authors provide anecdotes that (in the details) provide a very different picture of what is happening than what the author is saying is the point or moral of that anecdote.
Hi… I haven’t read this whole thread, but I know one very important thing that immediately discredited PhilosophyTutor in my view.
I strongly feel that the best PUAs are not at all about merely extracting something from the woman they interact with. They claim they live by the motto “leave her better than you found her”. From my impression of Casanova, the ultimate PUA, he lived by that too.
You’re absolutely right about the methodological issues. I’ve thought about them myself; not to mention the enormous survivorship bias, of course.
But it is far more irrational to discount their findings on that ground alone, because the alternative, academic studies, are blinded by exactly the same ignore-the-elephant and keep-things-proper attitude that the original poster of this thread pointed out.
Take this into account: a lot of good PUAs may fall far short of the ideal amount of rigor, but at the same time, far exceed the average person’s rigor. I can’t condemn those who, without the perspective gained from this site, nevertheless seek to quantify things and really understand them.
Hi… I haven’t read this whole thread, but I know one very important thing that immediately discredited PhilosophyTutor in my view. I strongly feel that the best PUAs are not at all about merely extracting something from the woman they interact with. They claim they live by the motto “leave her better than you found her”. From my impression of Casanova, the ultimate PUA, he lived by that too.
How do they know whether they fulfill this motto well?
Take this into account: a lot of good PUAs may fall far short of the ideal amount of rigor, but at the same time, far exceed the average person’s rigor.
Whether someone does better than average is irrelevant to whether they do well enough. It’s possible, indeed very easy, to put more effort into rigor than the average person, and still fail to produce any valid Bayesian evidence.