that the outcomes of the combined arsenal of PUA tips and techniques cannot currently be distinguished from the outcomes of a change of clothes, a little personal grooming and asking a bunch of women to go out with you.
Even if that were true (and I don’t think that’s anywhere near the case), you keep dropping out the critical meta-level for actual human beings to achieve instrumental results: i.e., motivation.
That is, even if “a change of clothes, a little grooming, and asking a bunch of women out” were actually the best possible approach, it’s kind of useless to just leave it at that, because quite a lot of actual human beings are incapable of motivating themselves to actually DO the necessary steps, using mere logical knowledge without an emotional component. (On LW, people generally use the term “akrasia” to describe this normal characteristic of human behavior as if it were some sort of strange and unexpected disease. ;-) )
To put it another way, the critical function of any kind of personal development training is to transmit a mental model to a human brain in a way such that the attached human will act in accordance with the model so transmitted.
After all, if this were not the case, then self-help books of any stripe could consist simply of short instruction sheets!
If it turns out that the entire edifice is indistinguishable from superstition… it might be safer to argue that PUA techniques do have non-placebo effects.
“Placebo” and “superstition” are not interchangeable concepts. A placebo is a real effect, a superstition is an imaginary one.
That is, if I think my baseball batting performance is improved when I wear a red scarf, and it is, that’s a placebo effect. (Belief creating a real result.) If I think that it’s improved, but it actually isn’t, then that’s a superstition.
This means that placebo effects are instrumentally more useful than superstitions… unless of course the superstition gets you to do something that itself has a beneficial effect.
To the extent that PUA uses placebo effects on the performer of a technique, the usefulness of the effect is in the resulting non-placebo response of the recipient of the technique.
Meanwhile, there are tons of specific pieces of PUA advice that are easily testable in miniature that needn’t rely on either sort of effect.
For example, if PUAs of the Mystery school predict that “a set will open more frequently if you body-rock away from the group before establishing a false time constraint”, that prediction should be easily testable to determine its truth or falsehood, given objectively reducible definitions of “set”, “open”, “body rock”, and “false time constraint”. (All of which terms the Mystery method does quite objectively reduce.)
So, you could teach a bunch of people to do these things, send them out and videotape ’em, and then get a bunch of grad students to grade the sets as to whether they opened and how quickly (without seeing the PUA’s behavior), and voila… testable prediction.
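The proposed experiment reduces to comparing the rate of "opens" between performers who use the technique and performers who don't. A minimal sketch of the analysis, using a standard two-proportion z-test on entirely made-up counts (the numbers and the 60/45 split are illustrative assumptions, not real data):

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in proportions.

    Here a 'success' is a set that opened; group A used the
    technique under test (e.g. body-rocking plus a false time
    constraint), group B did not.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical graded results: 60/100 sets opened with the
# technique vs. 45/100 without.
z, p = two_proportion_ztest(60, 100, 45, 100)
# z ≈ 2.12, p ≈ 0.034: at these (invented) counts the difference
# would be significant at the conventional 0.05 level.
```

The point is only that, once "set", "open", and the behaviors are operationalized, the prediction is an ordinary frequency comparison that a hundred or so graded trials per condition could settle.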
On the level of such specific, immediately-responded-to actions and events, ISTM that PUAs have strong motivation to eliminate non-working or negatively-reinforced behaviors from their repertoire, especially when in the process of inventing them.
Of course, a given PUA guru is unlikely to notice that superstitious “extras” can be removed; in my observation, it is students of those gurus, or new, competing gurus, who push back with, “I haven’t seen any need to body rock”, or “Opinion openers are unnecessary”, or “Indirect game is pointless”, etc. So, even though individual schools don’t often move in the direction of discarding old techniques, the field as a whole seems to evolve towards simplification where possible.
Indeed, there is at least one PU guru who says that nearly all of Mystery method is pointless superstition in the sense that guys who jump through all its hoops are succeeding not because of what they’re doing in the process, so much as what they’re not doing.
That, in essence, women either find you attractive or they don’t, and all that your “game” needs to do is not blow the attraction by saying or doing something stupid. ;-) His specific advice seems to focus more on figuring out how to tell whether a particular woman is attracted to you, and how to move as quickly as possible from that to doing something about it.
Note: I don’t believe this guru is saying that Mystery’s advice about social skills is wrong, merely that the use of those skills can be completely superfluous to a goal of having sex with attractive women, vs. a goal of being friends with groups of people and hanging out with them before having sex with some of the women, or getting into social circles containing high-status women. And I think he’s largely correct in this stance, especially if your objective isn’t to have sex with the highest-status beautiful woman present (which is Mystery method’s raison d’etre).
If your objective is to meet, say, the kinkiest girl with the dirtiest mind, or the sweetest, friendliest one, or the most adventurous one, or really almost any other criteria, Mystery’s elaborate refinements are superfluous, as they were developed to help him rapidly social-climb his way into his target’s circle of friends and disarm their ready defenses against guys coming to hit on her.
To put it another way: Mystery is using a narrow, red-line strategy specifically tuned to women who are the most attractive to a broad, blue-line spectrum of guys… because they were also his personal red line. If your red line is not those women, then Mystery method is not the tool you should use.
PUA style, in short, is very individual. Once you add back in the context of a given guru’s personality, physique, goals, and other personal characteristics, you find that it’s nowhere near as broad-spectrum/universal as the guru’s declarations appear. Once, I watched some videos online from a conference of PUA gurus who often had what sounded like contradictory advice… but which was intended for people with different personalities and different goals.
For example, one guy focused on making lots of female friends and going out with them a lot—he enjoys it, and then they play matchmaker for him. Another emphasized a lone-wolf strategy of “forced IOIs”, which is PUA code for acting in a way that forces a woman to very quickly indicate (nonverbally) whether she has any interest in him. Just looking at these two guys, you could tell that each had chosen a method that was a better match for their personality, and that neither would be happy using the other’s method, nor would they each be meeting the kind of women they wanted to meet!
So that’s why I keep saying that you’re ignoring the fact that PUA is not a single uniform thing, any more than, say, weight loss is. In theory, everybody can eat less and move more and this will make them lose weight. In practice, it ain’t nearly that simple: different people have different nutritional needs, for example, so the diet that’s healthy for one person can be very bad for another.
Thus, if you want, say, “honest, equal, supportive” PUA, then by all means, look for it. But don’t expect to find One True PUA Theory that will make all women do your bidding. It doesn’t exist. What exists in PUA is a vast assortment of vaguely related theories aimed at very individual goals and personality types.
(And, of more direct relevance to this particular sub-thread, far too many confounding factors to be of much use to group studies, unless you plan to run a lot of experiments.)
Speaking broadly, if the goal is Rational Romantic Relationships, then any advice which doesn’t have actual existing evidence to back it up is not advice rational people should be taking.
If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence, just as we dismiss the alt-med gurus who flog different forms of alternative medicine without evidence. Without evidence PUA is no more the elephant in the Rationalist Romantic Relationship room than ayurveda is an elephant in the medical science room.
As far as the superstition/placebo distinction you are making goes, I think you are simply wrong, linguistically speaking. Nothing stops a superstition from being a placebo, and in fact almost all of alternative medicine could legitimately be described as both placebo and superstition.
Superstitions arise because of faulty cause/effect reasoning and may indeed have a placebo effect, like the red scarf you mention. I suspect but cannot prove that some parts of PUA doctrine arise in exactly the same way that belief in a lucky scarf arises. Someone tries it, they get lucky that time, and so from then on they try it every time and believe it helps.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships. If they aren’t testable, then they’re unfalsifiable beliefs and rationalists should be committed to discarding unfalsifiable beliefs. PUA looks to me more like folklore than science, at this stage.
If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships
You’re using a very non-LW definition of “rational” here, since the principles of Something To Protect, and avoiding the Failures Of Eld Science would say that it’s your job to find something and test it, not to demand that people bring you only advice that’s already vetted.
If you wait for Richard Wiseman to turn “The Secret” into “Luck Theory”, and you actually needed the instrumental result, then you lost.
That is, you lost the utility you could have had by doing the testing yourself.
For medical outcomes, doing the testing yourself is a bad idea because the worst-case scenario isn’t that you don’t get your goal, it’s that you do damage to yourself or die.
But for testing PUA or anything in personal development, your personal testing costs are ridiculously low, and the worst case is just that you don’t get the goal you were after.
This means that if the goal is actually important, and whatever scientifically-validated information you have isn’t getting you the goal, then you don’t just sit on your ass and wait for someone to hand you the research on a platter.
Anything else isn’t rational, where rational is defined (as on LW) as “winning”.
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem. Bayes is not a magic wand, and GIGO still applies. Anecdotal evidence counts but you have to correctly estimate the probability that you would hear that anecdote in a world where PUA methods were just placebos sold to the sex-starved and nerdy, as opposed to the probability that you would hear that anecdote in a world where PUA methods have some objectively measurable effect. I think most of the time the correct estimate is that those probabilities are barely distinguishable at best.
A rationalist should have a clear distinction between Things That Are Probably True, and Things That Might Be True and Would Be Interesting To Try. The goal of the OP was to sum up the state of human knowledge with regard to Things That Are Probably True, which is the standard scholarly starting point in research.
It seemed to me that PUA techniques, lacking any objective evidence to back them up, should be filed under Things That Might Be True and Would Be Interesting To Try but that their devotees were claiming that they were the elephant in the OP’s room and that they had been unjustly excluded from the set of Things That Are Probably True.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them). They might work, and even if they are just placebos you might get lucky anyway. However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment, or to try to squeeze them into a list of Things That Are Probably True.
I mean Bayesian reductionist evidence. Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem.
That a comment opening with this quote-reply pair is voted above zero troubles me. It is a direct contradiction of one of the most basic premises of this site.
I would have voted it down were it not for the rest of the paragraph cited, which basically comes down to “anecdotes are Bayesian evidence, but with caveats related to the base rate, and not always positive evidence”. Which is, as best I can tell, correct. In isolation, the opening sentence does seem to incorrectly imply that anecdotes don’t count at all, and so I’d have phrased it differently if I were trying to make the same point, but a false start isn’t enough for a downvote if the full post is well-argued and not obviously wrong.
In context, I interpreted pjeby to be saying that anecdotes counted as evidence which should lead a Bayesian rationalist to believe the truth of PUA claims. If that was not their intention I got them totally wrong.
However if I interpreted them correctly they were indeed applying Bayes incorrectly, since we should expect a base rate of PUA-affirming anecdotes even if PUA techniques are placebos, and even in the total absence of any real effects whatsoever. It’s not evidence until the rate of observation exceeds the base rate of false claims we should expect to hear in the absence of a non-placebo effect, and if you don’t know what the base rate is you don’t have enough information to carry out a Bayesian update. You can’t update without P(B).
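The base-rate point can be made concrete with a toy Bayesian update (all numbers invented for illustration): the denominator P(B) requires the likelihood of hearing the anecdote under *both* hypotheses, and when those likelihoods are nearly equal the posterior barely moves.

```python
def bayes_update(prior, p_anecdote_given_effect, p_anecdote_given_placebo):
    """Posterior P(real effect | anecdote) via Bayes' theorem.

    P(B), the total probability of hearing the anecdote, is the
    denominator -- it cannot be computed without the likelihood
    under the placebo hypothesis as well.
    """
    p_b = (p_anecdote_given_effect * prior
           + p_anecdote_given_placebo * (1 - prior))
    return p_anecdote_given_effect * prior / p_b

# Affirming anecdotes almost as likely in the placebo world: tiny update.
weak = bayes_update(0.5, 0.80, 0.75)    # ≈ 0.516
# Anecdotes rare unless the effect is real: substantial update.
strong = bayes_update(0.5, 0.80, 0.10)  # ≈ 0.889
```

Under the first (arguably more realistic) assumption, a 50% prior shifts to only about 52%; the anecdote is evidence, but nearly worthless evidence.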
However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment
The truth of this statement depends heavily on how you unpack “believe”. Brains have more than one way of “believing” things, after all. A person can fail to “believe” in ghosts, and yet feel scared in a “haunted” house. Or more relevant to the current thread, a person can “believe” they are attractive and worthy and have every right to go up to someone and say “hi”, yet still be unable to do it.
IOW, epistemic and instrumental beliefs are compartmentalized in humans by default… which makes a mockery of the idea that manipulating your instrumental beliefs will somehow stain your epistemic purity.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them).
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course; I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with their taking action to change. (See Prochaska, Norcross, et al., whose research and meta-research on a dozen different types of change goals is summarized in the book “Changing For Good”.)
[Originally, I was going to include a bunch of information about my work with personal development clients that reflects the pattern described in the above-mentioned research, but since you appear to prefer research to experience, I’ve decided to skip it.]
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course; I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with their taking action to change. (See Prochaska, Norcross, et al., whose research and meta-research on a dozen different types of change goals is summarized in the book “Changing For Good”.)
I place a high value on not financially encouraging bad behaviour, and selling non-evidence-based interventions to people who may be desperate, irrational or ill-informed but who don’t deserve to be robbed counts as bad behaviour to me.
There’s a loss of utility beyond the mere loss of cash to myself if I give cash to a scammer, because it feeds the scammer and potentially encourages other scammers to join the market. This is the flip side of the coin that there is a gain in utility when I give cash to a worthwhile charity.
People willing to spend money on attracting a mate have a wide variety of options as to how they spend it, after all. If they are willing to actually change it’s not as if the only way to demonstrate this is to spend money on PUA training rather than clothes, transportation, food, drink, taxi fares and so on.
As I mentioned in the other sub-thread, it’s really tiring to have you continually reframing what I say to make attackable arguments out of it. Unless your sole interest in LessWrong is to score rhetorical points (i.e., trolling), it’s a rather bad idea to keep doing that to people.
Note that the text you quoted from my comment has nothing to do with PUA. It is a portion of my evidence that your professed approach to personal development (i.e., trying things only if they cost nothing) is Not Winning.
On LessWrong, rationality equals winning, not pretending to avoid losing. (Or more bluntly: attempting to signal your intelligence and status by avoiding the low-status work of actually trying things and possibly being mistaken.)
It is better to do something wrong—even repeatedly—and eventually succeed, than to sit on your ass and do nothing. Otherwise, you are less instrumentally rational than any random person who tries things at random until something works.
Meanwhile, any time that you do not spend winning, is time spent losing, no matter how you spin it as some sort of intellectual superiority.
So, on that note, I will now return to activities with a better ROI than continuing this discussion. ;-)
(And sometimes hearing them [anecdotes] counts as evidence against the phenomenon!)
Indeed. Careful reading is required when investigating any kind of “folklore”, as occasionally self-help authors provide anecdotes that (in the details) provide a very different picture of what is happening than what the author is saying is the point or moral of that anecdote.
Also, better placebo than nothing at all.
That a comment opening with this quote-reply pair is voted above zero troubles me. It is a direct contradiction of one of the most basic premises of this site.
I would have voted it down were it not for the rest of the paragraph cited, which basically comes down to “anecdotes are Bayesian evidence, but with caveats related to the base rate, and not always positive evidence”. Which is, as best I can tell, correct. In isolation, the opening sentence does seem to incorrectly imply that anecdotes don’t count at all, and so I’d have phrased it differently if I was trying to make the same point, but a false start isn’t enough for a downvote if the full post is well-argued and not obviously wrong.
In context, I interpreted pjeby to be saying that anecdotes counted as evidence which should lead a Bayesian rationalist to believe the truth of PUA claims. If that was not their intention I got them totally wrong.
However if I interpreted them correctly they were indeed applying Bayes incorrectly, since we should expect a base rate of PUA-affirming anecdotes even if PUA techniques are placebos, and even in the total absence of any real effects whatsoever. It’s not evidence until the rate of observation exceeds the base rate of false claims we should expect to hear in the absence of a non-placebo effect, and if you don’t know what the base rate is you don’t have enough information to carry out a Bayesian update. You can’t update without P(B).
The truth of this statement depends heavily on how you unpack “believe”. Brains have more than one way of “believing” things, after all. A person can not “believe” in ghosts, and yet feel scared in a “haunted” house. Or more relevant to the current thread, a person can “believe” they are attractive and worthy and have every right to go up to someone and say “hi”, yet still be unable to do it.
IOW, epistemic and instrumental beliefs are compartmentalized in humans by default… which makes a mockery of the idea that manipulating your instrumental beliefs will somehow stain your epistemic purity.
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course; I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al., whose research and meta-research on a dozen different types of change goals is summarized in the book “Changing For Good”.)
[Originally, I was going to include a bunch of information about my work with personal development clients that reflects the pattern described in the above-mentioned research, but since you appear to prefer research to experience, I’ve decided to skip it.]
I place a high value on not financially encouraging bad behaviour, and selling non-evidence-based interventions to people who may be desperate, irrational or ill-informed but who don’t deserve to be robbed counts as bad behaviour to me.
There’s a loss of utility beyond the mere loss of cash to myself if I give cash to a scammer, because it feeds the scammer and potentially encourages other scammers to join the market. This is the flip side of the coin that there is a gain in utility when I give cash to a worthwhile charity.
People willing to spend money on attracting a mate have a wide variety of options as to how they spend it, after all. If they are willing to actually change it’s not as if the only way to demonstrate this is to spend money on PUA training rather than clothes, transportation, food, drink, taxi fares and so on.
As I mentioned in the other sub-thread, it’s really tiring to have you continually reframing what I say to make attackable arguments out of it. Unless your sole interest in LessWrong is to score rhetorical points (i.e., trolling), it’s a rather bad idea to keep doing that to people.
Note that the text you quoted from my comment has nothing to do with PUA. It is a portion of my evidence that your professed approach to personal development (i.e., trying things only if they cost nothing) is Not Winning.
On LessWrong, rationality equals winning, not pretending to avoid losing. (Or more bluntly: attempting to signal your intelligence and status by avoiding the low-status work of actually trying things and possibly being mistaken.)
It is better to do something wrong—even repeatedly—and eventually succeed, than to sit on your ass and do nothing. Otherwise, you are less instrumentally rational than any random person who tries things at random until something works.
Meanwhile, any time that you do not spend winning is time spent losing, no matter how you spin it as some sort of intellectual superiority.
So, on that note, I will now return to activities with a better ROI than continuing this discussion. ;-)
(And sometimes hearing them counts as evidence against the phenomenon!)
Indeed. Careful reading is required when investigating any kind of “folklore”, as self-help authors occasionally provide anecdotes whose details paint a very different picture of what is happening than the point or moral the author draws from them.