If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence.
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do, their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships.
You’re using a very non-LW definition of “rational” here, since the principles of Something To Protect, and avoiding the Failures Of Eld Science would say that it’s your job to find something and test it, not to demand that people bring you only advice that’s already vetted.
If you wait for Richard Wiseman to turn “The Secret” into “Luck Theory”, and you actually needed the instrumental result, then you lost.
That is, you lost the utility you could have had by doing the testing yourself.
For medical outcomes, doing the testing yourself is a bad idea because the worst-case scenario isn’t that you don’t get your goal; it’s that you do damage to yourself or die.
But for testing PUA or anything in personal development, your personal testing costs are ridiculously low, and the worst case is just that you don’t get the goal you were after.
This means that if the goal is actually important, and whatever scientifically-validated information you have isn’t getting you the goal, then you don’t just sit on your ass and wait for someone to hand you the research on a platter.
Anything else isn’t rational, where rational is defined (as on LW) as “winning”.
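(For concreteness, here is a toy expected-utility sketch of that argument. Every number in it is an illustrative assumption, not a measurement; it only shows the structure of the trade-off.)

```python
# Toy decision model: try a cheap self-help technique now vs. wait
# for published research. All numbers are illustrative assumptions.

P_WORKS = 0.2          # assumed prior that the technique has a real effect
GOAL_VALUE = 100.0     # assumed utility of achieving the goal
TEST_COST = 1.0        # assumed cost of testing it yourself (time, effort)
YEARS_TO_RESEARCH = 5  # assumed wait before a study settles the question
DISCOUNT = 0.8         # assumed per-year discount on delayed utility

ev_test_now = P_WORKS * GOAL_VALUE - TEST_COST
ev_wait = P_WORKS * GOAL_VALUE * DISCOUNT ** YEARS_TO_RESEARCH

print(f"EV(test it yourself now): {ev_test_now:.1f}")  # 19.0
print(f"EV(wait for research):    {ev_wait:.1f}")      # 6.6
```

Under these assumptions the delay term dominates, which is the commenter’s point; the medical caveat above amounts to adding a large negative worst-case term to the “test it yourself” branch, which can flip the conclusion.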
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem. Bayes is not a magic wand, and GIGO still applies. Anecdotal evidence counts but you have to correctly estimate the probability that you would hear that anecdote in a world where PUA methods were just placebos sold to the sex-starved and nerdy, as opposed to the probability that you would hear that anecdote in a world where PUA methods have some objectively measurable effect. I think most of the time the correct estimate is that those probabilities are barely distinguishable at best.
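(A minimal sketch of that point, with assumed probabilities: when the placebo world and the real-effect world predict the anecdote almost equally well, the likelihood ratio is close to 1 and the posterior barely moves.)

```python
def posterior(prior, p_anecdote_if_real, p_anecdote_if_placebo):
    """P(real effect | anecdote) via Bayes' theorem."""
    p_anecdote = (p_anecdote_if_real * prior
                  + p_anecdote_if_placebo * (1 - prior))
    return p_anecdote_if_real * prior / p_anecdote

# Assumed numbers: glowing anecdotes are nearly as likely in a placebo
# world (selection effects, satisfied customers) as in a real-effect world.
print(posterior(0.5, p_anecdote_if_real=0.90, p_anecdote_if_placebo=0.85))
# -> ~0.514: almost no movement from the 50/50 prior
```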
A rationalist should have a clear distinction between Things That Are Probably True, and Things That Might Be True and Would Be Interesting To Try. The goal of the OP was to sum up the state of human knowledge with regard to Things That Are Probably True, which is the standard scholarly starting point in research.
It seemed to me that PUA techniques, lacking any objective evidence to back them up, should be filed under Things That Might Be True and Would Be Interesting To Try, but that their devotees were claiming they were the elephant in the OP’s room and had been unjustly excluded from the set of Things That Are Probably True.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them). They might work, and even if they are just placebos you might get lucky anyway. However, it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment, or to try to squeeze them into a list of Things That Are Probably True.
I mean Bayesian reductionist evidence. Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem.
That a comment opening with this quote-reply pair is voted above zero troubles me. It is a direct contradiction of one of the most basic premises of this site.
I would have voted it down were it not for the rest of the paragraph cited, which basically comes down to “anecdotes are Bayesian evidence, but with caveats related to the base rate, and not always positive evidence”. Which is, as best I can tell, correct. In isolation, the opening sentence does seem to incorrectly imply that anecdotes don’t count at all, and so I’d have phrased it differently if I was trying to make the same point, but a false start isn’t enough for a downvote if the full post is well-argued and not obviously wrong.
In context, I interpreted pjeby to be saying that anecdotes counted as evidence which should lead a Bayesian rationalist to believe the truth of PUA claims. If that was not their intention, I got them totally wrong.
However if I interpreted them correctly they were indeed applying Bayes incorrectly, since we should expect a base rate of PUA-affirming anecdotes even if PUA techniques are placebos, and even in the total absence of any real effects whatsoever. It’s not evidence until the rate of observation exceeds the base rate of false claims we should expect to hear in the absence of a non-placebo effect, and if you don’t know what the base rate is you don’t have enough information to carry out a Bayesian update. You can’t update without P(B).
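(In symbols, with H = “PUA techniques have a real, non-placebo effect” and B = “a PUA-affirming anecdote is reported”, the update being described is:)

\[
P(H \mid B) = \frac{P(B \mid H)\,P(H)}{P(B \mid H)\,P(H) + P(B \mid \lnot H)\,P(\lnot H)}
\]

The denominator is the P(B) in question, and it requires P(B | ¬H), the base rate of affirming anecdotes in a placebo world; without an estimate of that term the posterior cannot be computed.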
However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment
The truth of this statement depends heavily on how you unpack “believe”. Brains have more than one way of “believing” things, after all. A person can disbelieve in ghosts, yet feel scared in a “haunted” house. Or, more relevant to the current thread, a person can “believe” they are attractive and worthy and have every right to go up to someone and say “hi”, yet still be unable to do it.
IOW, epistemic and instrumental beliefs are compartmentalized in humans by default… which makes a mockery of the idea that manipulating your instrumental beliefs will somehow stain your epistemic purity.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them).
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course; I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al., whose research and meta-research on a dozen different types of change goals is summarized in the book “Changing For Good”.)
[Originally, I was going to include a bunch of information about my work with personal development clients that reflects the pattern described in the above-mentioned research, but since you appear to prefer research to experience, I’ve decided to skip it.]
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course; I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al., whose research and meta-research on a dozen different types of change goals is summarized in the book “Changing For Good”.)
I place a high value on not financially encouraging bad behaviour, and selling non-evidence-based interventions to people who may be desperate, irrational or ill-informed but who don’t deserve to be robbed counts as bad behaviour to me.
There’s a loss of utility beyond the mere loss of cash to myself if I give cash to a scammer, because it feeds the scammer and potentially encourages other scammers to enter the market. This is the flip side of the gain in utility when I give cash to a worthwhile charity.
People willing to spend money on attracting a mate have a wide variety of options as to how they spend it, after all. If they are willing to actually change it’s not as if the only way to demonstrate this is to spend money on PUA training rather than clothes, transportation, food, drink, taxi fares and so on.
As I mentioned in the other sub-thread, it’s really tiring to have you continually reframing what I say to make attackable arguments out of it. Unless your sole interest in LessWrong is to score rhetorical points (i.e., trolling), it’s a rather bad idea to keep doing that to people.
Note that the text you quoted from my comment has nothing to do with PUA. It is a portion of my evidence that your professed approach to personal development (i.e., trying things only if they cost nothing) is Not Winning.
On LessWrong, rationality equals winning, not pretending to avoid losing. (Or more bluntly: attempting to signal your intelligence and status by avoiding the low-status work of actually trying things and possibly being mistaken.)
It is better to do something wrong—even repeatedly—and eventually succeed, than to sit on your ass and do nothing. Otherwise, you are less instrumentally rational than any random person who tries things at random until something works.
Meanwhile, any time that you do not spend winning is time spent losing, no matter how you spin it as some sort of intellectual superiority.
So, on that note, I will now return to activities with a better ROI than continuing this discussion. ;-)
(And sometimes hearing them [anecdotes] counts as evidence against the phenomenon!)
Indeed. Careful reading is required when investigating any kind of “folklore”, as self-help authors occasionally supply anecdotes whose details paint a very different picture of what is happening than the point or moral the author draws from them.
They might work, and even if they are just placebos you might get lucky anyway.
Also, better placebo than nothing at all.