I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem. Bayes is not a magic wand, and GIGO still applies. Anecdotal evidence counts, but you have to correctly estimate the probability that you would hear that anecdote in a world where PUA methods were just placebos sold to the sex-starved and nerdy, as opposed to the probability that you would hear that anecdote in a world where PUA methods have some objectively measurable effect. I think most of the time the correct estimate is that those probabilities are barely distinguishable at best.
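A minimal sketch of that arithmetic in odds form (the function name and every number below are purely illustrative assumptions, not estimates drawn from any actual data):

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# The likelihood ratio compares P(hear this anecdote | PUA has a real effect)
# with P(hear this anecdote | PUA is just a placebo).

def posterior_probability(prior, p_anecdote_if_real, p_anecdote_if_placebo):
    """Update a prior on 'PUA methods have a real effect' after one anecdote."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_anecdote_if_real / p_anecdote_if_placebo
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# If success anecdotes are nearly as likely in the placebo world (0.30) as in
# the real-effect world (0.33), one anecdote barely moves a prior of 0.20:
print(posterior_probability(0.20, 0.33, 0.30))  # ~0.216
```

The closer the two likelihoods are, the closer the likelihood ratio is to 1 and the smaller the update, which is the “barely distinguishable” point above.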
A rationalist should have a clear distinction between Things That Are Probably True, and Things That Might Be True and Would Be Interesting To Try. The goal of the OP was to sum up the state of human knowledge with regard to Things That Are Probably True, which is the standard scholarly starting point in research.
It seemed to me that PUA techniques, lacking any objective evidence to back them up, should be filed under Things That Might Be True and Would Be Interesting To Try, but that their devotees were claiming that they were the elephant in the OP’s room and that they had been unjustly excluded from the set of Things That Are Probably True.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them). They might work, and even if they are just placebos you might get lucky anyway. However, it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment, or to try to squeeze them into a list of Things That Are Probably True.

Also, better placebo than nothing at all.
I mean Bayesian reductionist evidence. Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem.
That a comment opening with this quote-reply pair is voted above zero troubles me. It is a direct contradiction of one of the most basic premises of this site.
I would have voted it down were it not for the rest of the paragraph cited, which basically comes down to “anecdotes are Bayesian evidence, but with caveats related to the base rate, and not always positive evidence”. Which is, as best I can tell, correct. In isolation, the opening sentence does seem to incorrectly imply that anecdotes don’t count at all, and so I’d have phrased it differently if I was trying to make the same point, but a false start isn’t enough for a downvote if the full post is well-argued and not obviously wrong.
In context, I interpreted pjeby to be saying that anecdotes counted as evidence which should lead a Bayesian rationalist to believe the truth of PUA claims. If that was not their intention, I got them totally wrong.

However, if I interpreted them correctly, they were indeed applying Bayes incorrectly, since we should expect a base rate of PUA-affirming anecdotes even if PUA techniques are placebos, and even in the total absence of any real effects whatsoever. It’s not evidence until the rate of observation exceeds the base rate of false claims we should expect to hear in the absence of a non-placebo effect, and if you don’t know what the base rate is, you don’t have enough information to carry out a Bayesian update. You can’t update without P(B).
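To make the P(B) point concrete with invented numbers (nothing below is an estimate of the actual rate of PUA anecdotes): the marginal probability of hearing the anecdote has to be assembled from both hypotheses, and the size and even the direction of the update hinge on the assumed base rate of placebo-world anecdotes.

```python
# P(B) = P(anecdote | real effect) * P(real effect)
#      + P(anecdote | placebo) * (1 - P(real effect))
# Without an estimate of P(anecdote | placebo), P(B) and hence the posterior
# are simply undefined.

def update_on_anecdote(prior, p_b_given_effect, p_b_given_placebo):
    p_b = p_b_given_effect * prior + p_b_given_placebo * (1 - prior)
    return p_b_given_effect * prior / p_b

prior = 0.20
# Same observation, three different assumed base rates of placebo-world anecdotes:
print(update_on_anecdote(prior, 0.30, 0.10))  # ~0.43 -- evidence for a real effect
print(update_on_anecdote(prior, 0.30, 0.30))  # 0.20  -- no update at all
print(update_on_anecdote(prior, 0.30, 0.60))  # ~0.11 -- evidence against
```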
However, it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment
The truth of this statement depends heavily on how you unpack “believe”. Brains have more than one way of “believing” things, after all. A person can not “believe” in ghosts, and yet feel scared in a “haunted” house. Or more relevant to the current thread, a person can “believe” they are attractive and worthy and have every right to go up to someone and say “hi”, yet still be unable to do it.
IOW, epistemic and instrumental beliefs are compartmentalized in humans by default… which makes a mockery of the idea that manipulating your instrumental beliefs will somehow stain your epistemic purity.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them).
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course; I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al., whose research and meta-research on a dozen different types of change goals is summarized in the book “Changing For Good”.)
[Originally, I was going to include a bunch of information about my work with personal development clients that reflects the pattern described in the above-mentioned research, but since you appear to prefer research to experience, I’ve decided to skip it.]
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course; I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al., whose research and meta-research on a dozen different types of change goals is summarized in the book “Changing For Good”.)
I place a high value on not financially encouraging bad behaviour, and selling non-evidence-based interventions to people who may be desperate, irrational, or ill-informed, but who don’t deserve to be robbed, counts as bad behaviour to me.
There’s a loss of utility beyond the mere loss of cash to myself if I give cash to a scammer, because it feeds the scammer and potentially encourages other scammers to join the market. This is the flip side of the same coin: there is a corresponding gain in utility when I give cash to a worthwhile charity.
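As a toy illustration of that externality, with every figure invented solely for the example: the utility change from handing over cash includes a term for what the payment does to the wider market, and that term points in opposite directions for scammers and worthwhile charities.

```python
# Toy accounting: net utility = direct benefit to me - cash handed over
#                               + effect on the wider market (negative if the
#                                 payment funds and encourages more scams,
#                                 positive if it funds a worthwhile charity).
# All numbers are invented for illustration only.

def net_utility(direct_benefit, cash, market_effect):
    return direct_benefit - cash + market_effect

print(net_utility(direct_benefit=0, cash=100, market_effect=-50))   # scammer: -150
print(net_utility(direct_benefit=0, cash=100, market_effect=+150))  # charity:  +50
```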
People willing to spend money on attracting a mate have a wide variety of options as to how they spend it, after all. If they are willing to actually change, it’s not as if the only way to demonstrate this is to spend money on PUA training rather than on clothes, transportation, food, drink, taxi fares and so on.
As I mentioned in the other sub-thread, it’s really tiring to have you continually reframing what I say to make attackable arguments out of it. Unless your sole interest in LessWrong is to score rhetorical points (i.e., trolling), it’s a rather bad idea to keep doing that to people.
Note that the text you quoted from my comment has nothing to do with PUA. It is a portion of my evidence that your professed approach to personal development (i.e., trying things only if they cost nothing) is Not Winning.
On LessWrong, rationality equals winning, not pretending to avoid losing. (Or more bluntly: attempting to signal your intelligence and status by avoiding the low-status work of actually trying things and possibly being mistaken.)
It is better to do something wrong—even repeatedly—and eventually succeed, than to sit on your ass and do nothing. Otherwise, you are less instrumentally rational than any random person who tries things at random until something works.
Meanwhile, any time that you do not spend winning is time spent losing, no matter how you spin it as some sort of intellectual superiority.
So, on that note, I will now return to activities with a better ROI than continuing this discussion. ;-)