I should disclose immediately that I am one of the people who find the PUA community distasteful on a variety of levels, intellectual and ethical, and this may colour my viewpoint.
The PUA community may present themselves, and think of themselves, as a “disreputable source of accurate information” but in the absence of controlled trials I don’t think the claim to accuracy is well-founded. Sticking strictly to the scientific literature is not so much ignoring the elephant in the room as suspending judgment as to whether the elephant exists until we can turn the lights on.
If it’s been said already I apologise, but it seems obvious to me that an ethical rationalist’s goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties, and that scientific evidence about how to find suitable partners and behave in the relationship so as to maximise utility for both partners is a great potential source of human happiness. It’s obvious from even the briefest perusal of PUA texts that the PUA community are concerned very much with maximising their own utility and talking down the status of male outgroup members and women in general, but not with honestly seeking means to maximise the utility of all stakeholders.
Given that their methodology is incompatible with scientific reasoning and their attitudes incompatible with maximising global utility for all sentient stakeholders, I think it’s quite correct to leave their claims out of a LW analysis of human sexual relationships.
Given that their methodology is incompatible with scientific reasoning
They write stuff on their version of arXiv (called pick-up forums), then they go out and try it, and if it works repeatably it is incorporated into PU-lore.
What definition of science did you have in mind that this doesn’t fit?
There are a significant number of methodological problems with their evidence-gathering.
PUAs don’t change just one variable at a time, nor do they keep strict track of what they changed and when so that a multivariate regression analysis could be run later. Instead they change lots of variables at once: a PUA would advocate that a “beta” change their clothes, scent, social environment(s), social signalling strategies and so forth all at once and see if their sexual success rate changed. But even if this works, you don’t know which changes did what.
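To make the identifiability problem concrete, here is a minimal sketch (with invented data) of why the all-at-once makeover defeats any later analysis: when clothes, scent and signalling always change together, the regression design matrix is rank-deficient, so no amount of data collected that way can apportion credit among the individual changes.

```python
import numpy as np

# Toy illustration, not real data: a "beta" flips clothes, scent and
# signalling together, so the three predictors are perfectly collinear.
rng = np.random.default_rng(0)
n = 200
makeover = rng.integers(0, 2, n)  # 0 = before, 1 = full makeover
clothes, scent, signalling = makeover, makeover, makeover
X = np.column_stack([np.ones(n), clothes, scent, signalling])

# Rank 2, not 4: an intercept plus one combined "makeover" direction.
# The separate effects of clothes, scent and signalling cannot be
# estimated from data gathered this way, no matter the sample size.
print(np.linalg.matrix_rank(X))
```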
The people doing the observation are the same people conducting the experiment which is obviously incompatible with proper blinding.
The people reporting the data stand to gain social status in the PUA hierarchy if they report success, and hence have an incentive to misreport their actual data. When a PUA reports that they successfully obtained coitus on one out of six attempts using a given methodology it is reasonable to suspect that some such reports come from people who actually took sixteen attempts, or from people who failed to obtain coitus given sixteen attempts and went home to angrily masturbate and then post on a PUA forum that they had obtained success. We can’t tell what the real success rate is without observing PUAs in the wild.
Even assuming honest reporting it seems intuitively likely that PUAs, like believers in psychic powers, are prone to reporting their hits and forgetting their misses. It’s a known human trait to massage our internal data this way and barring rigorous methodological safeguards it’s a safe assumption that this will bias any reported results.
There’s no comparison with a relevant base rate, which is classic base rate neglect in action. We don’t know, for example, how the success rate of a well-groomed, well-spoken person who does not employ PUA social signalling tactics compares with that of a similarly groomed and comported person who does.
A successful PUA was mentioned as having obtained coitus ~300 times out of ~10 000 approaches. That’s useless unless we know what success rate other methodologies would have produced. In any case people aren’t naturally good enough statisticians to detect variations in the frequency of a phenomenon that occurs one time in 33 at best, given a sample size for any one experiment in the tens at most.
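A quick simulation (all numbers illustrative) shows how hopeless informal frequency-tracking is at this base rate: even if a rival method genuinely doubled the roughly 1-in-33 success rate, fifty approaches under each method would barely distinguish the two.

```python
import numpy as np

# Illustrative power check: baseline success ~1/33 vs. a hypothetical
# method that doubles it, with 50 approaches tried under each method.
rng = np.random.default_rng(0)
p_base, p_better, n, sims = 1 / 33, 2 / 33, 50, 100_000
base_hits = rng.binomial(n, p_base, sims)
better_hits = rng.binomial(n, p_better, sims)

# How often does the genuinely better method even win on raw counts?
# Only about two times in three; nowhere near reliable discrimination.
print((better_hits > base_hits).mean())
```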
PUA mythology seems to me to have built-in safeguards against falsifiability. If a woman rejects a PUA then it can be explained away as her being “entitled” or “conflicted” or something similar. If a woman chooses a “beta” over a PUA then it can be explained away in similar terms or by saying that she has low self-esteem and doesn’t think she is worthy of an “alpha”, and/or postulating that if an “alpha” came along she would of course engage in an extra-marital affair with the “alpha”. As long as the PUAs are obtaining sex some of the time, or are claiming they are doing so, their theories aren’t falsifiable.
We shouldn’t trust a PUA’s reported opinion about their ability to obtain sex more often than chance any more than we should trust a claimed psychic’s reported opinion about their ability to predict the future more often than chance. Obviously our prior probability that they are reporting true facts about the universe should be higher for the PUA since their claims do not break the laws of physics, but their testimony should not give us strong reason to shift our prior.
PUA mythology seems to me to have built-in safeguards against falsifiability. … As long as the PUAs are obtaining sex some of the time, or are claiming they are doing so, their theories aren’t falsifiable.
Note that this may be a feature, not a bug: a PUA with unwavering belief in their method will likely exude more confidence, regardless of the method employed.
I remember one pickup guru describing how when he was younger, he’d found this poem online that was supposed to be the perfect pickup line… and the first few times he used it, it was, because he utterly believed it would work. Later, he had to find other methods that allowed him to have a similar level of belief.
As has been mentioned elsewhere on LW, belief causes people to act differently—often in ways that would be difficult or impossible to convincingly fake if you lacked the belief. (e.g. microexpressions, muscle tension, and similar cues)
To put it another way, even the falsifiability of PUA theory is subject to testing: i.e., do falsifiable PUA theories work better or worse than unfalsifiable ones? If unfalsifiable ones produce better results, then it’s a feature, not a bug. ;-)
Only in the same sense that the placebo effect is a “feature” of evidence-based medicine.
It’s okay if evidence-based medicine gets a tiny, tiny additional boost from the placebo effect. It’s good, in fact.
However when we are trying to figure out whether or not a treatment works we have to be absolutely sure we have ruled out the placebo effect as the causative factor. If we don’t do that then we can never find out which are the good treatments that have a real effect plus a placebo effect, and which are the fake treatments that only have a placebo effect.
Only if it turned out that method absolutely, totally did not matter and only confidence in the method mattered would it be rational to abandon the search for the truth and settle for belief in an unfalsifiable confidence-booster. It seems far more likely to me that there will in fact be approach methods that work better than others, and that only by disentangling the confounding factor of confidence from the real effect could you figure out what the real effect was and how strong it was.
It seems far more likely to me that there will in fact be approach methods that work better than others, and that only by disentangling the confounding factor of confidence from the real effect could you figure out what the real effect was and how strong it was.
This really, really underestimates the number of confounding factors. For any given man, the useful piece of information is what method will work for him, for women that:
Would be happy with him, and
He would be happy with
(Where “with” is defined as whatever sort of relationship both are happy with.)
This is a lot of confounding factors, and it’s pretty central to the tradeoff described in this post: do you go for something that’s inoffensive to lots of people, but not very attractive to anyone, or something that’s actually offensive to most people, but very attractive to your target audience?
You can’t do group randomized controls with something where individuality actually does count.
This is especially true of PUA advice like, “be in the moment” and “say something that amuses you”. How would you test these bits of advice, for example, while holding all other variables unchanged? By their very definition, they’re going to produce different behavior virtually every time you act on them.
There are two classes of claim here we need to divide up, but they share a common problem. First the classes, then the problem.
The first class is claims that are simply unfalsifiable. If there is no way even in theory that a proposition could be confirmed or falsified then that proposition is simply vacuous. There is nothing to say about it except that rational agents should discard the claim as meaningless and move on. If any element of PUA doctrine falls into this category then for LW purposes we should simply flag it as unfalsifiable and move on.
The second class is claims that are hard to prove or disprove because there are multiple confounding factors, but which with proper controls and a sufficiently large sample size we could in theory confirm or disconfirm. If a moderate amount of cologne works better than none at all or a large amount of cologne, for example, then if we got enough men to approach enough women then eventually if there’s a real effect we should be able to get a data pool that shows statistical significance despite those confounding effects.
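For a sense of scale, here is the standard two-proportion sample-size calculation with invented effect sizes (success rising from 3% to 6% with moderate cologne): detecting even a doubling at conventional significance and power takes on the order of 750 approaches per arm, far beyond any casual field report.

```python
from math import ceil, sqrt

# Hypothetical effect: moderate cologne lifts success from 3% to 6%.
p1, p2 = 0.03, 0.06
z_alpha, z_beta = 1.96, 0.84  # two-sided alpha = 0.05, power = 0.80
p_bar = (p1 + p2) / 2

# Textbook sample-size formula for comparing two proportions.
n_per_arm = (
    (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
     + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    / (p1 - p2) ** 2
)
print(ceil(n_per_arm))  # ~748 approaches per arm
```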
The common problem both classes of claims have is that a rationalist is immediately going to ask someone who proposes such a claim “How do you think you know this?”. If a given claim is terribly difficult to confirm or disconfirm, and nobody has yet done the arduous legwork to check it, it’s very hard to see how a rational agent could think it is true or false. The same goes except more strongly for unfalsifiable claims.
For a PUA to argue that X is true, but that X is impossible to prove, is to open themselves up to the response “How do you know that, if it’s impossible to prove?”.
If there is no way even in theory that a proposition could be confirmed or falsified then that proposition is simply vacuous. There is nothing to say about it except that rational agents should discard the claim as meaningless and move on. If any element of PUA doctrine falls into this category then for LW purposes we should simply flag it as unfalsifiable and move on.
Sure… as long as you separate predictions from theory. When you reduce a PUA theory to what behaviors you expect someone believing that theory would produce, or what behaviors, if successful, would result in people believing such theories, you then have something suitable for testing, even if the theory is nonsensical on its face.
Lots of people believe in “The Secret” because it appears to produce results, despite the theory being utter garbage. But then, it turns out that some of what’s said is consistent with what actually makes people “luckier”… so there was a falsifiable prediction after all, buried under the nonsense.
If a group of people claim to produce results, then reduce their theory to more concrete predictions first, then test that. After all, if you discard alchemy because the theory is bunk, you miss the chance to discover chemistry.
Or, in more LW-ish speak: theories are not evidence, but even biased reports of actual experience are evidence of something. A Bayesian reductionist should be able to reduce even the craziest “woo” into some sort of useful probabilistic information… and there’s a substantial body of PUA material that’s considerably less “woo” than the average self-help book.
In the simplest form, this reduction could be just: person A claims that they were unsuccessful with women prior to adopting some set of PUA-trained behaviors. If the individual has numbers (even if somewhat imprecise) and there are a large number of people similar to person A, then this represents usable Bayesian evidence for that set of behaviors (or the training itself) being useful to persons with needs and desires similar to person A’s.
This is perfectly usable evidence that doesn’t require us to address the theory or its falsifiability at all.
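As a toy sketch of how such reports combine in odds form (every number here is invented): each individual report may be only slightly more likely in a world where the training works, yet fifty of them multiply up appreciably, provided they are actually independent, which forum anecdotes largely are not.

```python
# Odds-form aggregation of weak anecdotes (illustrative numbers only).
lr_per_report = 1.05  # assumed likelihood ratio of one "it worked" report
n_reports = 50
prior_odds = 1.0      # even prior odds that the training helps people like A

posterior_odds = prior_odds * lr_per_report ** n_reports
print(round(posterior_odds, 1))  # ~11.5 : 1 under full independence;
# correlated retellings of the same few successes count roughly once
```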
Now, it is not necessarily evidence for the validity of person A’s favorite PUA theory!
Rather, it is evidence that something person A did differently was helpful for person A… and it remains an open question to determine what actually caused the improvement. For example, could it simply be that receiving PUA training somehow changes people? That it motivates them to approach women repeatedly, resulting in more confidence and familiarity with approaching women? Any number of other possible factors?
In other words, the actual theory put forth by the PUAs doing the teaching shouldn’t necessarily be at the top of the list of possibilities to investigate, even if the teaching clearly produces results...
And using theory-validity as a screening method for practical advice is pretty much useless, if you have “something to protect” (in LW speak). That is, if you need a method that works in an area where science is not yet settled, you cannot afford to discard practical advice on the basis of questionable theory: you will throw out way too much of the available information. (This applies to the self-help field as much as PUA.)
I’m perfectly happy to engage with PUA theories on that level, but the methodological obstacles to collecting good data are still the same. So the vital question is still the same, which is “How do these people think they know these things?”.
The only difference is that instead of addressing the question to the PUA who believes specific techniques A, B and C bring about certain outcomes, we address the question to the meta-PUA who believes that although specific techniques A, B and C are placebos that belief in the efficaciousness of those techniques has measurable effects.
However PUA devotees might not want to go down this argumentative path because the likely outcome is admitting that much of the content on PUA sites is superstition, and that the outcomes of the combined arsenal of PUA tips and techniques cannot currently be distinguished from the outcomes of a change of clothes, a little personal grooming and asking a bunch of women to go out with you.
PUA devotees like to position themselves as gurus with secret knowledge. If it turns out that the entire edifice is indistinguishable from superstition then they would be repositioned as people with poor social skills and misogynist world-views who reinvented a very old wheel and then constructed non-evidence-based folk beliefs around it.
So depending on the thesis you are arguing for, it might be safer to argue that PUA techniques do have non-placebo effects.
that the outcomes of the combined arsenal of PUA tips and techniques cannot currently be distinguished from the outcomes of a change of clothes, a little personal grooming and asking a bunch of women to go out with you.
Even if that were true (and I don’t think that’s anywhere near the case), you keep dropping out the critical meta-level for actual human beings to achieve instrumental results: i.e., motivation.
That is, even if “a change of clothes, a little grooming, and asking a bunch of women out” were actually the best possible approach, it’s kind of useless to just leave it at that, because quite a lot of actual human beings are incapable of motivating themselves to actually DO the necessary steps, using mere logical knowledge without an emotional component. (On LW, people generally use the term “akrasia” to describe this normal characteristic of human behavior as if it were some sort of strange and unexpected disease. ;-) )
To put it another way, the critical function of any kind of personal development training is to transmit a mental model to a human brain in a way such that the attached human will act in accordance with the model so transmitted.
After all, if this were not the case, then self-help books of any stripe could consist simply of short instruction sheets!
If it turns out that the entire edifice is indistinguishable from superstition …. it might be safer to argue that PUA techniques do have non-placebo effects.
“Placebo” and “superstition” are not interchangeable concepts. A placebo is a real effect, a superstition is an imaginary one.
That is, if I think my baseball batting performance is improved when I wear a red scarf, and it is, that’s a placebo effect. (Belief creating a real result.) If I think that it’s improved, but it actually isn’t, then that’s a superstition.
This means that placebo effects are instrumentally more useful than superstitions… unless of course the superstition gets you to do something that itself has a beneficial effect.
To the extent that PUA uses placebo effects on the performer of a technique, the usefulness of the effect is in the resulting non-placebo response of the recipient of the technique.
Meanwhile, there are tons of specific pieces of PUA advice that are easily testable in miniature that needn’t rely on either sort of effect.
For example, if PUAs of the Mystery school predict that “a set will open more frequently if you body-rock away from the group before establishing a false time constraint”, that prediction should be easily testable to determine its truth or falsehood, given objectively reducible definitions of “set”, “open”, “body rock”, and “false time constraint”. (All of which terms the Mystery method does quite objectively reduce.)
So, you could teach a bunch of people to do these things, send them out and videotape ’em, and then get a bunch of grad students to grade the sets as to whether they opened and how quickly (without seeing the PUA’s behavior), and voila… testable prediction.
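To sketch how the graded tapes might be analysed (counts invented for illustration), a simple two-proportion z-test on the “opened” rates would do:

```python
from math import sqrt

# Hypothetical grading results from the taped approaches.
opened_a, n_a = 34, 80  # body-rock + false time constraint
opened_b, n_b = 22, 80  # control approaches without them

p_a, p_b = opened_a / n_a, opened_b / n_b
p_pool = (opened_a + opened_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
print(round(z, 2))  # ~1.99: right at the conventional 5% threshold for 160 sets
```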
On the level of such specific, immediately-responded-to actions and events, ISTM that PUAs have strong motivation to eliminate non-working or negatively-reinforced behaviors from their repertoire, especially when in the process of inventing them.
Of course, a given PUA guru is unlikely to notice superstitious “extras” that could be removed; I have observed that it is students of those gurus, or new, competing gurus, who push back with, “I haven’t seen any need to body rock”, or “Opinion openers are unnecessary”, or “indirect game is pointless”, etc. So, even though individual schools don’t often move in the direction of discarding old techniques, the field as a whole seems to evolve towards simplification where possible.
Indeed, there is at least one PU guru who says that nearly all of Mystery method is pointless superstition in the sense that guys who jump through all its hoops are succeeding not because of what they’re doing in the process, so much as what they’re not doing.
That, in essence, women either find you attractive or they don’t, and all that your “game” needs to do is not blow the attraction by saying or doing something stupid. ;-) His specific advice seems to focus more on figuring out how to tell whether a particular woman is attracted to you, and how to move as quickly as possible from that to doing something about it.
Note: I don’t believe this guru is saying that Mystery’s advice about social skills is wrong, merely that the use of those skills can be completely superfluous to a goal of having sex with attractive women, vs. a goal of being friends with groups of people and hanging out with them before having sex with some of the women, or getting into social circles containing high-status women. And I think he’s largely correct in this stance, especially if your objective isn’t to have sex with the highest-status beautiful woman present (which is Mystery method’s raison d’être).
If your objective is to meet, say, the kinkiest girl with the dirtiest mind, or the sweetest, friendliest one, or the most adventurous one, or really almost any other criteria, Mystery’s elaborate refinements are superfluous, as they were developed to help him rapidly social-climb his way into his target’s circle of friends and disarm their ready defenses against guys coming to hit on her.
To put it another way: Mystery is using a narrow, red-line strategy specifically tuned to women who are the most attractive to a broad, blue-line spectrum of guys… because they were also his personal red line. If your red line is not those women, then Mystery method is not the tool you should use.
PUA style, in short, is very individual. Once you add back in the context of a given guru’s personality, physique, goals, and other personal characteristics, you find that it’s nowhere near as broad-spectrum/universal as the guru’s declarations appear. Once, I watched some videos online from a conference of PUA gurus who often had what sounded like contradictory advice… but which was intended for people with different personalities and different goals.
For example, one guy focused on making lots of female friends and going out with them a lot—he enjoys it, and then they play matchmaker for him. Another emphasized a lone-wolf strategy of “forced IOIs”, which is PUA code for acting in a way that forces a woman to very quickly indicate (nonverbally) whether she has any interest in him. Just looking at these two guys, you could tell that each had chosen a method that was a better match for their personality, and that neither would be happy using the other’s method, nor would they each be meeting the kind of women they wanted to meet!
So that’s why I keep saying that you’re ignoring the fact that PUA is not a single uniform thing, any more than, say, weight loss is. In theory, everybody can eat less and move more and this will make them lose weight. In practice, it ain’t nearly that simple: different people have different nutritional needs, for example, so the diet that’s healthy for one person can be very bad for another.
Thus, if you want, say, “honest, equal, supportive” PUA, then by all means, look for it. But don’t expect to find One True PUA Theory that will make all women do your bidding. It doesn’t exist. What exists in PUA is a vast assortment of vaguely related theories aimed at very individual goals and personality types.
(And, of more direct relevance to this particular sub-thread, far too many confounding factors to be of much use to group studies, unless you plan to run a lot of experiments.)
Speaking broadly, if the goal is Rational Romantic Relationships then any advice which doesn’t have actual existing evidence to back it up is not advice rational people should be taking.
If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence, just as we dismiss the alt-med gurus who flog different forms of alternative medicine without evidence. Without evidence PUA is no more the elephant in the Rationalist Romantic Relationship room than ayurveda is an elephant in the medical science room.
As far as the superstition/placebo distinction you are making I think you are simply wrong linguistically speaking. Nothing stops a superstition being a placebo, and in fact almost all of alternative medicine could legitimately be described as placebo and superstition.
Superstitions arise because of faulty cause/effect reasoning and may indeed have a placebo effect, like the red scarf you mention. I suspect but cannot prove that some parts of PUA doctrine arise in exactly the same way that belief in a lucky scarf arises. Someone tries it, they get lucky that time, and so from then on they try it every time and believe it helps.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships. If they aren’t testable, then they’re unfalsifiable beliefs and rationalists should be committed to discarding unfalsifiable beliefs. PUA looks to me more like folklore than science, at this stage.
If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships
You’re using a very non-LW definition of “rational” here, since the principles of Something To Protect, and avoiding the Failures Of Eld Science would say that it’s your job to find something and test it, not to demand that people bring you only advice that’s already vetted.
If you wait for Richard Wiseman to turn “The Secret” into “Luck Theory”, and you actually needed the instrumental result, then you lost.
That is, you lost the utility you could have had by doing the testing yourself.
For medical outcomes, doing the testing yourself is a bad idea because the worst-case scenario isn’t that you don’t get your goal, it’s that you do damage to yourself or die.
But for testing PUA or anything in personal development, your personal testing costs are ridiculously low, and the worst case is just that you don’t get the goal you were after.
This means that if the goal is actually important, and whatever scientifically-validated information you have isn’t getting you the goal, then you don’t just sit on your ass and wait for someone to hand you the research on a platter.
Anything else isn’t rational, where rational is defined (as on LW) as “winning”.
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem. Bayes is not a magic wand, and GIGO still applies. Anecdotal evidence counts but you have to correctly estimate the probability that you would hear that anecdote in a world where PUA methods were just placebos sold to the sex-starved and nerdy, as opposed to the probability that you would hear that anecdote in a world where PUA methods have some objectively measurable effect. I think most of the time the correct estimate is that those probabilities are barely distinguishable at best.
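To make that concrete with invented numbers: if success anecdotes are nearly as common in a placebo world as in a real-effect world, a single anecdote barely moves a 50/50 prior.

```python
# Single-anecdote Bayesian update under assumed (made-up) likelihoods.
p_anecdote_if_real = 0.90     # assumption: working methods generate boasts
p_anecdote_if_placebo = 0.80  # assumption: so do placebo methods
prior = 0.50

posterior = (p_anecdote_if_real * prior) / (
    p_anecdote_if_real * prior + p_anecdote_if_placebo * (1 - prior)
)
print(round(posterior, 3))  # ~0.529: the anecdote is worth almost nothing
```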
A rationalist should have a clear distinction between Things That Are Probably True, and Things That Might Be True and Would Be Interesting To Try. The goal of the OP was to sum up the state of human knowledge with regard to Things That Are Probably True, which is the standard scholarly starting point in research.
It seemed to me that PUA techniques, lacking any objective evidence to back them up, should be filed under Things That Might Be True and Would Be Interesting To Try but that their devotees were claiming that they were the elephant in the OP’s room and that they had been unjustly excluded from the set of Things That Are Probably True.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them). They might work, and even if they are just placebos you might get lucky anyway. However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment, or to try to squeeze them in to a list of Things That Are Probably True.
I mean Bayesian reductionist evidence. Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
I think this is a misunderstanding of the correct application of Bayes’ Theorem.
That a comment opening with this quote-reply pair is voted above zero troubles me. It is a direct contradiction of one of the most basic premises of this site.
I would have voted it down were it not for the rest of the paragraph cited, which basically comes down to “anecdotes are Bayesian evidence, but with caveats related to the base rate, and not always positive evidence”. Which is, as best I can tell, correct. In isolation, the opening sentence does seem to incorrectly imply that anecdotes don’t count at all, and so I’d have phrased it differently if I was trying to make the same point, but a false start isn’t enough for a downvote if the full post is well-argued and not obviously wrong.
In context, I interpreted pjeby to be saying that anecdotes counted as evidence which should lead a Bayesian rationalist to believe the truth of PUA claims. If that was not their intention I got them totally wrong.
However if I interpreted them correctly they were indeed applying Bayes incorrectly, since we should expect a base rate of PUA-affirming anecdotes even if PUA techniques are placebos, and even in the total absence of any real effects whatsoever. It’s not evidence until the rate of observation exceeds the base rate of false claims we should expect to hear in the absence of a non-placebo effect, and if you don’t know what the base rate is you don’t have enough information to carry out a Bayesian update. You can’t update without P(B).
However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment
The truth of this statement depends heavily on how you unpack “believe”. Brains have more than one way of “believing” things, after all. A person can not “believe” in ghosts, and yet feel scared in a “haunted” house. Or more relevant to the current thread, a person can “believe” they are attractive and worthy and have every right to go up to someone and say “hi”, yet still be unable to do it.
IOW, epistemic and instrumental beliefs are compartmentalized in humans by default… which makes a mockery of the idea that manipulating your instrumental beliefs will somehow stain your epistemic purity.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them).
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course, I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al; whose research and meta-research of a dozen different types of change goals is summarized in the book “Changing For Good”.)
[Originally, I was going to include a bunch of information about my work with personal development clients that reflects the pattern described in the above-mentioned research, but since you appear to prefer research to experience, I’ve decided to skip it.]
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course, I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al; whose research and meta-research of a dozen different types of change goals is summarized in the book “Changing For Good”.)
I place a high value on not financially encouraging bad behaviour, and selling non-evidence-based interventions to people who may be desperate, irrational or ill-informed but who don’t deserve to be robbed counts as bad behaviour to me.
There’s a loss of utility beyond the mere loss of cash to myself if I give cash to a scammer, because it feeds the scammer and potentially encourages other scammers to join the market. This is the flip side of the coin that there is a gain in utility when I give cash to a worthwhile charity.
People willing to spend money on attracting a mate have a wide variety of options as to how they spend it, after all. If they are willing to actually change it’s not as if the only way to demonstrate this is to spend money on PUA training rather than clothes, transportation, food, drink, taxi fares and so on.
As I mentioned in the other sub-thread, it’s really tiring to have you continually reframing what I say to make attackable arguments out of it. Unless your sole interest in LessWrong is to score rhetorical points (i.e., trolling), it’s a rather bad idea to keep doing that to people.
Note that the text you quoted from my comment has nothing to do with PUA. It is a portion of my evidence that your professed approach to personal development (i.e., trying things only if they cost nothing) is Not Winning.
On LessWrong, rationality equals winning, not pretending to avoid losing. (Or more bluntly: attempting to signal your intelligence and status by avoiding the low-status work of actually trying things and possibly being mistaken.)
It is better to do something wrong—even repeatedly—and eventually succeed, than to sit on your ass and do nothing. Otherwise, you are less instrumentally rational than any random person who tries things at random until something works.
Meanwhile, any time that you do not spend winning, is time spent losing, no matter how you spin it as some sort of intellectual superiority.
So, on that note, I will now return to activities with a better ROI than continuing this discussion. ;-)
(And sometimes hearing them [anecdotes] counts as evidence against the phenomenon!)
Indeed. Careful reading is required when investigating any kind of “folklore”, as occasionally self-help authors provide anecdotes that (in the details) provide a very different picture of what is happening than what the author is saying is the point or moral of that anecdote.
Hi… I haven’t read this whole thread, but I know one very important thing that immediately discredited PhilosophyTutor in my view.
I strongly feel that the best PUAs are not at all about merely extracting something from the woman they interact with. They claim to live by the motto “leave her better than you found her”. From my impression of Casanova, the ultimate PUA, he lived by that too.
You’re absolutely right about the methodological issues. I’ve thought of them myself; besides those, of course, there’s the enormous survivorship bias.
But it is far more irrational to discount their findings on that ground alone, because the alternative, academic studies, are blinded by exactly the same ignore-the-elephant and keep-things-proper attitude that the original poster of this thread pointed out.
Take this into account: a lot of good PUAs may fall far short of the ideal amount of rigor, but at the same time far exceed the average person’s rigor. I can’t condemn those who, without the perspective gained from this site, nevertheless seek to quantify things and really understand them.
Hi… I haven’t read this whole thread, but I know one very important thing that immediately discredited PhilosophyTutor in my view. I strongly feel that the best PUAs are not at all about merely extracting something from the woman they interact with. They claim to live by the motto “leave her better than you found her”. From my impression of Casanova, the ultimate PUA, he lived by that too.
How do they know whether they fulfill this motto well?
Take this into account: a lot of good PUAs may fall far short of the ideal amount of rigor, but at the same time far exceed the average person’s rigor.
Given that their methodology is incompatible with scientific reasoning
Not something you have shown (or something that appears remotely credible).
and their attitudes incompatible with maximising global utility for all sentient stakeholders,
Not much better and also not a particularly good reason to exclude an information source from an analysis. (An example of a good reason would be “people say a bunch of prejudicial nonsense for all sorts of reasons and everybody concerned ends up finding it really, really annoying”).
it seems obvious to me that an ethical rationalist’s goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties
It is not clear to me that utilities can be easily compared. What tradeoff between my satisfaction and my partner’s satisfaction should I be willing to accept? I can see how to elicit my preferences (for things like partner happiness, relationship duration, and so on) and try to predict how the consequences of my actions will impact my preferences, but I don’t quite see how to add utilities, or compare the amount of satisfaction I could provide to multiple potential partners.
It’s obvious from even the briefest perusal of PUA texts that the PUA community are concerned very much with maximising their own utility and talking down the status of male outgroup members and women in general, but not with honestly seeking means to maximise the utility of all stakeholders.
It’s not clear that they want to talk down the status of women in general. Men becoming more attractive and less annoying to women seems to be better for women, and there’s quite a bit in the PUA literature of how to keep a long-term relationship going, if that’s what you want to do.
You are absolutely right that utilities cannot be easily compared and that this is a fundamental problem for utilitarian ethics.
We can approximate a comparison in some cases using proxies like money, or in some cases by assuming that if we average enough people’s considered preferences we can approach a real average preference. However these do not solve the fundamental problem that there is no way of measuring human happiness such that we could say with confidence “Action A will produce a net 10 units of happiness, and Action B will produce a net 11 units of happiness”.
In the case of human sexual relationships what you’d really have to do is conduct a longitudinal study looking at variables like reported happiness, incidence of mental illness, incidence of suicide, partner-assisted orgasms per unit time, longevity and so on.
That said this difficulty in totalling up net utilities is not a moral blank cheque. If women report distress after a one night stand with a PUA followed by cessation of contact then that has to be taken as evidence of caused disutility, and you can’t remove the moral burden that entails by pointing out that calculating net utility is difficult or postulating that their distress is their fault because they are “entitled”/“in denial”/etc.
conduct a longitudinal study looking at variables like
While this would give people more knowledge about how their actions turn into consequences, this doesn’t help people decide which consequences they prefer, and so only weakly helps them decide which actions they prefer.
If women report distress after a one night stand with a PUA followed by cessation of contact then that has to be taken as evidence of caused disutility, and you can’t remove the moral burden that entails
So, let’s drop the term utility, here, and see if that clarifies the moral burden. Suppose Bob and Alice go to a bar and meet; they both apply seduction techniques; they have sex that night. Alice’s interest in Bob increases; Bob’s interest in Alice decreases. What moral burdens are on each of them, and where did those moral burdens come from?
While this would give people more knowledge about how their actions turn into consequences, this doesn’t help people decide which consequences they prefer, and so only weakly helps them decide which actions they prefer.
I think it does help if people have pre-existing views about whether they like the internal experience of happiness, mental health, continued life, orgasms and so on, and about whether they can legitimately generalise those views to others. I don’t think I would be making an unreasonable assumption if I assumed that an arbitrarily chosen woman in a bar would most likely have a preference for the internal experience of happiness, mental health, continued life, orgasms and so on and hence that conduct likely to bring about those outcomes for her would produce utility and conduct likely to bring about the opposite would produce negative utility.
So, let’s drop the term utility, here, and see if that clarifies the moral burden. Suppose Bob and Alice go to a bar and meet; they both apply seduction techniques; they have sex that night. Alice’s interest in Bob increases; Bob’s interest in Alice decreases. What moral burdens are on each of them, and where did those moral burdens come from?
There is not enough information to say, and your chosen scenario is possibly not the best one for exploring the ethics of PUA behaviour, since it firstly postulates that the female participant is also using seduction techniques (hopefully defined in some more specific sense than just trying to be attractive), and secondly it skips entirely over the ethical question of approaching someone in the first place and possibly getting them to participate in sex acts they may not have planned to engage in. By jumping straight to the next morning and asking what the moral path forward is from that point, this scenario avoids arguably the most important ethical questions about PUA behaviour.
However I will answer the question as posed, to avoid accusations that I am simply avoiding it. From a utilitarian perspective the moral burden is simply to maximise utility, so we need to know what Bob’s and Alice’s utility functions are, and what Bob and Alice should reasonably think the other party’s utility function is like.
It might well be that Bob has neither the interest nor the ability to sustain a mutually optimal ongoing relationship with Alice, and in that case the utility-maximising path from that point forward, and hence the ethical option, is for Bob to leave and not contact Alice again. However if Bob knew in advance that this was the case and had reason to believe that Alice’s utility function placed a negative value on participating in a one night stand with a person who was not interested in a long-term relationship, then Bob behaved unethically in getting to this position since he knowingly brought about a negative-utility outcome for a moral stakeholder.
I don’t think I would be making an unreasonable assumption if I assumed that an arbitrarily chosen woman in a bar would most likely have a preference for the internal experience of happiness, mental health, continued life, orgasms and so on and hence that conduct likely to bring about those outcomes for her would produce utility and conduct likely to bring about the opposite would produce negative utility.
Knowing that her weights on those things are positive gets me nowhere. What I need to know are their relative strengths, and this seems like an issue where (heterosexual) individuals are least poised to be able to generalize their own experience. It seems likely that a man could go through life thinking that everyone enjoys one night stands and sleeps great afterwards, and not realize until reading PUA literature that women often freak out after them.
the female participant is also using seduction techniques (hopefully defined in some more specific sense than just trying to be attractive)
Suppose she flirts, or the equivalent (that is, rather than just seeking general attraction, she seeks targeted attraction at some point). If she never expresses any interest, it’s unlikely she and Bob will have sex (outside of obviously unethical scenarios).
this scenario avoids arguably the most important ethical questions about PUA behaviour.
What question do you think is most important?
we need to know what Bob’s and Alice’s utility functions are, and what Bob and Alice should reasonably think the other party’s utility function is like.
Suppose Bob and Alice both believe that actions reveal preferences.
Bob behaved unethically in getting to this position since he knowingly brought about a negative-utility outcome for a moral stakeholder.
Suppose Alices enjoy one night stands, and Carols regret one night stands, though they agree to have sex after the first date. When Bob meets a woman, he can’t expect her to honestly respond whether she’s a Carol or an Alice if he asks her directly. What probability does he need that a woman he seduces in a bar will be an Alice for it to be ethical to seduce women in bars?
As well, if he believes that actions reveal preferences, should he expect that one night stands are a net utility gain or loss for Carols?
Knowing that her weights on those things are positive gets me nowhere. What I need to know are their relative strengths, and this seems like an issue where (heterosexual) individuals are least poised to be able to generalize their own experience. It seems likely that a man could go through life thinking that everyone enjoys one night stands and sleeps great afterwards, and not realize until reading PUA literature that women often freak out after them.
Hopefully research like that cited in the OP can help with that. In the meantime we have to do the best we can with what we have, and engage in whatever behaviours maximise the expected utility of all stakeholders based on our existing, limited knowledge.
What question do you think is most important?
I think the most important question is “Is it ethical to obtain sex by deliberately faking social signals, given what we know of the consequences for both parties of this behaviour?”. A close second would be “Is it ethical to engage in dominance-seeking behaviour in a romantic relationship?”.
Suppose Alices enjoy one night stands, and Carols regret one night stands, though they agree to have sex after the first date. When Bob meets a woman, he can’t expect her to honestly respond whether she’s a Carol or an Alice if he asks her directly. What probability does he need that a woman he seduces in a bar will be an Alice for it to be ethical to seduce women in bars?
One approach would be to multiply the probability that you have an Alice by the positive utility an Alice gets out of a one night stand, multiply the probability that you have a Carol by the negative utility a Carol gets out of a one night stand, and see whether the expected gain outweighs the expected loss. That would be the strictly utilitarian approach to the question as proposed.
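A toy version of that strictly utilitarian arithmetic, with made-up probabilities and utilities:

```python
# Invented numbers purely to illustrate the expected-utility comparison.
p_alice = 0.6              # assumed probability the woman is an Alice
u_alice, u_carol = 4, -10  # assumed utilities of the night for each type

expected_utility = p_alice * u_alice + (1 - p_alice) * u_carol
print(round(expected_utility, 2))  # -1.6: negative, so on these numbers abstain

# Break-even probability that she is an Alice:
p_star = -u_carol / (u_alice - u_carol)
print(round(p_star, 3))  # 0.714: Bob needs >71% confidence she's an Alice
```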
If we’re allowed to try to get out of the question as proposed, which is poor form in philosophical discussion and smart behaviour in real life, a good utilitarian would try to find ways to differentiate Alices and Carols, and only have one night stands with Alices.
A possible deontological approach would be to say “Ask them if they are an Alice or a Carol, and treat them as the kind of person they present themselves to be. If they lied it’s their fault”.
The crypto-sociopathic approach would be to say “This is all very complicated and confusing, so until someone proves beyond any doubt I’m hurting people I’ll just go on doing what feels good to me”.
I think the most important question is “Is it ethical to obtain sex by deliberately faking social signals, given what we know of the consequences for both parties of this behaviour?”.
“Deliberately faking social signals”? But, but, that barely makes any sense. They are signals. You give the best ones you can. Everybody else knows that you are trying to give the best signals that you can and so can make conclusions about your ability to send signals and also what other signals you will most likely give to them and others in the future. That is more or less what socializing is. I suppose blatant lies in a context where lying isn’t appropriate and the elaborate creation of false high-status identities could qualify—but in those cases I would probably use a more specific description.
A close second would be “Is it ethical to engage in dominance-seeking behaviour in a romantic relationship?”.
A third would be “could the majority of humans have a romantic relationship without dominance-seeking behavior?” and the fourth : “would most people find romantic relationships anywhere near as satisfying without dominance-seeking behavior?” (My money is on the “No”s.)
One more question: What principles would help establish how much dominance seeking behavior is enough to break the relationship or in some other way cause more damage than it's worth, considering that part of dominance is ignoring feedback that it's unwelcome?
One more question: What principles would help establish how much dominance seeking behavior is enough to break the relationship or in some other way cause more damage than it’s worth
Yes, that part is hard, even on a micro scale. I have frequently been surprised at how much I underestimate the optimal amount of dominance seeking. I attribute this to mind-projection, i.e. “This means she would prefer me to do that? Wow. I’d never take that shit if it was directed at me. Hmm… I’m going to do that for her benefit and be sure not to send any signal that I am doing it for compliance. It’s actually kind of fun.”
(Here I do mean actual unambiguous messages—verbal or through blatantly obvious social signalling by the partner. I don’t mean just “some source says that’s what women want”.)
considering that part of dominance is ignoring feedback that it’s unwelcome?
Fortunately we can choose which dominance seeking behaviors to accept and reject at the level of individual behavioral trait. We could also, if it was necessary for a particular relationship, play the role of someone who is ignoring feedback but actually absorb everything and process it in order to form the most useful model of how to navigate the relationship optimally. On the flip side we can signal and screen to avoid dominance seeking behaviors that we particularly don’t want and seek out and naturally reward those that we do want.
“Deliberately faking social signals”? But, but, that barely makes any sense. They are signals. You give the best ones you can. Everybody else knows that you are trying to give the best signals that you can and so can make conclusions about your ability to send signals and also what other signals you will most likely give to them and others in the future. That is more or less what socializing is. I suppose blatant lies in a context where lying isn’t appropriate and the elaborate creation of false high-status identities could qualify—but in those cases I would probably use a more specific description.
PUAs have trouble grasping that there is a difference between appearance and reality, which is ironic in some ways. It’s an implicit part of their doctrine that if you can pass yourself off as an “alpha” that you really are an “alpha”, in the sense of being the kind of person that women really do want to mate with.
However it seems obvious to me that the whole PUA strategy is to spoof their external signals in a way they hope will fool women into drawing incorrect conclusions about what is actually going on within the PUA’s mind and what characteristics the PUA is actually bringing to the relationship table. It’s a way for socially awkward nerds to believe they are camouflaging themselves as rough, tough, confident super-studs and helping themselves to reproductive opportunities while so camouflaged.
They excuse this moral failing by saying “Everybody else is doing it, hence it’s okay for me to do it only more so”.
However it’s well-established in general societal morals that obtaining sex by deception is a form of non-violent rape. If you’re having sex with someone knowing that they are ignorant of relevant facts which if they knew them would stop them having sex with you, then you are not having sex with their free and informed consent.
The fact that someone is a PUA using specific PUA techniques to misrepresent their real mind-state seems to me like highly relevant information in relationship decision-making.
A third would be “could the majority of humans have a romantic relationship without dominance-seeking behavior?” and the fourth : “would most people find romantic relationships anywhere near as satisfying without dominance-seeking behavior?” (My money is on the “No”s.)
Is there proper scientific evidence for this? If not do you acknowledge that this is at least potentially a moral excuse of the same form as “Everyone else is doing it, so it’s okay for me to do it”?
I suspect it would actually turn out that correctly socialised people would prefer and flourish more completely in relationships which are free of dominance games, and I think my naive folk-psychological guesswork is just as good as yours.
They excuse this moral failing by saying “Everybody else is doing it, hence it’s okay for me to do it only more so”.
I find that those with any significant degree of PUA competence are not particularly inclined to try to excuse themselves to others. Apart from being an unhealthy mindset to be stuck in, it sends all the wrong signals. They would instead block out any hecklers and go about their business. If people try to shame them specifically while they are flirting or socializing they may need to handle the situation actively, but it is almost certainly not going to be with excuses.
However it’s well-established in general societal morals that obtaining sex by deception is a form of non-violent rape. If you’re having sex with someone knowing that they are ignorant of relevant facts which if they knew them would stop them having sex with you, then you are not having sex with their free and informed consent.
Acting confident and suppressing nervousness is not rape.
Is there proper scientific evidence for this?
It is a third and fourth question added to a list. Unless the first two were supposed to be scientific proclamations this doesn’t seem to be an appropriate demand.
If not do you acknowledge that this is at least potentially a moral excuse of the same form as “Everyone else is doing it, so it’s okay for me to do it”?
No to the “if not” implication—not presenting proper scientific evidence wouldn’t make it an excuse. No to the equivalence of these questions to that form. Most importantly: nothing is an ‘excuse’ unless the person giving it believes they are doing something bad.
and I think my naive folk-psychological guesswork is just as good as yours.
I really don’t think naivety is a significant failing of mine.
I find that those with any significant degree of PUA competence are not particularly inclined to try to excuse themselves to others. Apart from being an unhealthy mindset to be stuck in, it sends all the wrong signals. They would instead block out any hecklers and go about their business. If people try to shame them specifically while they are flirting or socializing they may need to handle the situation actively, but it is almost certainly not going to be with excuses.
So far in this conversation those I have mentally labelled pro-PUA have invariably introduced scenarios where both parties are using “seduction techniques”, which I think is a dangerous term since it conflates honest signalling with spoofed signalling, or have claimed (as you did) that the idea of spoofing social signals “barely makes any sense”. I take those arguments to be excusing the act of spoofing social signals on the basis either that all women also spoof their social signals and that two wrongs make a right, or that there is in fact no such thing as social spoofing and that hence PUAs cannot be morally condemned for doing something which does not exist.
Acting confident and suppressing nervousness is not rape.
In and of itself, it seems to me that at least potentially it is deliberately depriving the target of access to relevant facts that they would wish to know before making a decision whether or not to engage socially, sexually or romantically with the suppressor.
However unless you believe that pick-up targets’ relevant decision-making would be totally unaffected by the knowledge that the person approaching them was a PUA using specific PUA techniques, then concealing that fact from the pick-up target is an attempt to obtain sex without the target’s free and informed consent. If you know fact X, and you know fact X is a potential deal-breaker with regard to their decision whether or not to sleep with you, you have a moral obligation to disclose X.
I really don’t think naivety is a significant failing of mine.
“In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know”.
Socrates
Edit in response to edit: I was asked what I thought the most important ethical questions were with regard to PUA, and answered that question with two ethical questions. You responded by asking two factual questions of your own, which if answered in the negative would make my second question redundant, and stated that your money (which since you are posting here I took to mean that you have a Bayesian conviction that your answer is more likely to be right than not) was on the answer to those questions being negative.
You must have some basis for that probability estimate. Saying that it’s not an “appropriate demand” to ask for those bases doesn’t solve the problem that without access to your bases we can’t tell if your probability estimate is rational.
It is also a category error to put ethical questions and factual questions in the same bin and argue that because my ethical questions are not “scientific proclamations” you don’t have to provide support for your factual probability estimates.
It is odd that a reply that consists entirely of responses to wedrifid quotes is made in response to NancyLebovitz’s comment, which makes an entirely different point. Did you click the wrong ‘reply’ button?
I think the most important question is “Is it ethical to obtain sex by deliberately faking social signals, given what we know of the consequences for both parties of this behaviour?”.
This question seems malformed. “Deliberately faking social signals” is vague- but is typically not something that’s unethical (Is it unethical to exaggerate?). “What we know of the consequences” is unclear- what’s our common knowledge?
A close second would be “Is it ethical to engage in dominance-seeking behaviour in a romantic relationship?”.
Yes.
That would be the strictly utilitarian approach to the question as proposed.
And, of course, you saw the disconnect between your original statement and your new, more correct one.
Right?
If we’re allowed to try to get out of the question as proposed, which is poor form in philosophical discussion and smart behaviour in real life, a good utilitarian would try to find ways to differentiate Alices and Carols, and only have one night stands with Alices.
The reason I asked that question is because you put forth the claim that Bob’s fault was knowingly causing harm to someone. That’s not the real problem, though- people can ethically knowingly cause harm to others in a wide variety of situations, under any vaguely reasonable ethical system. Any system Bob has for trying to determine the difference between Alices and Carols will have some chance of failure, and so it’s necessary to use standard risk management, not shut down.
This question seems malformed. “Deliberately faking social signals” is vague- but is typically not something that’s unethical (Is it unethical to exaggerate?). “What we know of the consequences” is unclear- what’s our common knowledge?
Rhetorical questions are a mechanism that allows us to get out of making declarative statements, and when you find yourself using them that should be an immediate alert signal to yourself that you may be confused or that your premises bear re-examination.
Deceiving others to obtain advantage over them is prima facie unethical in many spheres of life, and I think Kant would say that it is always unethical. Some role-ethicists would argue that when playing roles such as “salesperson”, “advertiser” or “lawyer” that you have a moral license or even obligation to deceive others to obtain advantage but these seem to me like rationalisations rather than coherent arguments from supportable prior principles. Even if you buy that story in the case of lawyers, however, you’d need to make a separate case that romantic relationships are a sphere where deceiving others to obtain advantage is legitimate, as opposed to unethical.
PUA is to a large extent about spoofing social signals, in the attempt to let young, nerdy, white-collar IT workers signal that they have the physical and psychological qualities to lead a prehistoric tribe and bring home meat. The PUA mythology tries to equivocate between spoofing the signals to indicate that you have such qualities and actually having such qualities but I think competent rationalists should be able to keep their eye on the ball too well to fall for that. Consciously and subconsciously women want an outstanding male, not a mediocre one who is spoofing their social signals, and being able to spoof social signals does not make you an outstanding male.
Yes.
Okay. We come from radically different ethical perspectives such that it may be unlikely that we can achieve a meeting of minds. I feel that dominance-seeking in romantic relationships is a profound betrayal of trust in a sphere where your moral obligations to behave well are most compelling.
And, of course, you saw the disconnect between your original statement and your new, more correct one.
Right?
Can you point me to the text that you take to be “my original statement” and the text you take to be “my new, more correct statement”? There may be a disconnect but I’m currently unable to tell what text these constructs are pointing to, so I can’t explicate the specific difficulty.
The reason I asked that question is because you put forth the claim that Bob’s fault was knowingly causing harm to someone. That’s not the real problem, though- people can ethically knowingly cause harm to others in a wide variety of situations, under any vaguely reasonable ethical system.
People can ethically and knowingly burn each other to death in a wide variety of situations under any vaguely reasonable ethical system too, so that statement is effectively meaningless. It’s a truly general argument. (Yes, I exclude from reasonableness any moral system that would stop you burning one serial killer to death to prevent them bringing about some arbitrarily awful consequence if there were no better ways to prevent that outcome).
Any system Bob has for trying to determine the difference between Alices and Carols will have some chance of failure, and so it’s necessary to use standard risk management, not shut down.
We agree completely on that point, but it seems to me that a substantial subset of PUA practitioners and methodologies are aiming to deliberately increase the risk, not manage it. Their goals are to maximise the percentage of Alices who sleep with the PUA and also to maximise the percentage of Carols who sleep with the PUA.
It doesn’t seem unreasonable to go further and say that in large part the whole point of PUA is to bed Carols. Alices are up for a one night stand anyway, so manipulating them to suspend their usual protective strategies and engage in a one night stand with you would be as pointless as peeling a banana twice. It’s only the Carols who are not normally up for a one night stand that you need to manipulate in the first place. Hence that subset of PUA is all about maximising the risk of doing harm, not minimising that risk.
(Note that these ethical concerns are orthogonal to, not in conflict with, my equally serious methodological concerns about whether it’s rational to think PUA performs better than placebo given the available evidence).
It doesn’t seem unreasonable to go further and say that in large part the whole point of PUA is to bed Carols. Alices are up for a one night stand anyway, so manipulating them to suspend their usual protective strategies and engage in a one night stand with you would be as pointless as peeling a banana twice.
That sounds wrong. I dabbled in pickup a little bit and I would gladly accept a 2x boost in my attractiveness to Alices in exchange for total loss of attractiveness to Carols. If you think success with Alices is easy, I’d guess that either you didn’t try a lot, or you’re extremely attractive and don’t know it :-)
That sounds wrong. I dabbled in pickup a little bit and I would gladly accept a 2x boost in my attractiveness to Alices in exchange for total loss of attractiveness to Carols. If you think success with Alices is easy, I’d guess that either you didn’t try a lot, or you’re extremely attractive and don’t know it :-)
I wasn’t trying to say that bedding an Alice is “easy” full stop, just that if they find you attractive enough you won’t have to get them to lower their usual protective strategies to get them into bed the same night. That follows directly from how we have defined an Alice. Being an Alice doesn’t mean that they can’t be both choosy and in high demand though.
Carols are the ones who, regardless of how attractive they find you, don’t want to end up in bed that night and hence are the ones where the PUA has to specifically work to get them to lower their defences if the PUA wants that outcome.
ETA: This post seems to be getting hammered with downvotes, despite the fact that it’s doing nothing but clearing up a specific point of confusion about what was being expressed in the grandparent. I find that confusing. If the goal is to hide a subthread which is seen as unproductive it would seem more logical to hammer the parent.
Deceiving others to obtain advantage over them is prima facie unethical in many spheres of life
Irrelevant. Is all fair in love?
I feel that dominance-seeking in romantic relationships is a profound betrayal of trust in a sphere where your moral obligations to behave well are most compelling.
Are you claiming that all romantic relationships which include the domination of one party by the other betray trust? I think we have differing definitions of dominance or good behavior.
Can you point me
Sure! First statement:
Bob behaved unethically in getting to this position since he knowingly brought about a negative-utility outcome for a moral stakeholder.
Second statement:
One approach would be to multiply the probability you have an Alice by the positive utility an Alice gets out of a one night stand, and multiply the probability that you have a Carol by the negative utility a Carol gets out of a one night stand, and see which figure was larger. That would be the strictly utilitarian approach to the question as proposed.
The first statement is judging a decision solely by its outcome; the second statement is judging a decision by its expected value at time of decision-making. The second methodology is closer to correct than the first.
(In the post with the first statement, it was the conclusion of a hypothetical scenario: Bob knew X about Alice, and had sex with her then didn’t contact her. I wasn’t contesting that win-lose outcomes were inferior to win-win outcomes, but was pointing out that the uncertainties involved are significant for any discussion of the subject. There’s no reason to give others autonomy in an omniscient utilitarian framework: just get their utility function and run the numbers for them. In real life, however, autonomy is a major part of any interactions or decision-making, in large part because we cannot have omniscience.)
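To make that distinction concrete, here is a minimal sketch of the expected-value calculation being discussed; every number in it is invented purely for illustration:

```python
# Minimal sketch of the expected-value comparison quoted above.
# All probabilities and utilities are invented for illustration.

p_alice, u_alice = 0.6, 5.0     # chance she is an Alice; her (positive) utility
p_carol, u_carol = 0.4, -20.0   # chance she is a Carol; her (negative) utility

gain = p_alice * u_alice   # expected utility from the Alice case
loss = p_carol * u_carol   # expected disutility from the Carol case

# The decision keys off the expectation at decision time,
# not off whichever outcome actually happens afterwards.
if gain + loss > 0:
    print("expected utility positive: the strict utilitarian rule says proceed")
else:
    print("expected utility negative: the strict utilitarian rule says abstain")
```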
It doesn’t seem unreasonable to go further and say that in large part the whole point of PUA is to bed Carols.
That does not seem reasonable. Alices may be up for one night stands, but they only have sex with at most one guy a night. The challenge is being that guy.
See, ah, I think I’m against advocating deliberately unethical behavior / defection on LW.
The question is what ethical standard to use. Whether or not exaggeration is unfair in matters of romance has not been established, and I would argue that exaggeration has a far more entrenched position than radical honesty.
That is, I would argue that not exaggerating your desirability as a mate is defection, rather than cooperation, and defection of the lose-lose variety rather than the win-lose variety.
There’s a big difference between asserting something is “irrelevant” versus “incorrect” or “unestablished”.
What was irrelevant is that deceit is unethical in many spheres of life. If deceit is unethical for a scientist* but ethical for a general, then knowing that deceit is unethical for a scientist is irrelevant if discussing generals.
What has not been established is whether romance is more like science or war. I think the former position is far weaker than the latter.
* I had a hard time coming up with any role in which every form of deceit is questionable, and thus I suppose if I were out for points I would question the correctness of the assertion, rather than merely its relevance. Even for scientists, exaggeration- the original behavior under question- is often ethical.
Let me check… nope, it looks like utilitarian ethics holds that ethical actions are those that maximise positive outcomes (however defined) factoring in the consequences for all stakeholders. I can’t see anything in there excluding actions or outcomes related to sex from the usual sorts of calculations. So I’m going to go ahead and say that the answer is no from a utilitarian perspective.
Are you claiming that all romantic relationships which include the domination of one party by the other betray trust? I think we have differing definitions of dominance or good behavior.
If we can exclude those cases where one partner or another honestly and explicitly expresses a free, informed and rational preference to be dominated then mostly yes.
(From a utilitarian perspective we have to at least be philosophically open to the idea that a person who is sufficiently bad at managing their utility might be better off being dominated against their will by a sufficiently altruistic dominator. See The Taming of the Shrew or Overboard. Such cases are atypical).
The first statement is judging a decision solely by its outcome
I have located the source of the confusion. What I actually said in the earlier post was this:
“[I]t might well be that Bob has neither the interest nor the ability to sustain a mutually optimal ongoing relationship with Alice and in that case the utility-maximising path from that point forward and hence the ethical option is for Bob to leave and not contact Alice again. However if Bob knew in advance that this was the case and had reason to believe that Alice’s utility function placed a negative value on participating in a one night stand with a person who was not interested in a long-term relationship then Bob behaved unethically in getting to this position since he knowingly brought about a negative-utility outcome for a moral stakeholder.”
I was not judging a situation solely on its outcome, because it was an if/then statement explicitly predicated on Bob knowing in advance that Alice’s utility function would take a major hit.
I guess you just lost track of the context and thought I’d said something I hadn’t. Are we back on the same page together now?
That does not seem reasonable. Alices may be up for one night stands, but they only have sex with at most one guy a night. The challenge is being that guy.
Possibly the recency effect of having skimmed one of Roissy’s blog posts, where he specifically singled out for ridicule a female blogger who was expressing regret and confusion after a one night stand, colours my recollection. But I am sure I have read PUA materials in the past that had specific sections dedicated to the problem of overcoming the resistance of women who had a preference not to engage in sex on the first/second/nth date, a preference that is certainly not inherently irrational and which seems intuitively likely to correlate with a high probability of regretting a one night stand if it does not turn into an ongoing, happy relationship.
Speaking more broadly a stereo salesperson maximises their sales by selling a stereo to every customer who walks in wanting to buy a stereo, and selling a stereo to as many customers as possible who walk in not wanting to buy a stereo. I’m sure they would prefer all their customers to be the first kind but you maximise your income by getting the most out of both. Game-theory-rational PUAs who don’t have Alices on tap, or a reliable way of filtering out Carols, or who just plain find some Carols attractive and want to sleep with them, would out of either necessity or preference have an interest in maximising their per-Carol chances of bedding a Carol.
It should be noted that, from the perspective of a utilitarian agent in certain environments, it may be the utilitarian action to self-modify into a non-utilitarian agent. That is, an unmodified utilitarian agent participating in certain interactions with non-utilitarian agents may create greater utility by self-modifying into a non-utilitarian agent.
If we can exclude those cases where one partner or another honestly and explicitly expresses a free, informed and rational preference to be dominated then mostly yes.
How prevalent do you think those cases are?
I guess you just lost track of the context and thought I’d said something I hadn’t. Are we back on the same page together now?
Did what you wrote agree with the parenthetical paragraph I wrote explaining my interpretation? If so, we’re on the same page.
a high probability of regretting a one night stand if it does not turn into an ongoing, happy relationship.
Let’s go back to a question I asked a while back that wasn’t answered that is now relevant again, and explore it a little more deeply. What is a utility function? It rank orders actions*. Why do you think stating regret is more indicative of utility than actions taken? If, in the morning, someone claims they prefer X but at night they do ~X, then it seems that it is easier to discount their words than their actions. (An agent who prefers vice at night and virtue during the day is, rather than being inconsistent, trying to get the best of both worlds.)
(As well, Augustine’s prayer is relevant here: “Grant me chastity and continence, but not yet.”)
*Typically, utility functions are computed by assigning values to consequences, then figuring out the expected value of actions, but in order to make practical measurements it has to be considered with regard to actions.
I’m sure they would prefer all their customers to be the first kind but you maximise your income by getting the most out of both.
Right. But it’s not clear to me that it’s unethical for a salesman to sell to reluctant buyers. If you consider a third woman- Diana- who does not agree to have sex on the first date, then both of us would agree that having sex with Diana on the first date would be unethical, just like robbing someone and leaving them a stereo in exchange would be unethical. But pursuing Diana would not be, especially if it’s hard to tell the difference between her and Carol (or Alice) at first glance. Both Carols and Alices have an incentive to seem like Dianas while dating (also car-buying, though not stereo-buying), and so this isn’t an easy problem.
It seems odd to me to suggest a utilitarian should act as though Carols are Dianas.
Interesting question! However I think that we’d need to agree on a definition of “dominated” before any estimate would be meaningful. I’m happy to supply my estimate of prevalence for any definition that suits you.
For the definition I had in mind, which might be something like “in a relationship where one partner routinely makes the majority of important decisions on the basis of superior status” I would be surprised if it was below 0.1% or above 5%.
Did what you wrote agree with the parenthetical paragraph I wrote explaining my interpretation? If so, we’re on the same page.
Well no, I wouldn’t agree with that either, but that’s a separate issue. I don’t think it can be philosophically consistent to apply techniques which purportedly manipulate people by spoofing social signals that act on an unconscious level, distorting their sense of time and so forth and then excuse this on the basis that the agent you are manipulating has autonomy. If they had autonomy in the sense that excused you for attempts at manipulation you could not manipulate them, and if you can manipulate them then they lack the kind of strong autonomy that would give you a moral blank cheque.
Let’s go back to a question I asked a while back that wasn’t answered that is now relevant again, and explore it a little more deeply. What is a utility function? It rank orders actions*. Why do you think stating regret is more indicative of utility than actions taken?
I think it’s more indicative for a few reasons. Firstly, conclusions made sober, rested and with time to reflect are more reliable than conclusions made drunk, late at night, horny and in the heat of the moment, and both parties to any such decisions know this in advance. Secondly, wishful thinking (which you could also call self-delusion) plays a role, and before being—to borrow a phrase from Roissy—“pumped and dumped” by a PUA a woman might be a victim of cognitive bias that makes her act as if a long-term relationship with a supportive partner is a possibility, whereas with hindsight this bias is less likely to distort her calculations. Thirdly, the PUA literature that I have read explicitly advocates playing on these factors by not giving the target time to pause and reflect, and by deflecting questions about the future direction of the relationship rather than answering those questions honestly.
I conclude from this that part of PUA strategy is to attempt to manipulate women into making decisions which the PUA knows the women are less likely to make when they are behaving rationally. So not only do I think that stated regret is more indicative of someone’s reflective preferences than their actions the night before in general, but I also think that PUAs know this too.
As always there will be individual exceptions to the general rule.
But it’s not clear to me that it’s unethical for a salesman to sell to reluctant buyers.
Considering only the two parties directly involved, the salesperson and the buyer, it seems fairly clear to me that on average reluctant buyers are more likely to regret the purchase, and that transactions in which one party regrets the transaction are win/lose and not win/win.
Being a highly effective salesperson is not seen as unethical conduct in our current society, and that tends to very strongly influence people’s moral judgements, but I think from a utilitarian standpoint salesmanship that goes beyond providing information is obviously ethically questionable once you get past the default socialisation we share that salespersons are a normal part of life.
It seems odd to me to suggest a utilitarian should act as though Carols are Dianas.
I’m not completely clear on the Carol/Diana distinction being made here. Could you give me the definitions of these two characters as you were thinking of those definitions at the time you posted the parent?
The PUA mythology tries to equivocate between spoofing the signals to indicate that you have such qualities and actually having such qualities but I think competent rationalists should be able to keep their eye on the ball too well to fall for that.
This. But you forgot “using canine social structure as if it were identical to human social structure.”
My complaint with the whole “alpha” and “beta” terminology is that it doesn’t seem to be derived from canine social structure. The omega rank seems more appropriate to what PUAs call “beta.”
Reading more, it doesn’t seem like any of these terms are accurate even to canine society. They were based on observing unrelated gray wolves kept together in captivity, where their social structures bore little resemblance to their normal groupings in the wild (a breeding pair and their cubs). More accurate terms would be “parents” and “offspring”, which match nicely to human families but aren’t that useful for picking up women in bars.
What about just “until someone proves scientifically”?
Even that weaker position still seems incompatible with actually being a utility-maximising agent, since there is prima facie evidence that inducing women to enter into a one-night-stand against their better judgment leads to subsequent distress on the part of the women reasonably often.
A disciple of Bayes and Bentham doesn’t go around causing harm up until someone else shows that it’s scientifically proven that they are causing harm. They do whatever maximises expected utility for all stakeholders based on the best evidence available at the time.
Note that this judgment holds regardless of the relative effectiveness of PUA techniques compared to placebo. Even if PUA is completely useless, which would be surprising given placebo effects alone, it would still be unethical to seek out social transactions that predictably lead to harm for a stakeholder without greater counterbalancing benefits being obtained somehow.
Even that weaker position still seems incompatible with actually being a utility-maximising agent, since there is prima facie evidence that inducing women to enter into a one-night-stand against their better judgment leads to subsequent distress on the part of the women reasonably often.
That isn’t a utility maximising agent regardless of whether it demands your ‘proof beyond any doubt’ or just the ‘until someone proves scientifically’. Utility maximising agents shut up and multiply. They use the subjectively objective probabilities and multiply them by the utility of each case.
The utility maximising agent you are talking about is one that you have declared to be a ‘good utilitarian’. It’s maximising everybody’s utility equally. Which also happens to mean that if Bob gains more utility from a one night stand than a Carol loses through self-flagellation then Bob is morally obliged to seduce her. This is something which I assume you would consider reprehensible. (This is one of the reasons I’m not a good utilitarian. It would disgust me.)
Neither “utility maximiser” nor “good utilitarian” is an applause light which matches this proclamation.
(Edited out the last paragraph—it was a claim that was too strong.)
That isn’t a utility maximising agent regardless of whether it demands your ‘proof beyond any doubt’ or just the ‘until someone proves scientifically’. Utility maximising agents shut up and multiply. They use the subjectively objective probabilities and multiply them by the utility of each case.
I took it for granted that the disutility experienced by the hypothetical distressed woman is great enough that a utility-maximiser would seek to have one-night-stands only with women who actually enjoyed them.
The utility maximising agent you are talking about is one that you have declared to be a ‘good utilitarian’. It’s maximising everybody’s utility equally. Which also happens to mean that if Bob gains more utility from a one night stand than a Carol loses through self-flagellation then Bob is morally obliged to seduce her. This is something which I assume you would consider reprehensible. (This is one of the reasons I’m not a good utilitarian. It would disgust me.)
Given that Bob has the option of creating greater average utility by asking Alices home instead I don’t see this as a problem. What you are saying is true only in a universe where picking up Carol and engaging in a win/lose, marginally-positive-sum interaction with her is the single best thing Bob can do to maximise utility in the universe, and that’s a pretty strange universe.
I also think that PUAs are going to have to justify their actions in utilitarian terms if they are going to do it at all, since I really struggle to see how they could find a deontological or virtue-ethical justification for deceiving people and playing on their cognitive biases to obtain sex without the partner’s fully informed consent. So if the utilitarian justification falls over I think all justifications fall over, although I’m open to alternative arguments on that point.
I don’t think the Weak Gor Hypothesis holds and I don’t think that you maximise a woman’s utility function by treating her the way the misogynistic schools of PUA advocate, but if you did then I would buy PUA as a utility-maximising strategy. I think it’s about the only way I can see any coherent argument being made that PUA is ethical, excluding the warm-and-fuzzy PUA schools mentioned earlier which I already acknowledged as True Scotsmen.
The second sentence is correct… and conclusively refutes the first.
I cannot reconstruct how you are parsing the first sentence so that it contradicts the second, and I’ve just tried very hard.
Given that Bob has the option of creating greater average utility by asking Alices home instead I don’t see this as a problem.
This seems to be a straw man. I don’t recall ever hearing someone advocating having sex with people that would experience buyers remorse over those that would remember the experience positively. That would be a rather absurd position.
What you are saying is true only in a universe where picking up Carol and engaging in a win/lose, marginally-positive-sum interaction with her is the single best thing Bob can do to maximise utility in the universe, and that’s a pretty strange universe.
Yes, Bob should probably be spending all of his time earning money and gaining power that can be directed to mitigating existential risk. This objection seems to be a distraction from the point. The argument you made is neither utilitarian nor based on maximising utility. That’s ok, moral assertions don’t need to be reframed as utilitarian or utility-maximising. They can be just fine as they are.
This seems to be a straw man. I don’t recall ever hearing someone advocating having sex with people that would experience buyers remorse over those that would remember the experience positively. That would be a rather absurd position.
If so forgive me—I have not seen a PUA in the wild ever mentioning the issue of differentiating targets on the basis of whether or not being picked up would be psychologically healthy for them, so my provisional belief is that they attached no utility or disutility to the matter of whether the pick-up target would remember the experience positively. Am I wrong on that point?
Yes, Bob should probably be spending all of his time earning money and gaining power that can be directed to mitigating existential risk. This objection seems to be a distraction from the point.
This is a general argument which, if it worked, would serve to excuse all sorts of suboptimal behaviour. Just because someone isn’t directing all their efforts at existential risk mitigation or relieving the effects of Third World poverty doesn’t mean that they can’t be judged on the basis of whether they are treating other people’s emotional health recklessly.
The argument you made is neither utilitarian nor based on maximising utility. That’s ok, deontological moral assertions don’t need to be reframed as utilitarian or utility-maximising. They can be just fine as they are.
I don’t see how you get to that reading of what I wrote.
I see this as a perfectly valid utilitarian argument-form: There is prima facie evidence X causes significant harm, hence continuing to do X right up until there is scientifically validated evidence that X causes significant harm is inconsistent with utility maximisation.
There’s a suppressed premise in there, that suppressed premise being “there are easily-available alternatives to X”, but since in the specific case under discussion there are easily-available alternatives to picking women up using PUA techniques I didn’t think it strictly necessary to make that premise explicit.
There are separate, potential deontological objections to PUA behaviour, some of which I have already stated, but I don’t see how you got to the conclusion that this particular argument was deontological in nature.
If so forgive me—I have not seen a PUA in the wild ever mentioning the issue of differentiating targets on the basis of whether or not being picked up would be psychologically healthy for them, so my provisional belief is that they attached no utility or disutility to the matter of whether the pick-up target would remember the experience positively. Am I wrong on that point?
The goalposts have moved again. But my answer would be yes anyway.
Strictly speaking you moved them first, since I never claimed that anyone was “advocating having sex with people that would experience buyers remorse over those that would remember the experience positively.” (Emphasis on over). As opposed to advocating having sex with people disregarding the issue of whether that person would experience remorse, which is what I’d seen PUA advocates saying. I just put the goalposts back where they were originally without making an undue fuss about it, since goalposts wander due to imprecisions in communication without any mendacity required.
I think this conversation is suffering, not for the first time, from the fuzziness of the PUA term. It covers AMF and Soporno (who has a name which is unfortunate but memorable, if it is his real name) who do not appear to be advocating exploiting others for one’s personal utility, and it also covers people like Roissy who revel in doing so.
So I think I phrased that last post poorly. I should have made the declarative statement “many but not all of the PUA writers I have viewed encourage reckless or actively malevolent behaviour with regard to the emotional wellbeing of potential sexual partners, and I think those people are bad utilitarians (and also bad people by almost any deontological or virtue-ethical standard). People who are members of the PUA set who do not do this are not the intended target of this particular criticism”.
I was using “dark arts” here in the more narrow sense of “techniques designed to subvert the rationality of others by exploiting cognitive biases.” I’m not speaking of being an effective flirt, or wearing flattering makeup and clothing. The sort of things I had in mind are, to take a mild example, bringing a slightly less attractive “wingman” to make oneself look more attractive than one would alone, or to take a serious example, whisking a woman from bar to bar to create the illusion of longer-term acquaintance. I see this as wrong for essentially the same reason that spiking someone’s drink is wrong if they wouldn’t sleep with you sober.
To oversimplify somewhat, I tend to see society as divided into three groups: those who don’t generally aspire to rationality (the majority of the population), those who want to share the bounty of rationality to help others overcome their biases (Lesswrong), and those who would instead use their knowledge of rationality to exploit people in the first group. I acknowledge that I am more confused by the current negative karma of my grandparent than the karma of any other comment I have ever made on this site.
I acknowledge that I am more confused by the current negative karma of my grandparent than the karma of any other comment I have ever made on this site.
My observation is that most of the posts I have made that criticised PUA or PUA-associated beliefs have been voted down very quickly, but then they have bounced back up over the next day or so such that the overall karma delta is highly positive. One hypothesis that explains it is that there are a certain number of people reviewing this thread at short intervals who are downvoting posts critical of PUA, but that they are not the plurality of posters reviewing this thread.
ETA: Update on this. Posts critical of PUA ideology that are concealed from the main thread either by being voted to −3 or below, or by being a descendant of such, get voted into the ground, and as far as I can see this effect is largely insensitive to the intellectual value or lack thereof of the post. I hypothesise that the general LW readership doesn’t bother drilling down to see what’s going on in those subthreads and hence their opinions are not reflected in the vote count, while PUA-enthusiasts who vote along ideological lines do bother to drill down.
Posts critical of PUA that are well-written, logical, pertinent and visible to the general readership are voted up, overall.
One explanation is that the first to read your messages are those you responded to, who are those most likely to note any poorness of fit between what they said and what they are alleged or implied to have said or believed.
I acknowledge that I am more confused by the current negative karma of my grandparent than the karma of any other comment I have ever made on this site.
I’m shocked that it didn’t stay below 0. Forget any point it was trying to make about dating—it sends totally the wrong message about ‘lesswrong’ attitudes towards ‘dark arts’!
So, this gets at something that frequently confuses me when people start talking about personal utilities.
It seems that if I can reliably elicit the strength of my preferences for X and Y, and reliably predict how a given action will modify the X and Y in my environment, then I can reliably determine whether to perform that action, all else being equal. That seems just as true for X = “my happiness” and Y = “my partner’s happiness” as it is for X = “hot fudge” and Y = “peppermint”.
But you seem to be suggesting that that isn’t true… that in the first case, even if I know the strengths of my preferences for X and Y and how various possible actions lead to X and Y, there’s still another step (“adding the utilities”) that I have to perform before I can decide what actions to perform. Do I understand you right?
If so, can you say more about what exactly that step entails? That is… what is it you don’t know how to do here, and why do you want to do it?
You’re missing four letters. Call the strength of your preferences for X and Y A and B, and call your partner’s preferences for X and Y C and D. (This assumes that you and your partner both agree on your happiness measurements.)
I agree there’s a choice among available actions which maximizes AX+BY, and that there’s another choice that maximizes CX+DY. What I think is questionable is ascribing meaning to (A+C)X+(B+D)Y.
Notice there are an infinite number of A,B pairs that output the same action, and an infinite number of C,D pairs that output the same action, but when you put them together your choice of A,B and C,D pairs matters. What scaling to choose is also a point of contention, since it can alter actions.
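To make the scaling problem concrete, here is a minimal sketch; the candidate actions and all the preference weights are invented for illustration:

```python
# Sketch of the aggregation problem described above. (A, B) are my weights
# on outcomes X and Y; (C, D) are my partner's. All numbers are invented.

def best_action(wx, wy, actions):
    """Pick the action maximising wx * x + wy * y."""
    return max(actions, key=lambda a: wx * a[0] + wy * a[1])

# Each action yields some amount of X ("my happiness") and Y ("partner's").
actions = [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0)]

A, B = 2.0, 1.0   # my preference strengths (hypothetical)
C, D = 1.0, 4.0   # my partner's preference strengths (hypothetical)

# Rescaling one person's weights never changes that person's own choice...
assert best_action(A, B, actions) == best_action(10 * A, 10 * B, actions)

# ...but it does change the choice recommended by the summed weights,
# which is exactly the point of contention:
print(best_action(A + C, B + D, actions))            # -> (1.0, 3.0)
print(best_action(10 * A + C, 10 * B + D, actions))  # -> (3.0, 1.0)
```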
So, we’re assuming here that there’s no problem comparing A and B, which means these valuations are normalized relative to some individual scale. The problem, as you say, is with the scaling factor between individuals. So it seems I end up with something like (AX + BY + FCX + FDY), where F is the value of my partner’s preferences relative to mine. Yes?
And as you say, there’s an infinite number of Fs and my choice of action depends on which F I pick.
And we’re rejecting the idea that F is simply the strength of my preference for my partner’s satisfaction. If that were the case, there’d be no problem calculating a result… though of course no guarantee that my partner and I would calculate the same result. Yes?
If so, I agree that that coming up with a correct value for F sure does seem like an intractable, and quite likely incoherent, problem.
Going back to the original statement… “an ethical rationalist’s goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties” seems to be saying F should approximate 1. Which is arbitrary, admittedly.
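For concreteness, here is the same invented example with F made explicit; the action the combined function recommends flips as F crosses a threshold:

```python
# Sweep F, the weight given to my partner's preferences relative to mine,
# and watch the recommended action change. All numbers are invented.

def best_action(wx, wy, actions):
    return max(actions, key=lambda a: wx * a[0] + wy * a[1])

actions = [(3.0, 1.0), (1.0, 3.0)]
A, B = 2.0, 1.0   # my weights on X and Y (hypothetical)
C, D = 1.0, 4.0   # partner's weights on X and Y (hypothetical)

for F in (0.0, 0.25, 0.5, 1.0):
    # Combined objective: (A + F*C) * X + (B + F*D) * Y
    print(F, best_action(A + F * C, B + F * D, actions))
# With these weights the choice flips from (3.0, 1.0) to (1.0, 3.0)
# once F exceeds 1/3, so the "right" action depends entirely on F.
```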
And we’re rejecting the idea that F is simply the strength of my preference for my partner’s satisfaction. If that were the case, there’d be no problem calculating a result… though of course no guarantee that my partner and I would calculate the same result. Yes?
Yes. If you and your partner agree- that is, A/B=C/D- then there’s no trouble. If you disagree, though, there’s no objectively correct way to determine the correct action.
Going back to the original statement… “an ethical rationalist’s goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties” seems to be saying F should approximate 1. Which is arbitrary, admittedly.
Possibly, though many cases with F=1 seem like things PhilosophyTutor would find unethical. It seems more meaningful to look at A and B.
incompatible with maximising global utility for all sentient stakeholders
You make a very good point here. But you see, women don’t find men who try to be nice to them attractive. They call it “clingy”, “creepy” behavior. Human male-female interaction is actually a signalling game, where the man being nice simply sends a signal of weakness. Women are genetically programmed to only let alpha sperm in, and the alpha is not a character who goes around being nice to strangers.
Think about the effect on her inclusive genetic fitness if she bears the child of a nice-guy who tries to maximize other people’s utility before his own, versus having the child of an alpha who puts himself first and likes to impregnate lots of women.
And let me disclaim: I don’t like it that the world is this way, I don’t morally support the programming that evolution has given to women. But I accept it and work within its bounds.
Perhaps one day we will reprogram ourselves? Maybe transhuman love will be of a different kind. But in human love, the heart is not heart shaped, it is shaped like a clenched fist.
You make a very good point here. But you see, women don’t find men who try to be nice to them attractive. They call it “clingy”, “creepy” behavior. Human male-female interaction is actually a signalling game, where the man being nice simply sends a signal of weakness. Women are genetically programmed to only let alpha sperm in, and the alpha is not a character who goes around being nice to strangers.
Oversimplified to the extent that it is basically not true.
Your comment would be more useful if you said in which ways it is oversimplified, and which additions and caveats you think are most important to restore it to being true.
But you see, women don’t find men who try to be nice to them attractive...Women are genetically programmed to only let alpha sperm in
Oversimplified to the extent that it is basically not true.
And yet I would bet that it is still closer to true than I approve of. In particular, closer to true than the mental model used by the naive “nice guy”/”beta”.
But you see, women don’t find men who try to be nice to them attractive. They call it “clingy”, “creepy” behavior. Human male-female interaction is actually a signalling game, where the man being nice simply sends a signal of weakness. Women are genetically programmed to only let alpha sperm in, and the alpha is not a character who goes around being nice to strangers.
Well, no. I’ve received quite a bit of help and favors from men who didn’t seem creepy or clingy, and have found a few men creepy who weren’t being helpful. I don’t think my experience is unusual.
One of the big reasons that LW is unable to be rational about pickup is that we have a small group of vocal and highly non-average women here who take any comment which is supposed to be a useful observation about the mental behavior of the median young attractive woman to be about THEM IN PARTICULAR.
You, NancyLebovitz, are not the kind of woman that PU is aimed at. You do not go to night clubs regularly. You do not read gossip magazines and follow celebrity lifestyles, you do not obsess about makeup. You post on weird rationality websites. You are not the median young, attractive woman. And that goes for Alicorn too.
Even amongst the set of IQ + 1 sigma women you are almost certainly highly nontypical.
Comments about female psychology are not directed at you, they are not about you, your personal experience of YOUR OWN reactions are not meant to be well described by pick-up theory.
I do not mean this in a negative way. I mean you no offence; in fact you should take it as a compliment in the context of intelligence and rationality. I am merely making an epistemological point.
The next time I make a comment about PU, I will carefully disclaim that PU is primarily designed to analyse the average psychology of just one particular kind of woman: namely relatively young, culturally-western, hetero- or bi- sexual and relatively attractive.
The next time I make a comment about PU, I will carefully disclaim that PU is designed to analyse the average psychology of just one particular kind of woman
Especially important since major and well-respected proponents of PUA around here do not assume this premise, and in fact it is generally assumed that there are different areas of PUA that will help people of particular sex/gender/sexual orientation accomplish varying sorts of goals.
PU may well apply (to a certain extent) to almost all pre-menopausal hetero/bi women, but the case is much more clear cut for women who are also relatively young, culturally-western, hetero- or bi- sexual and relatively attractive, because that’s the subgroup of women where extensive field-testing of the concepts has been done.
PUA is a large field with many different subfields and schools of thought. There are those who aim for one-night-stands at bars, and those who aim to find the particular soulmate they’ve been searching for. There is PUA writing from the perspective of homosexuals, both men and women, teens, older folks, and all sorts of different perspectives.
If you think there is just one set of techniques in the field and they are only applicable to a small subset of humanity, then you’re not very familiar with PUA and should stop making blanket assertions about the field.
Pickup artist describes a man who considers himself to be skilled, or who tries to be skilled at meeting, attracting, and seducing women
So if we are indeed referring to the same thing by the phrase, then I think that I am correct in saying that
“women who are relatively young, culturally-western, hetero- or bi- sexual and relatively attractive, are the subgroup of women where extensive field-testing of the concepts has been done.”
There have been small offshoots into “girl game” and some guys focus more on older women, and I am explicitly not denying that there are results and facts there. But the core of the concept, the VAST majority of the field testing and online material is about quickly seducing “women who are relatively young, culturally-western, hetero- or bi- sexual and relatively attractive”
There have been small offshoots into “girl game” and some guys focus more on older women, and I am explicitly not denying that there are results and facts there.
It certainly looks like you are:
PU is designed to analyse the average psychology of just one particular kind of woman
Maybe you forgot a ‘not’ in there somewhere?
But the core of the concept, the VAST majority of the field testing and online material is about quickly seducing “women who are relatively young, culturally-western, hetero- or bi- sexual and relatively attractive”
It sounds like you’re making a strawman out of your own arguments. You made blanket statements about how this is a bad and misleading article because it ignores the truth about how women respond to men. When people pointed out that this is not true of particular women, you amended it to refer just to the vast majority of women, and now you’re amending it further to only apply to a particular goal regarding a minority of women.
So the takeaway from your arguments seems to be that you should not follow the advice given in the above post, in the case that you have a very specific goal with respect to a relatively small group of women.
If that is what you meant to say, then yes you needed to be specific about what special circumstance you thought the post doesn’t apply to. It is not particularly surprising that the advice given in the post only works for most people with most goals.
you should not follow the advice given in the above post, in the case that you have a very specific goal with respect to a relatively small group of women. … It is not particularly surprising that the advice given in the post only works for most people with most goals.
This goes too far. The vast majority of men are heterosexual, gender-normal, and the vast majority of those are most attracted to women who are not:
post-menopause/50+
ugly
lesbian (i.e. not attracted to men)
Pickup is popular because it tells men how to attract precisely those women who they desire most.
This goes too far. The vast majority of men are heterosexual, gender-normal, and the vast majority of those are most attracted to women who are not:
post-menopause/50+
ugly
lesbian (i.e. not attracted to men)
You left out:
Non-Western
Which was apparently important to your case above.
It’s an interesting claim, though I’m not buying it, and it is anyway irrelevant to my earlier claim.
Most people are not heterosexual, gender-normal men who are most attracted to women with none of those qualities. And most relationship goals are not seducing such people. And most people do not have that goal.
Probably ~40% of people are heterosexual, gender-normal men who are most attracted to women who are young and straight.
It seems like you are using weasel words to describe the goal of ~40% of the people on the planet as a “very specific goal”.
Let me put it another way. On a website with a strong majority heterosexual male readership, the article fails to mention what I think is the definitive body of knowledge to improve the dating lives of heterosexual men. You then criticize me because, of all people, just under half are heterosexual males, almost all of whom (surprise) like young, attractive, straight women; you use weasel words saying that my point is for a “very specific goal”, when in fact probably ~60-80% of people reading this site have the goal of attracting/keeping a young, attractive, hetero/bi woman.
TBH, I feel that you, and LW in general, are trying to use pedantry/weasel words/motivated cognition to close your eyes to the truth about attraction between men and women. Perhaps there is some subset of people here who want to know, but I feel that if I mention the subject I will end up arguing against some form of denial/motivated cognition, rather than discussing the subject in the spirit of a collaborative enquiry to get at the truth.
It seems like you are using weasel words to describe the goal of ~40% of the people on the planet as a “very specific goal”.
Theists comprise a much larger percentage of the global population than 40%, but that doesn’t mean we’d consider a goal like “being closer to God” to be particularly important or worthy of discussion here.
Let me put it another way. [ranting about definitiveness of PUA deleted]
Just FYI, some of us hate pro-PUA rants as much as we hate anti-PUA rants. Actually, I hate the pro-PUA rants more, because they do more harm than good.
Telling people they’re closing their eyes to the truth is not a rational method of persuasion in any environment, and certainly not here.
If you learned half as much from PUA as you think you have, you should have learned that if you want to catch fish, then don’t think like a fisherman, think like a fish.
In this discussion, you are not thinking like a fish.
Also note that I am just as pedantic when I’m talking about a subject that I like, and I’m sure people would back me up on this. Maybe I should step up the pedantry in general to make that clearer, to avoid this sort of accusation.
And nowhere here did I say something like “PUA should not be discussed” or “PUA is incorrect about its subject matter” or even “The particular sub-branch of PUA you have in mind is incorrect or useless”. Indeed, I think rational inquiry into relationships is a noble goal and often cite PUA as a rare area of discourse where beliefs are tested against the world in rapid iteration.
Rather, I was annoyed that you were making patently false claims and then when people called you on it you acted like they were doing something wrong. If you want to assert falsehoods, please do it elsewhere.
Ceteris paribus, I would regard pedantry as evidence of a bias in favor of truth-seeking, not in the opposite direction. I’m surprised you think otherwise.
when in fact probably ~60-80% of people reading this site have the goal of attracting/keeping a young, attractive, hetero/bi woman.
I find this hard to believe. As of the last survey only 33% are “single and looking”. If we combine that with the 24.2% that were “in a relationship”, assume they were all polyamorous, and that all of both groups were men, we still do not approach the lower bound of your estimate. It fails a basic sanity check.
I would assert that most people here would benefit more from attracting vastly atypical partners, and we are mostly outliers in more ways than one, so your generalizations are even less helpful here than in the world at large. But that belief is irrelevant to my above statements.
I find this hard to believe. As of the last survey only 33% are “single and looking”. If we combine that with the 24.2% that were “in a relationship”, assume they were all polyamorous, and that all of both groups were men, we still do not approach the lower bound of your estimate. It fails a basic sanity check.
You excluded ‘married’ from the check, which is the only thing that allows your “sanity failure” assertion to stand. This is either an error or disingenuous. ‘Married’ applies for the same reason ‘in a relationship’ applies. 24% are single but not looking, not the 57% that you suggest. The “all polyamorous” assumption is not needed given that keeping was included.
Agreed. I was not considering “attracting” and “keeping” as separate states; rather, I read it as “attracting or (attracting and keeping)” which clearly was not warranted. So if we assume everyone not “single but not looking” was male and interested in the sorts of things mentioned above, that’s 76%, which while still a stretch falls well within the range above.
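For reference, here is the corrected arithmetic, using only the percentages quoted in this exchange:

```python
# Sanity check on the survey figures quoted above. The percentages are the
# ones stated in this thread; nothing else is assumed about the survey.

single_and_looking = 33.0   # % (as quoted)
in_a_relationship = 24.2    # % (as quoted)
single_not_looking = 24.0   # % (as quoted)

# The original check capped the estimate at looking + in-relationship:
print(single_and_looking + in_a_relationship)   # 57.2, below the 60% bound

# Counting everyone who is not "single and not looking" (which includes
# the married respondents) gives the corrected upper bound:
print(100.0 - single_not_looking)               # 76.0, inside the 60-80% range
```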
PU may well apply (to a certain extent) to almost all pre-menopausal hetero/bi women, but the case is much more clear cut for women who are also relatively young, culturally-western, hetero- or bi- sexual and relatively attractive, because that’s the subgroup of women where extensive field-testing of the concepts has been done.
One must distinguish carefully between the set of women to which I (in a Bayesian sense) believe PU would apply, versus the set of women for which I am stably highly confident that it applies because of overwhelming field-testing.
Indeed, saying that “PU may well apply (to a certain extent) to almost all pre-menopausal hetero/bi women” does not logically entail that I think it doesn’t apply to post-menopausal women or lesbians etc. Personally I have no clue about lesbian attraction, and very little about how to attract post-menopausal women, so I make no claim in particular.
As I’ve pretty much argued before, people could escape the majority of needless wasteful friction if they were just willing to use words like “average” and/or “median” when that’s indeed what they mean instead of “all”.
You could have said “average women” from the start. Am not talking about “careful” disclaimers here—I’m just talking about the single word “average”, which by itself would have vastly improved your comment. And yet you didn’t choose to have that word. Why? Was one word so costly to you?
Or was rudeness and stereotyping intentionally being signalled here, in an “Alphas don’t bother with politeness, that’s submissive behaviour” sort-of-thing?
“the average person could escape the majority of needless wasteful tension if they were just willing to use words … ”
since I am sure there is some person out there who overuses “average” when they really mean “all”, yes? And yet you didn’t choose to have that word. Why? Was one word so costly to you?
Surely you mean “the average person could escape the majority of needless wasteful tension if they were just willing to use words … ”
No, I’m sure I wasn’t talking about average people, I was talking about people collectively. If I added the word “all” it would be closer to my meaning than if I had added the word “average”.
But I guess I was right in my estimation about the intentionality of the signals you were giving, as you’re now reinforcing them.
Assuming for the sake of argument that women are sentient, but also that they have absolutely no free will when it comes to sexual relationships and that they can be piloted like a remote-controlled drone by a man who has cracked the human sexual signalling language (a hypothesis only slightly more extreme than the PUA hypothesis), that would still leave us with the question of how to maximise the utility of these strange, mindless creatures given that they are sentient and their utility counts as much as any other sentient being’s.
PUA might be compatible with this if you assume that just by chance the real utility function of the human female just happens to be maximised by the behaviour which maximises the utility of the PUA, which is to say that you maximise the utility of all human females by having a one night stand with them if you find them physically attractive but not inclined to be subservient, and a longer-term relationship with them under some circumstances if you want regular sex and you can manage the relationship so that you are dominant. (We could call this the Weak Gor Hypothesis).
However this has not been demonstrated, and it might turn out that in some cases women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and “romantic” gestures and so forth. If that was the case then ethically some weight would have to be given to these sources of utility, and it would be ethically questionable to talk down such behaviour as “beta” since it would have turned out that the alpha/beta distinction did not match up with a real distinction between utility-maximising and non-utility-maximising behaviour in all cases.
LOL. Given that IRL Goreans (male and female) exist, someone who wants that sort of thing needn’t try converting anyone from the general dating pool.
it might turn out that in some cases [people] are happier if they [receive more of what their “far” brains like]
I’ve paraphrased your comment to make it gender neutral and preference-neutral.
The thing is, what maximizes our happiness isn’t always what’s predictably enjoyable. (See prospect theory, fun theory, liking vs. wanting, variable reinforcement...) Excitement and variety are very often the spice of life.
Frankly, having a partner who does nothing but worship you is both annoying and unattractive… even though it might sound like a good idea on paper. (For one thing, you can feel pressured to reciprocate.)
I’m reminded of Eliezer’s “fun theory” posts about the evolution of concepts of heaven: that if you’re a poor farmer then no work to do and streets paved with gold sounds like heaven to you, but once you actually got there, it’d be bloody boring.
In the same way, a lot of romantic ideals for relationships sound like heaven only when you haven’t actually gotten there yet.
I think we need to be careful of false dichotomies and straw men, since so much of PUA doctrine/knowledge/dogma (pick your preferred term) is communicated in the form of dichotomies, which I suspect are false to at least a significant extent.
The possibility I advanced was that “women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and “romantic” gestures and so forth”. This does not seem to me to be the same thing as saying that women are happier with “a partner who does nothing but worship [them]”, although I can see how if you were trained to see relationships in terms of the PUA alpha/beta dichotomy it might seem to be the same thing to you. Most obviously treating someone as an equal partner is inconsistent with doing nothing but worshipping that person.
You also are asserting without evidence that the kind of relationship I just described would not be fun if you were actually in one, which seems to me to contain implicit status attack, since it assumes that I have never been in such a relationship and hence that I am speaking from a position of epistemological disadvantage compared to yourself.
Would I be far wrong if I guessed that your data set for this implicit assumption is based on interacting with a significant number of PUAs? If so the underlying problem may well simply be self-selection bias. The kind of people who have long-term relationships based on honesty, equality and support are probably unlikely to self-select for participation in PUA forums and hence their experiences and viewpoints will be under-represented in those circles compared to their prevalence in the general population.
The possibility I advanced was that “women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and “romantic” gestures and so forth”. This does not seem to me to be the same thing as saying that women are happier with “a partner who does nothing but worship [them]”
Actually, it’s my observation that men who consciously make an effort to do what you said, actually end up doing what I said, from the point of view of the people they interact with.
That is, they are poorly calibrated and overshoot the mark. (Been there, did that.)
You also are asserting without evidence that the kind of relationship I just described would not be fun if you were actually in one, which seems to me to contain implicit status attack, since it assumes that I have never been in such a relationship and hence that I am speaking from a position of epistemological disadvantage compared to yourself.
Hm. Sorry—the important piece left out of my explicit reasoning is above: i.e., that people who think they are “communicating honestly”, etc., usually end up doing something completely different; it’s the absence of that failure mode which I implicitly assume you’ve had… and which is AFAICT a less common experience for men (with no implied connotations about status), if for no other reason than that women are on average better socially calibrated than men.
Would I be far wrong if I guessed that your data set for this implicit assumption is based on interacting with a significant number of PUAs?
Yes, you would. ;-)
The kind of people who have long-term relationships based on honesty, equality and support are probably unlikely to self-select for participation in PUA forums and hence their experiences and viewpoints will be under-represented in those circles compared to their prevalence in the general population.
Data point: I have been married for 15 years and would not classify myself as a PUA in any sense, although based on what statistics I’ve read about men in general, I would have to consider myself to have had above-average sexual success (though not drastically so) before I got married—largely due to behaviors PUAs would’ve described as social game, direct game, and qualifying. (However, the terms didn’t exist at the time, as far as I know—this was pre-internet for the most part.)
At no time was a lack of honesty, equality, or support part of what I did or sought, so I’m not sure why you think they are anathema to PUA goals.
PUA literature, like so many other things, is largely what you make of it. When I look at it, I find the parts that are positive, life-affirming, and utility-increasing for everybody involved. So your objections look to me like strawman attacks.
One thing I have observed is that once I’ve read the parts of PUA theory that sound good (i.e., more politically correct), I find that on reading the less politically-correct things, they are actually advocating similar behaviors, and simply describing them differently. Some use more inflammatory and controversial language laced with all sorts of negative judgments about men and women; others emphasize empathy and helping men to see things from women’s point of view (without an added heap of patronizing the women in the process).
And yet, when it comes right down to it, they’re still saying to do the same things; it’s only the connotations of their speech that are different.
IOW, ISTM that you are arguing with the misogynistic connotations of some fragment of PUA theory that you’ve encountered; I disagree because the connotations are AFAICT superfluous to functional PUA advice, having had the opportunity to compare misogynistically-connotated and non-misogynistically-connotated descriptions of the same thing.
This is something that PUA and self-help in general have in common, btw: they are best read in such a way as to completely disregard connotation, judgment, and theory, in favor of simply extracting as directly as possible what precise behaviors are being recommended and what predictions are being made regarding the outcomes of those behaviors. Only after determining whether the behavior produces the predicted result, is it worth exploring (or refuting) the advocate’s theories about “how” or “why” it works.
Case in point: “The Secret” and other “law of attraction stuff”, much of which turns out to be scientifically valid, if (and only if) you completely ignore the nutty theories and focus on behavior and predictions. Richard Wiseman’s research into “luck theory” actually demonstrates that the behaviors and attitudes recommended by certain “law of attraction” proponents actually do make you luckier, by increasing the probability that you will notice and exploit serendipitous positive opportunities in your environment.
If Wiseman had simply dismissed “The Secret” as another nutty new-age misinterpretation of physics, that research couldn’t have been done. I suggest that if you seriously intend to research PUA (as opposed to making what seem to me like strawman arguments against it), you follow Wiseman’s example, and break down whatever you read into concrete behaviors and outcome predictions, minus any theories or political connotations of theories.
I think your position is going to turn out to be unfalsifiable on the point of whether relationships involving honesty, equality and mutual support actually exist. If your response to claims that they exist is to say “Well in my experience they don’t exist, the people who think they do are just deluded” I can’t provide any evidence that will change your views. After all, I could just be deluded.
As for whether I’m engaging with, and have read, the “real” PUA literature or the “good” PUA literature, I’m not sure whether or not this is an instance of the No True Scotsman argument. There’s no question that a large part of the PUA literature and community are misogynist and committed to an ideology that positions themselves as high-status and women and non-PUA men as low-status. As such that part of PUA culture is antithetical to the goals of LW as I understand them since those goals include maximising everyone’s utility.
If there’s a subset of positive-utility PUA thinking then that criticism does not apply and it’s at least possible that if they have scientific data to back up their claims then there is something useful to be found there.
I think it’s the PUA advocates’ burden of proof to show us that data, though, if there really is an elephant of good data pertinent to pursuing high net-utility outcomes in the room, as opposed to some truisms which predate PUA culture by a very long time, hidden under an encrustation of placebo superstitions.
I think your position is going to turn out to be unfalsifiable on the point of whether relationships involving honesty, equality and mutual support actually exist.
Huh? I didn’t say those things didn’t exist. I said I was not searching for a lack of those things (I even bolded the word “lack” so you wouldn’t miss it), and that I don’t see why you think that PUA requires such a lack.
No True Scotsman argument
Authentic Man Program and Johnny Soporno are the two schools I’m aware of that are strongly in the honesty and empowerment camps, AFAICT, and would constitute the closest things to “true scotsmen” for me. Most other things that I’ve seen have been a bit of a mixed bag, in that both empathetic and judgmental material (or honest and dishonest) can both be found in the same set of teachings.
Of notable interest to LW-ers, those two schools don’t advocate even the token dishonesty of false premises for starting a conversation, let alone dishonesty regarding anything more important than that.
(Now, if you want to say that these schools aren’t really PUA, then you’re going to be the one making a No True Scotsman argument. ;-) )
and it’s at least possible that if they have scientific data to back up their claims then there is something useful to be found there.
As I said, I’m less interested in “scientific” evidence than Bayesian evidence. The latter can be disappointingly orthogonal to the former, in that what’s generally good scientific evidence isn’t always good Bayesian evidence, and good Bayesian evidence isn’t always considered scientific.
More to the point, if your goals are more instrumental than epistemic, the reason why a particular thing works is of far less interest than whether it works and how it can be utilized.
I took a quick look at AMP and Soporno’s web sites and I’m more than happy to accept them as non-misogynistic dating advice sources aiming for mutually beneficial relationships. I wasn’t previously aware of them but I unconditionally accept them as True Scotsmen.
I’m now interested in how useful their advice is, either in instrumental or epistemic terms. Either would be significant, but if there is no hard evidence then the fact that their intentions are in step with those of LW doesn’t get them a free pass if they don’t have sound methodology behind their claims.
I’m aware Eliezer thinks there’s a difference between scientific evidence and Bayesian evidence, but it’s my view that this is because he has a slightly unsophisticated understanding of what science is. My own view is that the sole difference between the two is that science commands you to suspend judgment until the null hypothesis can be rejected at p<0.05, at least for the purposes of what is allowed into the scientific canon as provisional fact, and Bayesians are more comfortable making bets with greater degrees of uncertainty.
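To make that contrast concrete, here is a minimal sketch in Python with toy numbers of my own choosing (a coin flipped 100 times, nothing from this discussion): 60 heads narrowly fails the p<0.05 bar, yet a Bayesian starting from a uniform prior can still quote a posterior probability that the coin favours heads, and bet on it.

```python
from scipy.stats import beta, binomtest

heads, n = 60, 100  # toy data: 60 heads in 100 flips

# Scientific canon: exact two-sided binomial test of the null p = 0.5.
p_value = binomtest(heads, n, 0.5).pvalue
print(f"p-value: {p_value:.3f}")  # ~0.057, so not admitted as provisional fact

# Bayesian: a uniform Beta(1,1) prior updates to a Beta(61,41) posterior,
# which supports a bet that the coin favours heads.
p_biased = 1 - beta.cdf(0.5, heads + 1, n - heads + 1)
print(f"P(coin favours heads | data): {p_biased:.3f}")  # ~0.98
```

Same data, same likelihoods; the only difference is whether you are willing to act before the conventional threshold is crossed.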
Regardless, if your goals are genuinely instrumental you very much want to figure out what parts of the effect are due to placebo effects and what parts are due to real effects, so you can maximise your beneficial outcomes with a minimum of effort. If PUA is effective to some extent but solely due to placebo effects then it only merits a tiny footnote in a rationalist approach to relationships. If it has effects beyond placebo effects then and only then is there something interesting for rationalists to look at.
Regardless, if your goals are genuinely instrumental you very much want to figure out what parts of the effect are due to placebo effects and what parts are due to real effects, so you can maximise your beneficial outcomes with a minimum of effort.
There is a word for the problem that results from this way of thinking about instrumental advice. It’s called “akrasia”. ;-)
Again, if you could get people to do things without taking into consideration the various quirks and design flaws of the human brain (from our perspective), then self-help books would be little more than to-do lists.
In general, when I see somebody worrying about placebo effects in instrumental fields affected by motivation, I tend to assume that they are either:
1. Inhumanly successful and akrasia-free at all their chosen goals (not bloody likely),
2. Not actually interested in the goal being discussed, having already solved it to their satisfaction (à la skinny people accusing fat people of lacking willpower), or
3. Very interested in the goal, but not actually doing anything about it, and thus very much in need of a reason to discount their lack of action by pointing to the lack of “scientifically” validated advice as their excuse for why they’re not doing that much.
I’d prefer not to discuss this at the ad hominem level. You can assume for the sake of argument whichever of those three assumptions you prefer is correct, if it suits you. I’m indifferent to your choice—it makes no difference to my utility. I make no assumptions about why you hold the views you do.
My view is that the rationalist approach is to take it apart to see how it works, and then maybe afterwards put the bits that actually work back together with a dollop of motivating placebo effect on top.
The best way to approach research into helping overweight people lose weight is to study human biochemistry and motivation, and see what combinations of each work best. Not to leave the two areas thoroughly entangled and dismiss those interested in disentangling them as having the wrong motivations. I think the same goes for forming and maintaining romantic relationships.
I’d prefer not to discuss this at the ad hominem level.
Me either. I was asking you for a fourth alternative on the presumption that you might have one.
FWIW, I don’t consider any of those alternatives somehow bad, nor is my intention to use the classification to score some sort of points. People who fall into category 3 are of particular interest to me, however, because they’re people who can potentially be helped by understanding what it is they’re doing.
To put it another way, it wasn’t a rhetorical question, but one of information. If you fall in category 1 or 2, we have little further to discuss, but that’s okay. If you fall in category 3, I’d like to help you out of it. If you fall in an as-yet-to-be-seen category 4, then I get to learn something.
So, win, win, win, win, in all four cases.
The best way to approach research into helping overweight people lose weight is to study human biochemistry and motivation, and see what combinations of each work best.
This is conflating things a bit: my reference to weight loss was pointing out that “universal” weight-loss advice doesn’t really exist, so a rationalist seeking to lose weight must personally test alternatives, if he or she cannot afford to wait for science to figure out the One True Theory of Weight Loss.
My view is that the rationalist approach is to take it apart to see how it works
This presupposes that you already have something that works, which you will not have unless you first test something. Even if you are only testing scientifically-validated principles, you must still find which are applicable to your individual situation and goals!
Heck, medical science uses different treatments for different kinds of cancer, and occasionally different treatments for the same kind of cancer, depending on the situation or the actual results on an individual. Does this mean that medical science is irrational? If not, then pointing a finger at the variety of situation-specific PUA advice is just rhetoric, masquerading as reasoning.
I imagine you’d put me in category #2 as I’m currently in a happy long-term relationship. However my self-model says that three years ago, when I was single and looking for a partner, I would still have wanted to know what the actual facts about the universe were, so I’d put myself in category #4: the category of people for whom it’s reflexive to ask what the suitably blinded, suitably controlled evidence says, whether or not they personally have a problem at that point in their lives with achieving relevant goals.
I think we should worry about placebo effects everywhere they get in the way of finding out how the universe actually works, whether they happen to be in instrumental fields affected by motivation or somewhere else entirely.
That didn’t mean that I chose celibacy until the peer-reviewed literature could show me an optimised mate-finding strategy, of course, but it does mean that I don’t pretend that guesswork based on my experience is a substitute for proper science.
The difference between your PUA example and medicine is that medicine usually has relevant evidence for every single one of those medical decisions. (Evidence-based medicine has not yet driven the folklore out of the hospital by a long chalk but the remaining pockets of irrationality are a Very Bad Thing). Engineers use different materials for different jobs, and photographers use different lenses for different shots too. I don’t see how the fact that these people do situation-specific things gets you to the conclusion that because PUAs are doing situation-specific things too they must be right.
I don’t see how the fact that these people do situation-specific things gets you to the conclusion that because PUAs are doing situation-specific things too they must be right.
It doesn’t. It just refutes your earlier rhetorical conflation of PUA with alternative medicine on the same grounds.
At this point, I’m rather tired of you continually reframing my positions to stronger positions, which you can then show are fallacies.
I’m not saying you’re doing it on purpose (you could just be misunderstanding me, after all), but you’ve been doing it a lot, and it’s really lowering the signal-to-noise ratio. Also, you appear to disagree with some of LW’s premises about what “rationality” is. So, I don’t think continued discussion along these lines is likely to be very productive.
It doesn’t. It just refutes your earlier rhetorical conflation of PUA with alternative medicine on the same grounds.
My intent was to show that in the absence of hard evidence PUA has the same epistemic claim on us as any other genre of folklore or folk-psychology, which is to say not much.
At this point, I’m rather tired of you continually reframing my positions to stronger positions, which you can then show are fallacies.
I admit I’m struggling to understand what your positions actually are, since you are asking me questions about my motivations and accusing me of “rhetoric, masquerading as reasoning” but not telling me what you believe to be true and why you believe it to be true. Or to put it another way, I don’t believe you have given me much actual signal to work with, and hence there is a very distinct limit to how much relevant signal I can send back to you.
Maybe we should reboot this conversation and start with you telling me what you believe about PUA and why you believe it?
Maybe we should reboot this conversation and start with you telling me what you believe about PUA and why you believe it?
Ok. I’ll hang in here for a bit, since you seem sincere.
Here’s one belief: PUA literature contains a fairly large number of useful, verifiable, observational predictions about the nonverbal aspects of interactions occurring between men and women while they are becoming acquainted and/or attracted.
Why do I believe this? Because their observational predictions match personal experiences I had prior to encountering the PUA literature. This suggests to me that when it comes to concrete behavioral observations, PUAs are reasonably well-calibrated.
For that reason, I view such PUA literature—where and only where it focuses on such concrete behavioral observations—as being relatively high quality sources of raw observational data.
In this, I find PUA literature to be actually better than the majority of general self-help and personal development material, as there is often nowhere near enough in the way of raw data or experiential-level observation in self-help books.
Of course, the limitation on my statements is the precise definition of “PUA literature”, as there’s definitely a selection effect going on. I tend to ignore PUA material that is excessively misogynistic on its face, simply because extracting the underlying raw data is too… tedious, let’s say. ;-) I also tend to ignore stuff that doesn’t seem to have any connection to concrete observations.
So, my definition of “PUA literature” is thus somewhat circular: I believe good stuff is good, having carefully selected which bits to label “good”. ;-)
Another aspect of my possible selection bias is that I don’t actually read PUA literature in order to do PUA!
I read PUA literature because of its relevance to topics such as confidence, fear, perceptions of self-worth, and other more common “self-help” topics that are of interest to me or to my customers. By comparison, PUA literature (again using my self-selected subset) contains much better raw data than traditional self-help books, because it comes from people who’ve relentlessly calibrated their observations against a harder goal than just, say, “feeling confident”.
Here’s one belief: PUA literature contains a fairly large number of useful, verifiable, observational predictions about the nonverbal aspects of interactions occurring between men and women while they are becoming acquainted and/or attracted.
Why do I believe this? Because their observational predictions match personal experiences I had prior to encountering the PUA literature. This suggests to me that when it comes to concrete behavioral observations, PUAs are reasonably well-calibrated.
The problem with this line of reasoning is that there are people who believe they have relentlessly calibrated their observations against reality using high-quality sources of raw observational data, and that as a result they have a system that lets them win at roulette (barring high-tech means to track the ball’s vector or identify an unbalanced wheel).
Roulette seems to be an apt comparison because, based on the figures someone else quoted or linked to earlier about a celebrated PUA hitting on 10,000 women and getting 300 of them into bed, the odds of a celebrated PUA getting laid on a single approach are about 3% (300/10,000) even according to their own claims, which is not far off the 1-in-37 (~2.7%) odds of correctly predicting exactly which hole a European roulette ball will land in.
So when these people say “I tried a new approach where I flip-flopped, be-bopped, body-rocked, negged, nugged and nogged, then went for the Dutch Rudder, and I believe this worked well”, then unless they tried this on a really large number of women, so that they could detect changes against a base rate of 3% success, I really don’t think they have any meaningful evidence. Did their success rate go up from 3% to 4% or what, and what are their error bars?
What’s the base rate for people not using PUA techniques anyway? People other than PUAs are presumably getting laid, so it’s got to be non-zero. The closer it is to 3% the less effect PUA techniques are likely to have.
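To put a rough number on “a really large number of women”: a standard two-proportion power calculation, sketched below in Python under the usual normal approximation and using the hypothetical 3% and 4% rates above, says you need thousands of approaches per condition before a one-point improvement is distinguishable from noise.

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group for a two-sided two-proportion
    z-test at 5% significance with 80% power (normal approximation)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical 3% base rate vs. a claimed improvement to 4%:
print(n_per_group(0.03, 0.04))  # ~5,300 approaches per group
```

If a field report involves dozens of approaches rather than thousands, it simply cannot detect an effect of that size.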
I’ve already heard the response “Look, we don’t get just one bit of data as feedback. We PUAs get all sorts of nuanced feedback about what works and does not”. If that’s so, and this feedback is doing some good, it should be reflected in your hit rate for getting laid. If picking up women and getting them into bed is an unfair metric for PUA effectiveness, I really think it should be called something other than PUA.
My thinking is that you don’t have enough data to distinguish whether you are in a world where PUA training has a measurable effect, or a world where PUAs have an unfalsifiable mythology that allows them to explain their hits and misses to themselves, plus a collection of superstitions about what works and does not, but no actual knowledge that separates them, in terms of success rate, from those who simply scrub up, dress up and ask a bunch of women out.
I want to see that null hypothesis satisfactorily falsified before I allow that there is an elephant in the room.
Notice that nowhere in my post did I say pickup artists get laid, let alone that they get laid more often!
Nowhere did I state anything about their predictions of what behavior works to get laid!
I even explicitly pointed out that the information I’m most interested in obtaining from PUA literature has nothing to do with getting laid!
So just by talking about the subject of getting laid, you demonstrate a complete failure to address what I actually wrote, vs. what you appear to have imagined I wrote.
So, please re-read what I actually wrote and respond only to what I actually wrote, if you’d like me to continue to engage in this discussion.
Okay. What observable outcomes do you think you can obtain at better-than-base-rate frequencies employing these supposed insights, and why do you think you can obtain them?
As I said earlier, I think that if PUA insights cannot be cashed out in a demonstrable improvement in the one statistic which you would think would matter most to them, rate of getting laid, then there are grounds to question whether these supposed insights are of any use to anyone.
But if you would prefer to use some other metric I’m willing to look at the evidence.
That didn’t mean that I chose celibacy until the peer-reviewed literature could show me an optimised mate-finding strategy, of course, but it does mean that I don’t pretend that guesswork based on my experience is a substitute for proper science.
Guesswork based on your experience isn’t supposed to be a substitute for science. It’s the part of science that you do when choosing which phenomena you want to test, well before you get to the blinding and peer review.
The flip side is that proper science isn’t a substitute for either instrumental rationality or epistemic rationality. Limiting your understanding of the world entirely to what is already published in journals gives you a model of the world that is subjectively objectively wrong.
I don’t disagree but a potentially interesting research area isn’t an elephant in the room that demands attention in a literature review, and limiting yourself to proper science is no sin in a literature review either. Only when the lessons we can learn from proper science are exhausted should we start casting about in the folklore for interesting research areas, and we certainly shouldn’t put much weight on anecdotes from this folklore. In Bayesian terms such anecdotes should shift our prior probability very, very slightly if at all.
My own view is that the sole difference between the two is that science commands you to suspend judgment until the null hypothesis can be rejected at p<0.05, at least for the purposes of what is allowed into the scientific canon as provisional fact, and Bayesians are more comfortable making bets with greater degrees of uncertainty.
Why don’t you first describe one, then the other, then contrast them? Then, describe Eliezer’s view and contrast that with your position.
I’ll try to do it briefly, but it will be a bit tight. Let’s see how we go.
Bayes’ Theorem is part of the scientific toolbox. Pick up a first year statistics textbook and it will be in there, although not always under that name (look for “conditional probability” or similar constructs). Most of scientific methodology is about ensuring that you do your Bayesian updating right, by correctly establishing the base rate and the probability of your observations given the null hypothesis. (Scientists don’t state their P(A), but they certainly have an informal sense of what P(A) is likely to be and are more inclined to question a conclusion if it is unlikely than if it is likely).
If you’re doing Bayes right it’s the same as doing science, but I think some of the LW groupthink holds that you can do a valid Bayesian update in the absence of a rigorously established base rate, and so they think this is a difference between being a good Bayesian and being a good scientist. I think they are just being bad Bayesians since updating is no better than guesswork in the absence of a rigorously obtained P(B).
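To illustrate what a rigorously obtained P(B) buys you, here is the textbook base-rate example, sketched in Python (the medical-test numbers are the standard illustrative ones, nothing to do with this dispute):

```python
def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem, expanding P(B) over both hypotheses."""
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / p_b

# A condition with a 1% base rate, a test with 90% sensitivity and a 9%
# false-positive rate:
print(posterior(0.01, 0.90, 0.09))   # ~0.09: the base rate dominates

# The same data with P(B|~A) carelessly guessed as 0.1% instead of 9%:
print(posterior(0.01, 0.90, 0.001))  # ~0.90: a tenfold-wrong conclusion
```

The arithmetic is trivial; the hard, scientific work is in measuring that false-positive term rather than guessing it.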
Eliezer (based on The Dilemma: Science or Bayes?) doesn’t quite carve up science-culture from ideal-science-methodology the way I do, and infers that there is something wrong with Science because the culture doesn’t care about revising instrumentally-indistinguishable models to make them more Eliezer-intuitive. I think this has more to do with trying to win a status war with Science than with any differences in predicted observations that matter.
That doesn’t mean it doesn’t underlie the entire structure. As an analogy, to get from New York to Miami, one must generally go south. But instructions on how to get there will be a hodgepodge of: walk north out of the building, west to the car, drive due east, then turn south… the plane takes off headed east… and turns south… etc. Showing that going south is just one of several ways to turn while walking doesn’t mean it’s conceptually no different from north for the purpose of getting from New York to Miami. Similarly:
they think this is a difference between being a good Bayesian and being a good scientist.
If one is paid to do plumbing, then there is no difference between being a good plumber and a “good Bayesian”, and in that sense there is no difference between being a “good Bayesian” and a “good scientist”.
In the sense in which it is intended, there is a difference between being a “good Bayesian” and a “good scientist”. To continue the analogy, if one must go from Ramsey to JFK airport across the Tappan Zee Bridge, one’s route will be on a convoluted path to a bridge that’s in a monstrously inconvenient location. It was built there—at great additional expense as that is where the river is widest—to be just outside of the NY/NJ Port Authority’s jurisdiction. The best route from Ramsey to Miami may be that way, but that accommodates human failings, and is not the direct route. Likewise for every movement that is made in a direction not as the crow flies. Bayesian laws are the standard by which the crow flies, against which it makes sense to compare the inferior standards that better suit our personal and organizational deficiencies.
infers that there is something wrong with Science
Well, yes and no. It’s adequately suited for the accumulation of not-false beliefs, but it both could be better instrumentally designed for humans and is not the bedrock of thinking by which anything works. The thing that is essential to the method you described is that “Scientists…have an informal sense of what P(A) is likely to be and are more inclined to question a conclusion if it is unlikely than if it is likely”. What abstraction describes the scientist’s thought process, the engine within the scientific method? I suggest it is Bayesian reasoning, but even if it is not, one thing it cannot be is more of the Scientific method, as that would lead to recursion. If it is not Bayesian reasoning, then there are some things I am wrong about, and Bayesianism is a failed complete explanation, and the Scientific method is half of a quite adequate method—but they are still different from each other.
the probability of your observations given the null hypothesis.
Raising P(B|~A) lowers P(A|B) by Bayes’ Rule, so the direction is right—that’s why we can make planes that don’t fall out of the sky. But just using P(B|~A) isn’t what’s done, because scientists interject their subjective expectations here and pretend they do not. P(B|~A) doesn’t contain whether or not a researcher would have published something had she found a two-tail rather than one-tail test—a complaint about a paper I read just a few hours ago. What goes into p-values necessarily involves the arbitrary classes the scientist has decided evidence would fit in, and then measures his or her surprise at the class of evidence that is found. That’s not P(B|~A), it’s P(C|~A).
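Written out in full (this is just standard Bayes’ theorem, nothing specific to this dispute):

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)}$$

so with P(B|A) and P(A) held fixed, any increase in P(B|~A) strictly decreases P(A|B): a monotone decreasing relationship, though not literally inverse proportionality.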
you can do a valid Bayesian update in the absence of a rigorously established base rate...updating is no better than guesswork in the absence of a rigorously obtained P(B)
Do you have examples of boundary cases that distinguish a rigorously established base rate from one that isn’t?
I think this has more to do with trying to win a status war with Science than with any differences in predicted observations that matter.
If one believes in qualitatively different beliefs, the rigorous and the non-rigorous, one falls into paradoxes such as the lottery paradox. It’s important to establish the actual nature of knowledge as probabilistic, and not be tricked into thinking science is a separate non-overlapping magisteria with other things.
With such actually correct understanding of how beliefs should work, we can think about improving our thinking rather than eternally and in vain trying to smooth out a ripple in a rug that has a table on each of its corners, hoping our mistaken view of the world has few harmful implications like “Jesus Christ is God’s only son” and not “life begins at conception”.
Or, we could not act on our most coherent world-views, only acting according to whatever fragment of thought our non-coherent attention presents to us. Not appealing.
It’s important to establish the actual nature of knowledge as probabilistic, and not be tricked into thinking science is a separate non-overlapping magisteria with other things.
Thank you for saying my point better than I was able to.
What abstraction describes the scientist’s thought process, the engine within the scientific method? I suggest it is Bayesian reasoning but even if it is not, one thing it cannot be is more of the Scientific method, as that would lead to recursion. If it is not Bayesian reasoning, no matter, Bayesianism is a failed complete explanation and the Scientific method is half an adequate method—they are still different from each other.
I don’t think scientists think about it much. That’s more the sort of thing philosophers of science think about. The smarter scientists do what is essentially Bayesian updating, although very few of them would actually put a number on their prior and calculate their posterior based on a surprising p value. They just know that it takes a lot of very good evidence to overturn a well-established theory, and not so much evidence to establish a new claim consistent with the existing scientific knowledge.
What goes into p-values necessarily involves the arbitrary classes the scientist has decided evidence would fit in, and then measures his or her surprise at the class of evidence that is found. That’s not P(B|~A), it’s P(C|~A).
Stating your hypothesis beforehand and specifying exactly what will and will not count as evidence before you collect your data is a very good way of minimising the effect of your own biases, but naughty scientists can and do take the opportunity to cook the experiment by strategically choosing what will count as evidence. Still, overall it’s better than letting scientists pore over the entrails of their experimental results and make up a hypothesis after the fact. If a great new hypothesis comes out of the data then you have to do your legwork and do a whole new experiment to test the new hypothesis, and that’s how it should be. If the effect is real it will keep. The universe won’t change on you.
Do you have examples of boundary cases that distinguish a rigorously established base rate from one that isn’t?
It’s not a binary distinction. Rather, if you’re unaware of the ways that people’s P(B) estimates can be wildly inaccurate and think that your naive P(B) estimates are likely to be accurate then you can update into all sorts of stupid and factually false beliefs even if you’re an otherwise perfect Bayesian.
The people who think that John Edward can talk to dead people might well be perfect Bayesians who just haven’t checked to see what the probability is that John Edward could produce the effects he produces in a world where he can’t talk to dead people. If you think the things he does are improbable then it’s technically correct to update to a greater belief in the hypothesis that he can channel dead people. It’s only if you know that his results are exactly what you’d expect in a world where he’s a fake that you can do the correct thing, which is not update your prior belief that the probability that he’s a fake is 99.99...9%.
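In odds form, with hypothetical numbers for the cold-reading case (the specific values are mine, chosen only for illustration), the non-update is easy to see:

```python
# A sceptic's tiny prior odds that Edward genuinely channels the dead, and
# the probability of seeing his stage performance under each hypothesis.
prior_odds = 1e-9
likelihood_ratio = 0.95 / 0.99  # P(performance | real) / P(performance | fake)

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # still ~1e-9: evidence expected either way moves nothing
```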
If someone’s done some actual work to see if they can falsify the null hypothesis that PUA techniques are indistinguishable from a change, a comb, a shower and asking some women out, I’d be interested in seeing it. In the absence of such work I think good Bayesians have to recognise that they don’t have a P(B) with small enough error bars to be very useful.
Stating your hypothesis beforehand and specifying exactly what will and will not count as evidence before you collect your data is a very good way of minimising the effect of your own biases
Exactly, it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing. So not “If you’re doing Bayes right it’s the same as doing science”, where “science” is an imperfect human construct designed to accommodate the more biased of scientists.
If a great new hypothesis comes out of the data then you have to do your legwork and do a whole new experiment to test the new hypothesis, and that’s how it should be. If the effect is real it will keep. The universe won’t change on you.
These are costs. It’s important, and in some contexts cheap, to know why and how things work instead of saying “I’ll ignore that since enough replication always solves such problems,” when one doesn’t know in which cases one is doing nearly pointless extra work and in which one isn’t doing enough replication. It’s an obviously sub-optimal solution along the lines of “thinking isn’t important; assume infinite resources.”
you can update into all sorts of stupid and factually false beliefs even if you’re an otherwise perfect Bayesian.
It’s praise through faint damnation of the laws of logic that they don’t prevent one from shooting one’s own foot off. Handcuffs are even better at that task, but they are less useful for figuring out what is true.
It’s not a binary distinction.
Exactly, so in “some of the LW groupthink holds that you can do a valid Bayesian update in the absence of a rigorously established base rate,” they are right, and “updating is no better than guesswork in the absence of a rigorously obtained P(B),” is not always true, such as when the following condition doesn’t apply, and it doesn’t here:
if you’re unaware of the ways that people’s P(B) estimates can be wildly inaccurate and think that your naive P(B) estimates are likely to be accurate
What do you think this site is for? People are reading and sharing research papers about biases in their free time. One could likewise criticize jet fuel for being inappropriate for an old fashioned coal powered locomotive. Yes, jet fuel will explode a train...this is not a flaw of jet fuel, and it does not mean that the coal-train is better at transporting things.
If someone’s done some actual work to see if they can falsify the null hypothesis that PUA techniques
That’s not the claim in question.
In any case, there are better ways to think about this subject than with null hypotheses. Those are social constructs focused (decently) on preventing belief in untrue things, rather than on determining what’s most likely true; here, false beliefs have relatively less cost than in most of science, and will in any case only be held probabilistically.
Exactly, it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing. So not “If you’re doing Bayes right it’s the same as doing science”, where “science” is an imperfect human construct designed to accommodate the more biased of scientists.
There’s a very good reason why we do double-blind, placebo-controlled trials rather than just recruiting a bunch of people who browse LW to do experiments with, on the basis that since LWers are “trained in debiasing” they are immune to wishful thinking, confirmation bias, the experimenter effect, the placebo effect and so on.
I have a great deal more faith in methodological constructs that make it impossible for bias to have an effect than in people’s claims to “debiased” status.
Don’t get me wrong, I think that training in avoiding cognitive biases is very important because there are lots of important things we do where we don’t have the luxury of specifying our hypotheses in strictly instrumental terms beforehand, collecting data via suitably blinded proxies and analysing it just in terms of our initial hypothesis.
However my view is that if you think that scientific methodology is just a set of training wheels for people who haven’t clicked on all the sequences yet and that browsing LW makes you immune to the problems that scientific methodology exists specifically to prevent then it’s highly likely you overestimate your resistance to bias.
These are costs. It’s important, and in some contexts cheap, to know why and how things work instead of saying “I’ll ignore that since enough replication always solves such problems,” when one doesn’t know in which cases one is doing nearly pointless extra work and in which one isn’t doing enough replication. It’s an obviously sub-optimal solution along the lines of “thinking isn’t important; assume infinite resources.”
There’s also a cost to acting on the assumption that every correlation is meaningful in a world where we have so much data available to us that we can find arbitrarily large numbers of spurious correlations at p<0.01 if we try hard enough. Either way you’re spending resources, but spending resources in the cause of epistemological purity is okay with me. Spending resources on junk because you are not practising the correct purification rituals is not.
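The scale of that problem is easy to demonstrate. The sketch below (pure Python, with sizes chosen purely for illustration) correlates 200 streams of sheer noise against each other and still turns up a couple of hundred pairs “significant” at p<0.01:

```python
import random

random.seed(0)
n_vars, n_obs = 200, 100

# 200 completely independent random "variables", 100 observations each.
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Critical |r| for p < 0.01 (two-tailed) at n = 100 is about 0.256.
pairs = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)]
spurious = sum(1 for i, j in pairs if abs(pearson_r(data[i], data[j])) > 0.256)
print(f"{spurious} 'significant' correlations out of {len(pairs)} noise pairs")
# Expect roughly 1% of 19,900 pairs, i.e. on the order of 200 junk findings.
```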
It’s praise through faint damnation of the laws of logic that they don’t prevent one from shooting one’s own foot off. Handcuffs are even better at that task, but they are less useful for figuring out what is true.
The accepted scientific methodology is more like a safety rope or seat belt. Sometimes annoying, almost always rational.
What do you think this site is for? People are reading and sharing research papers about biases in their free time. One could likewise criticize jet fuel for being inappropriate for an old fashioned coal powered locomotive. Yes, jet fuel will explode a train...this is not a flaw of jet fuel, and it does not mean that the coal-train is better at transporting things.
Rather than what a site is for I focus on what a site is.
In many, many ways this site has higher quality discourse than, say, the JREF forums and a population who on average are better versed in cognitive biases. However this discussion has made it obvious to me that on average the JREF forumites are far more aware than the LWers of the various ways that people’s estimates of P(B) can be wrong and can be manipulated.
They would never put it in those terms since Bayes is a closed book to them, but they are very well aware that you can work yourself into completely wrong positions if you aren’t sophisticated enough to correctly estimate the actual base rate at which one would expect to observe things like homeopathy apparently working, people apparently talking to the dead, people apparently having psychic powers, NLP apparently letting you seduce people and so on in worlds where none of these things did anything except act as placebos (at best).
If your P(B) is off then using Bayes’ Theorem is just being a mathematically precise idiot instead of an imprecise idiot. You’ll get to exactly the right degree of misguided belief, based on the degree to which you’re mistaken about the correct value of P(B), but that’s still far worse than being someone who wouldn’t know Bayes from a bar of soap but who intuitively perceives something closer to the correct P(B).
The idea that LW browsers think they are liquid-fuelled jets while the scientists who do the actual work of moving society forward are boring old coal trains worries me. I think of LW’s “researchers” as a bunch of enthusiastic amateurs with cheap compasses and hand-drawn maps running around in the bushes in a mildly organised fashion, while scientists are painstakingly and one inch at a time building a gigantic sixteen-lane highway for us all to drive down.
There’s a very good reason why we do double-blind, placebo-controlled trials rather than just recruiting a bunch of people who browse LW to do experiments with
Yes, and people who actually understand the tradeoffs in using formal scientific reasoning and its deviations from the laws of reasoning are the only people in position to intelligently determine that. Those who say “always use the scientific method for important things” or, though I don’t know that there ever has been or ever will be such a person, “always recruit a bunch of people who browse LW,” are not thinking any more than a broken clock is ticking. As an analogy, coal trains are superior to jet planes for transporting millions of bushels of wheat from Alberta to Toronto. It would be inane and disingenuous for broken records always calling for the use of coal trains to either proclaim their greater efficiency in determining which vehicle to use to transport things because they got the wheat case right or pretend that they have a monopoly on calling for the use of trains.
With reasoning, one can intelligently determine a situation’s particulars and spend to eliminate a bias (for example by making a study double-blind) rather than doing that all the time or relying on skill in a given case, and without relying on intuition to determine when. One can see that in an area the costs of thinking something true when it isn’t exceed the costs of thinking it’s false when it’s true, and set up correspondingly strict protocols, rather than blindly always paying in true things not believed, time, and money for the same, sometimes inadequate and sometimes excessive, amount of skepticism.
However my view is that if you think that scientific methodology is just a set of training wheels for people who haven’t clicked on all the sequences yet and that browsing LW makes you immune to the problems that scientific methodology exists specifically to prevent
My view is that if you think anyone who has interacted with you in this thread has that view you have poor reading comprehension skills.
There’s also a cost to acting on the assumption that every correlation is meaningful
So one can simply...not do that. And be a perfectly good Bayesian.
spending resources in the cause of epistemological purity is okay with me. Spending resources on junk because you are not practising the correct purification rituals is not.
It is not the case that every expenditure reducing the likelihood that something is wrong is optimal, as one could instead spend a bit on determining which areas ought to have extra expenditure reducing the likelihood that something is wrong there.
In any case, science has enshrined a particular few levels of spending on junk that it declares perfectly fine because the “correct” purification rituals have been done. I do not think that such spending on junk is justified because in those cases no, science is not strict enough. One can declare a set of arbitrary standards and declare spending according to them correct and ideologically pure or similar, but as one is spending fungible resources towards research goals this is spurious morality.
You’ll get to exactly the right degree of misguided belief...far worse than being someone who wouldn’t know Bayes from a bar of soap but who intuitively
Amazing, let me try one. If a Bayesian reasoner is hit by a meteor and put into a coma, he is worse off than a non-Bayesian who stayed indoors playing Xbox games and was not hit by a meteor. So we see that Bayesian reasoning is not sufficient to confer immortality and transcendence into a godlike being made of pure energy.
People on this site are well aware that if scientific studies following the same rules as the rest of science indicate that people have psychic powers, there’s something wrong with the scientific method and the scientists’ understanding of it, because the notion that people have psychic powers is bullshit.
The idea that LW browsers think they are liquid-fuelled jets while the scientists who do the actual work of moving society forward are boring old coal trains worries me.
People here know that there is not some ineffable magic making science the right method in the laboratory and faith the right method in church, or science the right method in the laboratory and love the right method everywhere else, science the right method everywhere and always, etc., as would have been in accordance with people’s intuitions.
How unsurprising it is that actually understanding the benefits and drawbacks of science leads one to conclude that often science is not strict enough, and often too strict, and sometimes but rarely entirely inappropriate when used, and sometimes but rarely unused when it should be used, when heretofore everything was decided by boggling intuition.
Yes, and people who actually understand the tradeoffs in using formal scientific reasoning and its deviations from the laws of reasoning are the only people in position to intelligently determine that.
I’m not going to get into a status competition with you over who is in a position to determine what.
My view is that if you think anyone who has interacted with you in this thread has that view you have poor reading comprehension skills.
The most obvious interpretation of your statement that science is “an imperfect human construct designed to accommodate the more biased of scientists” and that “it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing” is that you think your LW expertise means that you wouldn’t need those safeguards. If I misinterpreted you I think it’s forgivable given your wording, but in that case please help me out in understanding what you actually meant.
People on this site are well aware that if scientific studies following the same rules as the rest of science indicate that people have psychic powers, there’s something wrong with the scientific method and the scientists’ understanding of it, because the notion that people have psychic powers is bullshit.
My point was not that people here didn’t get that—I imagine they all do. My point is that the evidence on the table to support PUA theories is vulnerable to all the same problems as the evidence supporting claimed psychic powers, and that when it came to this slightly harder problem some people here seemed to think that the evidence on the table for PUA was actually evidence we would not expect to see in a world where PUA was placebo plus superstition.
I think the JREF community would take one sniff of PUA and say “Looks like a scam based on a placebo”, and that they would be better Bayesians when they did so than anyone who looks at the same evidence and says “Seems legit!”.
(I suspect that the truth is that PUA has a small non-placebo effect, since we live in a universe with ample evidence that advertising and salesmanship have small non-placebo effects that are statistically significant if you get a big enough sample size. However I also suspect that PUAs have no idea which bits of PUA are the efficacious bits and which are superstition, and that they could achieve the modest gains possible much faster if they knew which was which).
I’m not going to get into a status competition with you over who is in a position to determine what.
OK, I will phrase it in different terms that make it explicit that I am making several claims here (one about what Bayesianism can determine, and one about what science can determine). It’s much like I said above:
It’s adequately suited for the accumulation of not-false beliefs, but it both could be better instrumentally designed for humans and is not the bedrock of thinking by which anything works. The thing that is essential to the method you described is that “Scientists…have an informal sense of what P(A) is likely to be and are more inclined to question a conclusion if it is unlikely than if it is likely”. What abstraction describes the scientist’s thought process, the engine within the scientific method? I suggest it is Bayesian reasoning, but even if it is not, one thing it cannot be is more of the Scientific method, as that would lead to recursion. If it is not Bayesian reasoning, then there are some things I am wrong about, and Bayesianism is a failed complete explanation, and the Scientific method is half of a quite adequate method—but they are still different from each other.
Some people claim Bayesian reasoning models intelligent agents’ learning about their environments, and agents’ deviations from it is failure to learn optimally. This model encompasses choosing when to use something like the scientific method and deciding when it is optimal to label beliefs not as “X% likely to be true, 1-X% likely to be untrue,” but rather “Good enough to rely on by virtue of being satisfactorily likely to be true,” and “Not good enough to rely on by virtue of being satisfactorily likely to be true”. If Bayesianism is wrong, and it may be, it’s wrong.
The scientific method is a somewhat diverse set of particular labeling systems declaring ideas “Good enough to rely on by virtue of being satisfactorily likely to be true,” and “Not good enough to rely on by virtue of being satisfactorily likely to be true.” Not only is the scientific method incomplete by virtue of using a black-box reasoning method inside of it, it doesn’t even claim to be able to adjudicate between circumstances in which it is to be used and in which it is not to be used. It is necessarily incomplete. Scientists’ reliance on intuition to decide when to use it and when not to may well be better than using Bayesian reasoning, particularly if Bayesianism is false, I grant that. But the scientific method doesn’t, correct me if I am wrong, purport to be able to formally decide whether or not a person should subject his or her religious beliefs to it.
The most obvious interpretation of your statement that science is “an imperfect human construct designed to accommodate the more biased of scientists” and that “it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing” is that you think your LW expertise means that you wouldn’t need those safeguards.
I disagree, but here is a good example of where Bayesians can apply heuristics that aren’t first-order applications of Bayes’ rule. The failure mode of the heuristic is also easier to see here than where science is accused of being too strict (though that’s really only one part of the total claim; the other parts are that science isn’t strict enough elsewhere, that it isn’t near Pareto optimal according to its own tradeoffs in which it sacrifices truth, and that it is unfortunately taken as magical by its practitioners).
In those circumstances in which the Bayesian objection to science is that it is too strict, science can reply by ignoring that money is the unit of caring and declaring its ideological purity and willingness to always sacrifice resources for greater certainty (such as when the sacrifice is withholding FDA approval of a drug already approved in Europe): “Either way you’re spending resources, but spending resources in the cause of epistemological purity is okay with me. Spending resources on junk because you are not practising the correct purification rituals is not.”
Here, however, the heuristic is “reading charitably”, and the dangers of excess charity are really, really obvious. Nonetheless, even if I am wrong about what the best interpretation is, the extra-Bayesian ritual of reading (more) charitably would have had you thinking it more likely than you did that I meant something more reasonable (and, even more so, responding as if I did). It is logically possible that your charitable reading was ideal and my wording was simply terrible. This is a good example of how one can use heuristics other than Bayes’ rule once one discovers one is a human and therefore subject to bias. One can weigh its costs and benefits just like each feature of scientific testing.
For “an imperfect human construct designed to accommodate the more biased of scientists”, it would hardly do to assume scientists are all equally biased, and likewise for assuming the construct is optimal no matter the extent of bias in scientists. So the present situation could be improved upon by matching the social restrictions to the bias of the scientists involved and also by decreasing that bias. If mostly science isn’t strict enough, then perhaps it should be stricter in general (in many ways it should be), but the last thing to expect is that it is perfectly calibrated. It’s “imperfect”: I wouldn’t describe a rain dance as an “imperfect” method of getting rain; it would be an “entirely useless” one. Science is “imperfect”: it does very well to the extent thinking is warped to accommodate the more biased of scientists, and so something slightly different would be more optimal for the less biased ones.
“...it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing,” and less cost would be called for if they received such training, but not zero. Also, it is important to know that costs are incurred, lest evangelical pastors everywhere be correct when they declare science a “faith”. Science is roughly designed to prevent false things from being called “true” at the expense of true things not being called “true”. This currently occurs to different degrees in different sciences, and it should; some of those areas should be stricter and some less strict, and in all cases people shouldn’t be misled into thinking there is a qualitative difference between a rigorously established base rate and one not so established, or between science and predicting one’s child’s sickness when it vomits a certain color in the middle of the night.
My point is that the evidence on the table to support PUA theories is vulnerable to all the same problems as the evidence supporting claimed psychic powers
It’s not too similar, since psychic powers have been “found” in controlled scientific studies, and they are (less than infinitely, but nearly) certainly not real. PUA theories were formed from people’s observations; then people developed ideas based on those theories and tested them, though insufficiently rigorously. Each such idea is barely more likely than the base rate to be correct, given all the failure nodes, but each is more likely, the way a particle of barely enriched uranium is more likely to be U-235 than a particle of natural uranium is. This is in line with “However I also suspect that PUAs have no idea which bits of PUA are the efficacious bits and which are superstition, and that they could achieve the modest gains possible much faster if they knew which was which”.
When it comes to action, one should act on the ideas most likely to be true if one must act, all else equal. Consider the psychological experiments in which one is paid a fixed amount for each correct guess of whether an item is red or blue: if one determines that 60% of the items are red, one should always guess red.
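To make the arithmetic concrete, here is a minimal sketch (the payoff of 1 per correct guess is an arbitrary assumption; the point is only that maximizing beats probability matching):

```python
# Compare "always guess red" with probability matching when 60% of
# items are red and each correct guess pays a fixed amount (assumed 1).
p_red = 0.6

# Always guessing red: correct exactly when the item is red.
ev_always_red = p_red * 1.0

# Probability matching: guess red 60% of the time, blue 40% of the time.
ev_matching = p_red * p_red + (1 - p_red) * (1 - p_red)

print(f"always red: {ev_always_red:.2f}")  # 0.60
print(f"matching:   {ev_matching:.2f}")    # 0.52
```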
Any chance of turning this (and some of your other comments) into a top-level post? (perhaps something like, “When You Can (And Can’t) Do Better Than Science”?)
I think the first section should ignore the philosophy of science and cover the science of science, the sociology of it, and concede the sharpshooter’s fallacy by assuming that whatever science does, it is trying to do. The task of improving upon the method is then not too normative, since one can simply achieve the same results with fewer resources, or better results with the same resources. Also, that way science can’t blame perceived deficiencies on the methods of philosophy, as it could if one evaluated science according to philosophy’s methods and standards. This section would be the biggest added piece of value that isn’t tying together things already on this site.
A section should look for places in the scientific method’s flowchart where an edge has only one labeled node, i.e. where science requires input from a mystery method, such as how scientists generate hypotheses or how scientific revolutions occur. These show the incompleteness of the scientific method as a means to acquire knowledge, even if it is perfect at what it does. Formalization and improvement of the mystery methods would contribute to the scientific method, even if nothing formal within the model changes.
A section should discuss how science isn’t a single method (according to just about everybody), but instead a family of similar methods varying especially among fields. This weakens any claim idealizing science in general, as at most one could claim that a particular field’s method is ideal for human thought and discovery. Assuming each (or most) fields’ methods are ideal (this is the least convenient possible world for the critic of the scientific method as practiced), the costs and benefits of using that method rather than a related scientific method can be speculated upon. I expect to find, as policy debates should not be one-sided, that were a field to use other fields’ methods it would have both advantages and disadvantages; the simple case is the choice of a stricter p-value threshold, which trades true things not believed for fewer wrong things believed.
Sections should discuss abuses of statistics: one covering violations of the law (failing to actually test P(B|~A) and instead testing P((B + (some random stuff) - (some other random stuff))|~A)), and another covering systemic failures such as publication bias and the failure to publish replications. This would be a good place to introduce intra-scientific debates about such things, to show both that science isn’t a monolithic outlook that can be supported and that one side in the civil war is aligned with Bayesian critiques. To the extent science is not settled on what the sociology of science is, that is a mark of weakness: it may be perfectly calibrated, but it isn’t very discriminatory here.
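As a hypothetical illustration of the publication-bias failure just mentioned (all numbers invented; assumes scipy is available), a simulation shows how selective publication manufactures an “effect” out of nothing:

```python
# Simulate 1000 small studies of a true effect of exactly zero, where
# only results reaching p < 0.05 get published.
import random
import statistics
from scipy import stats

random.seed(0)
published = []
for _ in range(1000):
    sample = [random.gauss(0.0, 1.0) for _ in range(20)]  # true mean is 0
    _, p = stats.ttest_1samp(sample, 0.0)
    if p < 0.05:  # only "significant" studies make it into the literature
        published.append(statistics.mean(sample))

print(f"published: {len(published)} of 1000 studies")
print(f"mean |published effect|: {statistics.mean(abs(m) for m in published):.2f}")
# Roughly 5% of studies publish, and every published effect size sits
# far from the true value of zero.
```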
A concession I imagine pro-science people might make is to concede the weakness of a soft science such as sociology. Nonetheless, sociology’s scientific method is deeply related to the hard sciences’, and its shortcomings somewhat implicate them. What’s more, if sociology is so weak, one wonders where the pro-science person gets their strong pro-science view. One possibility is that they get it purely from a school of philosophy of science they wholly endorse, but in that case they have no objection in kind to those who also predict that science as practiced works decently, yet have severe criticisms of it and ideas on how to improve upon it, i.e. Bayesians.
I think it’s fair to contrast the scientific view of science with a philosophical view of Bayesianism to see if they are of the same scope. If science has no position on whether or not science is an approximation of Bayesian reasoning, and Bayesianism does, that is at least one question addressed by the one and not the other. It would be easy to invent a method that’s not useful for finding truth that has a broader scope than science, e.g. answering “yes” to every yes or no question unless it would contradict a previous response. This alone would show they are not synonymous.
A problem with the title “When You Can (And Can’t) Do Better Than Science” is that it is binary, but I really want three things explicitly expressed: 1) When you can do better than science by being stricter than science, 2) when you can do better than science by being more lenient than science, 3) when you can’t do better than science. The equivocation and slipperiness surrounding what it is reasonable to do is a significant part of the last category, e.g. one doesn’t drive where the Tappan Zee Bridge should have been built. The other part is near-perfect ways science operates now according to a reasonable use of “can’t”; I wouldn’t expect science to be absolutely and exactly perfect anywhere, any more than I can be absolutely sure with a probability of 1 that the Flying Spaghetti Monster doesn’t exist.
Second order Bayesianism deserves mention as the thing being advocated. A “good Bayesian” may use heuristics to counteract bias other than just Bayes’ rule, such as the principle of charity, or pretending things are magic to counteract the effort heuristic, or reciting a large number of variably sized numbers to counteract the anchoring effect, etc.
Is there a better analogy than the driving-to-the-airport one for why Bayes’ Rule being part of the scientific toolbox doesn’t show the scientific toolbox isn’t a rough approximation of how to apply Bayes’ Rule? The other one I thought of is light: it exhibits quantum behavior directly while being only a subset of all that is physical, yet all that is physical actually embodies quantum behavior.
A significant confusion is discussing beliefs as if they weren’t probabilistic, and actions in some domains as if they ought not to be influenced by anything outside the category of “scientifically established” true belief. Bayesianism explains why this is a useful approximation of how one should actually act, and thereby permits one to deviate from it without having to claim something like “science doesn’t work”.
Not necessarily to reopen anything, but some notes:
the placebo effect
I’m not sure it’s at all possible to debias against this.
The accepted scientific methodology is more like a safety rope or seat belt.
I agree that those are better metaphors than handcuffs, all else equal, but those things would not prevent one from shooting one’s own foot, and so they didn’t fit the broader metaphor.
A better analogy would be a law that no medical treatment can be received until a second opinion is obtained, or something like that.
My own view is that the sole difference between the two is that science commands you to suspend judgment until the null hypothesis can be rejected at p=0.05, at least for the purposes of what is allowed into the scientific canon as provisional fact, whereas Bayesians are more comfortable making bets with greater degrees of uncertainty.
His view is only slightly more strict, yet he arrives at some very different conclusions. For example, under your framework Rhine’s ESP experiments are scientific hypothesis tests, and under his they are illogical. I am not convinced by Polanyi, but it is far from clear to me how you could show he is wrong. If you know how to show he is wrong and could explain that in a couple paragraphs (or point me to such a document) I would be very interested in reading it.
Are you familiar with Michael Polanyi’s Personal Knowledge?
I’m not familiar with his work, unfortunately.
However a quote from one of the reviews concerns me. The reviewer says:
The author furnishes a thought provoking analysis that demonstrates the sufficiency (perhaps not the necessity) of a pseudo-kantian mindset that makes intelligibility possible. Reductionists, various materialists, physicalists, and sundry naturalists will recoil at the prospect that universal immutable immaterial concepts, forms, and laws are essential epistemic conditions for human experience.
If that’s Polanyi’s position it seems both kooky and not immediately relevant to the topic, so unless you can take a shot at explaining what you think Polanyi’s insights are that are relevant to the topic at hand I think we should drop this and take it up elsewhere or by other means if you want to talk about it further.
As I said, I’m less interested in “scientific” evidence than Bayesian evidence. The latter can be disappointingly orthogonal to the former, in that what’s generally good scientific evidence isn’t always good Bayesian evidence, and good Bayesian evidence isn’t always considered scientific.
What are some examples of good scientific evidence that isn’t good bayesian evidence?
What are some examples of good scientific evidence that isn’t good bayesian evidence?
Uh, how about all of parapsychology, aka “the control group for the scientific method”. ;-) Psi experiments can reach p < .05 under conventional methods without being good Bayesian evidence, as we’ve seen recently with that “future priming” psi experiment.
(Note that I said “scientific” not Scientific. ;-) )
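To spell out why a p < .05 result can be weak Bayesian evidence (with loudly invented numbers): when the prior is tiny, a significant result moves the posterior by almost nothing, even though it clears the conventional bar.

```python
# Posterior that psi is real after one significant experiment, under
# assumed numbers: prior of one in a million, 80% power, alpha = 0.05.
prior = 1e-6   # assumed prior probability that psi is real
power = 0.8    # assumed P(significant result | psi real)
alpha = 0.05   # P(significant result | psi not real)

posterior = power * prior / (power * prior + alpha * (1 - prior))
print(f"posterior: {posterior:.1e}")  # ~1.6e-05: moved, but still negligible
```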
Ok, I wouldn’t have necessarily classed that as ‘good scientific evidence’ but it seems to be useful Bayesian evidence so we must be looking at it from different angles.
and it might turn out that in some cases women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and “romantic” gestures and so forth
If they see this behavior from a stranger, they hate it like a bad smell. Yuck.
If they see a lot of this in a relationship, they begin to lose attraction for him, and in the end hate him and cheat on him.
By the way, have you studied game theory? A man who always gives you treats and compliments is signalling his own low value, therefore his treats and compliments are devalued. Yes?
My personal belief is that female utility is maximized by a man who is alpha, who leads them rather than treating them as an equal, who keeps them “on their toes” by flirting with other chicks, but who occasionally surprises them with a big romantic gesture like a surprise weekend break, champagne on ice, hot sex in the penthouse suite. But he doesn’t do it all the time, his rewards are unpredictable. This is in line with what game theory would predict.
This is in line with what game theory would predict.
Perhaps the reason you’re being downvoted is because you’re confusing game theory with behaviorism. Variable reinforcement schedules, and all that.
Also, I expect if you phrased the last part of your comment, say, as:
“People enjoy a little variety and unpredictability from their partners, and generally prefer not to have to come up with all the ideas for what to do.”
It’d be less likely to be perceived as some sort of chauvinism. That statement, as it happens, is true of both men and women.
(Likewise, the first part of your comment describes things that men do in response to women’s behavior, despite your writing it as if it were unique to women’s response to men.)
Finding ourselves with the ability to reflect on how our instinctual behavior and preferences are derived from inclusive genetic fitness necessitates neither fully accepting, nor fully rejecting these preferences.
I understand that, in seeking a romantic partner, there are qualities I value above those as determined by the blind idiot god. One of these qualities is reflectively the ability to rationally self-determine preferences, to the extent that such a thing is possible.
I liken my understanding to the fable of the oak and reed. I prefer, and indeed expect, potential romantic partners to signal appropriate … fertility, in a reductive sense. Likewise, I exhibit desirable behavioral cues (actually, much of the alpha male mentality is worthwhile in itself): confidence, leadership, non-neediness, etc. In neither case (hopefully) are these the qualities that are primarily desired, but merely the minimum threshold that our biology imposes on such endeavors.
Is finding a partner with such an understanding realistic, or even possible? Yes, to an extent. It is a very unfortunate fact of our society that females aren’t socialized in a way that facilitates rationality, relative to males; a scarcity which makes such an individual that much more appealing. I have met some, and dated a very few of these. I’m still optimistic.
Finding ourselves with the ability to reflect on how our instinctual behavior and preferences are derived from inclusive genetic fitness necessitates neither fully accepting, nor fully rejecting these preferences.
Absolutely. Just to be clear, I never said, and in fact explicitly disclaimed the former. I agree 100%.
A successful PUA was mentioned as having obtained coitus ~300 times out of ~10 000 approaches. That’s useless unless we know what success rate other methodologies would have produced. In any case people aren’t naturally such good statisticians that they can detect variations in frequency in a phenomenon that occurs one time in 33 at best with a sample size for a given experiment in the tens at most.
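A quick sketch of the detection problem (the rates and sample size below are illustrative assumptions): even a doubling of a ~3% base rate is invisible in a few dozen trials.

```python
# How distinguishable is a 3% success rate from a 6% one in 30 trials?
from scipy.stats import binom

n = 30  # approaches in a typical informal "experiment" (assumed)
for p in (0.03, 0.06):
    print(f"p={p}: P(at most 1 success in {n}) = {binom.cdf(1, n, p):.2f}")
# ~0.77 vs ~0.45: the outcome distributions overlap so heavily that an
# unaided observer has no hope of telling the two rates apart.
```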
PUA mythology seems to me to have built-in safeguards against falsifiability. If a woman rejects a PUA then it can be explained away as her being “entitled” or “conflicted” or something similar. If a woman chooses a “beta” over a PUA then it can be explained away in similar terms or by saying that she has low self-esteem and doesn’t think she is worthy of an “alpha”, and/or postulating that if an “alpha” came along she would of course engage in an extra-marital affair with the “alpha”. As long as the PUAs are obtaining sex some of the time, or are claiming they are doing so, their theories aren’t falsifiable.
We shouldn’t trust a PUA’s reported opinion about their ability to obtain sex more often than chance any more than we should trust a claimed psychic’s reported opinion about their ability to predict the future more often than chance. Obviously our prior probability that they are reporting true facts about the universe should be higher for the PUA since their claims do not break the laws of physics, but their testimony should not give us strong reason to shift our prior.
You’re assuming that there’s no feedback other than a single yes/no bit per approach.
Note that this may be a feature, not a bug: a PUA with unwavering belief in their method will likely exude more confidence, regardless of the method employed.
I remember one pickup guru describing how when he was younger, he’d found this poem online that was supposed to be the perfect pickup line… and the first few times he used it, it was, because he utterly believed it would work. Later, he had to find other methods that allowed him to have a similar level of belief.
As has been mentioned elsewhere on LW, belief causes people to act differently—often in ways that would be difficult or impossible to convincingly fake if you lacked the belief. (e.g. microexpressions, muscle tension, and similar cues)
To put it another way, even the falsifiability of PUA theory is subject to testing: i.e., do falsifiable PUA theories work better or worse than unfalsifiable ones? If unfalsifiable ones produce better results, then it’s a feature, not a bug. ;-)
Only in the same sense that the placebo effect is a “feature” of evidence-based medicine.
It’s okay if evidence-based medicine gets a tiny, tiny additional boost from the placebo effect. It’s good, in fact.
However when we are trying to figure out whether or not a treatment works we have to be absolutely sure we have ruled out the placebo effect as the causative factor. If we don’t do that then we can never find out which are the good treatments that have a real effect plus a placebo effect, and which are the fake treatments that only have a placebo effect.
Only if it turned out that method absolutely, totally did not matter and only confidence in the method mattered would it be rational to abandon the search for the truth and settle for belief in an unfalsifiable confidence-booster. It seems far more likely to me that there will in fact be approach methods that work better than others, and that only by disentangling the confounding factor of confidence from the real effect could you figure out what the real effect was and how strong it was.
This really, really underestimates the number of confounding factors. For any given man, the useful piece of information is what method will work for him, for women that:
Would be happy with him, and
He would be happy with
(Where “with” is defined as whatever sort of relationship both are happy with.)
This is a lot of confounding factors, and it’s pretty central to the tradeoff described in this post: do you go for something that’s inoffensive to lots of people, but not very attractive to anyone, or something that’s actually offensive to most people, but very attractive to your target audience?
You can’t do group randomized controls with something where individuality actually does count.
This is especially true of PUA advice like, “be in the moment” and “say something that amuses you”. How would you test these bits of advice, for example, while holding all other variables unchanged? By their very definition, they’re going to produce different behavior virtually every time you act on them.
There are two classes of claim here we need to divide up, but they share a common problem. First the classes, then the problem.
The first class is claims that are simply unfalsifiable. If there is no way even in theory that a proposition could be confirmed or falsified then that proposition is simply vacuous. There is nothing to say about it except that rational agents should discard the claim as meaningless and move on. If any element of PUA doctrine falls into this category then for LW purposes we should simply flag it as unfalsifiable and move on.
The second class is claims that are hard to prove or disprove because there are multiple confounding factors, but which with proper controls and a sufficiently large sample size we could in theory confirm or disconfirm. If a moderate amount of cologne works better than none at all or a large amount of cologne, for example, then if we got enough men to approach enough women then eventually if there’s a real effect we should be able to get a data pool that shows statistical significance despite those confounding effects.
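For a sense of the sample sizes involved (all numbers assumed for illustration), a standard two-proportion power calculation:

```python
# Approaches per group needed to distinguish an assumed 3% baseline
# success rate from an assumed 4.5% rate (5% two-sided test, 80% power).
import math

p1, p2 = 0.03, 0.045
z_alpha, z_beta = 1.96, 0.84  # standard normal quantiles

p_bar = (p1 + p2) / 2
n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p2 - p1) ** 2)
print(f"~{math.ceil(n)} approaches per group")  # on the order of 2500
```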
The common problem both classes of claims have is that a rationalist is immediately going to ask someone who proposes such a claim “How do you think you know this?”. If a given claim is terribly difficult to confirm or disconfirm, and nobody has yet done the arduous legwork to check it, it’s very hard to see how a rational agent could think it is true or false. The same goes except more strongly for unfalsifiable claims.
For a PUA to argue that X is true, but that X is impossible to prove, is to open themselves up to the response “How do you know that, if it’s impossible to prove?”.
Sure… as long as you separate predictions from theory. When you reduce a PUA theory to what behaviors you expect someone believing that theory would produce, or what behaviors, if successful, would result in people believing such theories, you then have something suitable for testing, even if the theory is nonsensical on its face.
Lots of people believe in “The Secret” because it appears to produce results, despite the theory being utter garbage. But then, it turns out that some of what’s said is consistent with what actually makes people “luckier”… so there was a falsifiable prediction after all, buried under the nonsense.
If a group of people claim to produce results, then reduce their theory to more concrete predictions first, then test that. After all, if you discard alchemy because the theory is bunk, you miss the chance to discover chemistry.
Or, in more LW-ish speak: theories are not evidence, but even biased reports of actual experience are evidence of something. A Bayesian reductionist should be able to reduce even the craziest “woo” into some sort of useful probabilistic information… and there’s a substantial body of PUA material that’s considerably less “woo” than the average self-help book.
In the simplest form, this reduction could be just: person A claims that they were unsuccessful with women prior to adopting some set of PUA-trained behaviors. If the individual has numbers (even if somewhat imprecise) and there are a large number of people similar to person A, then this represents usable Bayesian evidence for that set of behaviors (or the training itself) being useful to persons with similar needs and desires as person A.
This is perfectly usable evidence that doesn’t require us to address the theory or its falsifiability at all.
Now, it is not necessarily evidence for the validity of person A’s favorite PUA theory!
Rather, it is evidence that something person A did differently was helpful for person A… and it remains an open question to determine what actually caused the improvement. For example, could it simply be that receiving PUA training somehow changes people? That it motivates them to approach women repeatedly, resulting in more confidence and familiarity with approaching women? Any number of other possible factors?
In other words, the actual theory put forth by the PUAs doing the teaching shouldn’t necessarily be at the top of the list of possibilities to investigate, even if the teaching clearly produces results...
And using theory-validity as a screening method for practical advice is pretty much useless, if you have “something to protect” (in LW speak). That is, if you need a method that works in an area where science is not yet settled, you cannot afford to discard practical advice on the basis of questionable theory: you will throw out way too much of the available information. (This applies to the self-help field as much as PUA.)
I’m perfectly happy to engage with PUA theories on that level, but the methodological obstacles to collecting good data are still the same. So the vital question is still the same, which is “How do these people think they know these things?”.
The only difference is that instead of addressing the question to the PUA who believes specific techniques A, B and C bring about certain outcomes, we address the question to the meta-PUA who believes that although specific techniques A, B and C are placebos that belief in the efficaciousness of those techniques has measurable effects.
However PUA devotees might not want to go down this argumentative path because the likely outcome is admitting that much of the content on PUA sites is superstition, and that the outcomes of the combined arsenal of PUA tips and techniques cannot currently be distinguished from the outcomes of a change of clothes, a little personal grooming and asking a bunch of women to go out with you.
PUA devotees like to position themselves as gurus with secret knowledge. If it turns out that the entire edifice is indistinguishable from superstition then they would be repositioned as people with poor social skills and misogynist world-views who reinvented a very old wheel and then constructed non-evidence-based folk beliefs around it.
So depending on the thesis you are arguing for, it might be safer to argue that PUA techniques do have non-placebo effects.
Even if that were true (and I don’t think that’s anywhere near the case), you keep dropping out the critical meta-level for actual human beings to achieve instrumental results: i.e., motivation.
That is, even if “a change of clothes, a little grooming, and asking a bunch of women out” were actually the best possible approach, it’s kind of useless to just leave it at that, because quite a lot of actual human beings are incapable of motivating themselves to actually DO the necessary steps, using mere logical knowledge without an emotional component. (On LW, people generally use the term “akrasia” to describe this normal characteristic of human behavior as if it were some sort of strange and unexpected disease. ;-) )
To put it another way, the critical function of any kind of personal development training is to transmit a mental model to a human brain in a way such that the attached human will act in accordance with the model so transmitted.
After all, if this were not the case, then self-help books of any stripe could consist simply of short instruction sheets!
“Placebo” and “superstition” are not interchangeable concepts. A placebo is a real effect; a superstition is an imaginary one.
That is, if I think my baseball batting performance is improved when I wear a red scarf, and it is, that’s a placebo effect. (Belief creating a real result.) If I think that it’s improved, but it actually isn’t, then that’s a superstition.
This means that placebo effects are instrumentally more useful than superstitions… unless of course the superstition gets you to do something that itself has a beneficial effect.
To the extent that PUA uses placebo effects on the performer of a technique, the usefulness of the effect is in the resulting non-placebo response of the recipient of the technique.
Meanwhile, there are tons of specific pieces of PUA advice that are easily testable in miniature that needn’t rely on either sort of effect.
For example, if PUAs of the Mystery school predict that “a set will open more frequently if you body-rock away from the group before establishing a false time constraint”, that prediction should be easily testable to determine its truth or falsehood, given objectively reducible definitions of “set”, “open”, “body rock”, and “false time constraint”. (All of which terms the Mystery method does quite objectively reduce.)
So, you could teach a bunch of people to do these things, send them out and videotape ’em, and then get a bunch of grad students to grade the sets as to whether they opened and how quickly (without seeing the PUA’s behavior), and voila… testable prediction.
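A sketch of how the resulting graded data could be analyzed (the counts below are placeholders, not real data; assumes scipy):

```python
# Compare open rates between the tactic condition and a control
# condition using Fisher's exact test on a 2x2 table.
from scipy.stats import fisher_exact

opened_with, total_with = 55, 100        # hypothetical tactic condition
opened_without, total_without = 40, 100  # hypothetical control condition

table = [[opened_with, total_with - opened_with],
         [opened_without, total_without - opened_without]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```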
On the level of such specific, immediately-responded-to actions and events, ISTM that PUAs have strong motivation to eliminate non-working or negatively-reinforced behaviors from their repertoire, especially when in the process of inventing them.
Of course, removing superstitious “extras” is unlikely for a given PUA guru to notice; I have observed that it is students of those gurus, or new, competing gurus who would push back with, “I haven’t seen any need to body rock”, or “Opinion openers are unnecessary”, or “indirect game is pointless”, etc. So, even though individual schools don’t often move in the direction of discarding old techniques, the field as a whole seems to evolve towards simplification where possible.
Indeed, there is at least one PU guru who says that nearly all of Mystery method is pointless superstition in the sense that guys who jump through all its hoops are succeeding not because of what they’re doing in the process, so much as what they’re not doing.
That, in essence, women either find you attractive or they don’t, and all that your “game” needs to do is not blow the attraction by saying or doing something stupid. ;-) His specific advice seems to focus more on figuring out how to tell whether a particular woman is attracted to you, and how to move as quickly as possible from that to doing something about it.
Note: I don’t believe this guru is saying that Mystery’s advice about social skills is wrong, merely that the use of those skills can be completely superfluous to a goal of having sex with attractive women, vs. a goal of being friends with groups of people and hanging out with them before having sex with some of the women, or getting into social circles containing high-status women. And I think he’s largely correct in this stance, especially if your objective isn’t to have sex with the highest-status beautiful woman present (which is Mystery method’s raison d’etre).
If your objective is to meet, say, the kinkiest girl with the dirtiest mind, or the sweetest, friendliest one, or the most adventurous one, or really almost any other criteria, Mystery’s elaborate refinements are superfluous, as they were developed to help him rapidly social-climb his way into his target’s circle of friends and disarm their ready defenses against guys coming to hit on her.
To put it another way: Mystery is using a narrow, red-line strategy specifically tuned to women who are the most attractive to a broad, blue-line spectrum of guys… because they were also his personal red line. If your red line is not those women, then Mystery method is not the tool you should use.
PUA style, in short, is very individual. Once you add back in the context of a given guru’s personality, physique, goals, and other personal characteristics, you find that it’s nowhere near as broad-spectrum/universal as the guru’s declarations appear. Once, I watched some videos online from a conference of PUA gurus who often had what sounded like contradictory advice… but which was intended for people with different personalities and different goals.
For example, one guy focused on making lots of female friends and going out with them a lot—he enjoys it, and then they play matchmaker for him. Another emphasized a lone-wolf strategy of “forced IOIs”, which is PUA code for acting in a way that forces a woman to very quickly indicate (nonverbally) whether she has any interest in him. Just looking at these two guys, you could tell that each had chosen a method that was a better match for their personality, and that neither would be happy using the other’s method, nor would they each be meeting the kind of women they wanted to meet!
So that’s why I keep saying that you’re ignoring the fact that PUA is not a single uniform thing, any more than, say, weight loss is. In theory, everybody can eat less and move more and this will make them lose weight. In practice, it ain’t nearly that simple: different people have different nutritional needs, for example, so the diet that’s healthy for one person can be very bad for another.
Thus, if you want, say, “honest, equal, supportive” PUA, then by all means, look for it. But don’t expect to find One True PUA Theory that will make all women do your bidding. It doesn’t exist. What exists in PUA is a vast assortment of vaguely related theories aimed at very individual goals and personality types.
(And, of more direct relevance to this particular sub-thread, far too many confounding factors to be of much use to group studies, unless you plan to run a lot of experiments.)
Speaking broadly, if the goal is Rational Romantic Relationships then any advice which doesn’t have actual existing evidence to back it up is not advice rational people should be taking.
If a whole bunch of different gurus are each flogging different techniques and none of them have evidence, then a rationalist should dismiss them all until they do have some evidence, just as we dismiss the alt-med gurus who flog different forms of alternative medicine without evidence. Without evidence PUA is no more the elephant in the Rationalist Romantic Relationship room than ayurveda is an elephant in the medical science room.
As far as the superstition/placebo distinction you are making I think you are simply wrong linguistically speaking. Nothing stops a superstition being a placebo, and in fact almost all of alternative medicine could legitimately be described as placebo and superstition.
Superstitions arise because of faulty cause/effect reasoning and may indeed have a placebo effect, like the red scarf you mention. I suspect but cannot prove that some parts of PUA doctrine arise in exactly the same way that belief in a lucky scarf arises. Someone tries it, they get lucky that time, and so from then on they try it every time and believe it helps.
If some pieces of PUA technique are testable, that’s great. They should test them and publish the results. Until they do their beliefs don’t really have a place if we’re talking about Rational Romantic Relationships. If they aren’t testable, then they’re unfalsifiable beliefs and rationalists should be committed to discarding unfalsifiable beliefs. PUA looks to me more like folklore than science, at this stage.
I agree with this statement… but as previously discussed, I mean Bayesian reductionist evidence.
Which means, anecdotes count, even if they still count for less than numbers and double-blind tests.
You’re using a very non-LW definition of “rational” here, since the principles of Something To Protect, and avoiding the Failures Of Eld Science would say that it’s your job to find something and test it, not to demand that people bring you only advice that’s already vetted.
If you wait for Richard Wiseman to turn “The Secret” into “Luck Theory”, and you actually needed the instrumental result, then you lost.
That is, you lost the utility you could have had by doing the testing yourself.
For medical outcomes, doing the testing yourself is a bad idea because the worst-case scenario isn’t that you don’t get your goal, it’s that you do damage to yourself or die.
But for testing PUA or anything in personal development, your personal testing costs are ridiculously low, and the worst case is just that you don’t get the goal you were after.
This means that if the goal is actually important, and whatever scientifically-validated information you have isn’t getting you the goal, then you don’t just sit on your ass and wait for someone to hand you the research on a platter.
Anything else isn’t rational, where rational is defined (as on LW) as “winning”.
I think this is a misunderstanding of the correct application of Bayes’ Theorem. Bayes is not a magic wand, and GIGO still applies. Anecdotal evidence counts but you have to correctly estimate the probability that you would hear that anecdote in a world where PUA methods were just placebos sold to the sex-starved and nerdy, as opposed to the probability that you would hear that anecdote in a world where PUA methods have some objectively measurable effect. I think most of the time the correct estimate is that those probabilities are barely distinguishable at best.
A rationalist should have a clear distinction between Things That Are Probably True, and Things That Might Be True and Would Be Interesting To Try. The goal of the OP was to sum up the state of human knowledge with regard to Things That Are Probably True, which is the standard scholarly starting point in research.
It seemed to me that PUA techniques, lacking any objective evidence to back them up, should be filed under Things That Might Be True and Would Be Interesting To Try but that their devotees were claiming that they were the elephant in the OP’s room and that they had been unjustly excluded from the set of Things That Are Probably True.
I’m not against the ethos of going out and trying these things, as long as the testing costs really are low (i.e. you don’t pay good money for them). They might work, and even if they are just placebos you might get lucky anyway. However it’s not rational to actually believe they probably work in the absence of proper evidence, as opposed to going along with them for the sake of experiment, or to try to squeeze them in to a list of Things That Are Probably True.
Also, better placebo than nothing at all.
That a comment opening with this quote-reply pair is voted above zero troubles me. It is a direct contradiction of one of the most basic premises of this site.
I would have voted it down were it not for the rest of the paragraph cited, which basically comes down to “anecdotes are Bayesian evidence, but with caveats related to the base rate, and not always positive evidence”. Which is, as best I can tell, correct. In isolation, the opening sentence does seem to incorrectly imply that anecdotes don’t count at all, and so I’d have phrased it differently if I was trying to make the same point, but a false start isn’t enough for a downvote if the full post is well-argued and not obviously wrong.
In context, I interpreted pjeby to be saying that anecdotes counted as evidence which should lead a Bayesian rationalist to believe the truth of PUA claims. If that was not their intention I got them totally wrong.
However if I interpreted them correctly they were indeed applying Bayes incorrectly, since we should expect a base rate of PUA-affirming anecdotes even if PUA techniques are placebos, and even in the total absence of any real effects whatsoever. It’s not evidence until the rate of observation exceeds the base rate of false claims we should expect to hear in the absence of a non-placebo effect, and if you don’t know what the base rate is you don’t have enough information to carry out a Bayesian update. You can’t update without P(B).
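To make that concrete (every number below is an assumption for illustration): the size of the update depends entirely on the base rate of affirming anecdotes under the placebo hypothesis.

```python
# Posterior that PUA techniques have a real effect, given that we hear
# affirming anecdotes, as a function of the placebo-world base rate.
prior = 0.5               # assumed prior that the techniques work
p_anecdote_if_real = 0.9  # assumed P(anecdotes | real effect)

for p_anecdote_if_placebo in (0.1, 0.5, 0.9):
    p_b = (p_anecdote_if_real * prior
           + p_anecdote_if_placebo * (1 - prior))  # this is P(B)
    posterior = p_anecdote_if_real * prior / p_b
    print(f"placebo base rate {p_anecdote_if_placebo}: "
          f"posterior = {posterior:.2f}")
# 0.1 -> 0.90, 0.5 -> 0.64, 0.9 -> 0.50: with a high enough base rate
# the anecdotes carry no information at all.
```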
The truth of this statement depends heavily on how you unpack “believe”. Brains have more than one way of “believing” things, after all. A person can not “believe” in ghosts, and yet feel scared in a “haunted” house. Or more relevant to the current thread, a person can “believe” they are attractive and worthy and have every right to go up to someone and say “hi”, yet still be unable to do it.
IOW, epistemic and instrumental beliefs are compartmentalized in humans by default… which makes a mockery of the idea that manipulating your instrumental beliefs will somehow stain your epistemic purity.
Relevant: willingness to spend money to change is correlated with willingness to actually change. That doesn’t mean spending money causes change, of course, I’m just pointing out that a person’s willingness to incur the costs of changing (whatever sort of cost) is strongly correlated with them taking action to change. (See Prochaska, Norcross, et al; whose research and meta-research of a dozen different types of change goals is summarized in the book “Changing For Good”.)
[Originally, I was going to include a bunch of information about my work with personal development clients that reflects the pattern described in the above-mentioned research, but since you appear to prefer research to experience, I’ve decided to skip it.]
I place a high value on not financially encouraging bad behaviour, and selling non-evidence-based interventions to people who may be desperate, irrational or ill-informed but who don’t deserve to be robbed counts as bad behaviour to me.
There’s a loss of utility beyond the mere loss of cash to myself if I give cash to a scammer, because it feeds the scammer and potentially encourages other scammers to join the market. This is the flip side of the coin that there is a gain in utility when I give cash to a worthwhile charity.
People willing to spend money on attracting a mate have a wide variety of options as to how they spend it, after all. If they are willing to actually change it’s not as if the only way to demonstrate this is to spend money on PUA training rather than clothes, transportation, food, drink, taxi fares and so on.
As I mentioned in the other sub-thread, it’s really tiring to have you continually reframing what I say to make attackable arguments out of it. Unless your sole interest in LessWrong is to score rhetorical points (i.e., trolling), it’s a rather bad idea to keep doing that to people.
Note that the text you quoted from my comment has nothing to do with PUA. It is a portion of my evidence that your professed approach to personal development (i.e., trying things only if they cost nothing) is Not Winning.
On LessWrong, rationality equals winning, not pretending to avoid losing. (Or more bluntly: attempting to signal your intelligence and status by avoiding the low-status work of actually trying things and possibly being mistaken.)
It is better to do something wrong—even repeatedly—and eventually succeed, than to sit on your ass and do nothing. Otherwise, you are less instrumentally rational than any random person who tries things at random until something works.
Meanwhile, any time that you do not spend winning, is time spent losing, no matter how you spin it as some sort of intellectual superiority.
So, on that note, I will now return to activities with a better ROI than continuing this discussion. ;-)
(And sometimes hearing them counts as evidence against the phenomenon!)
Indeed. Careful reading is required when investigating any kind of “folklore”, as occasionally self-help authors provide anecdotes that (in the details) provide a very different picture of what is happening than what the author is saying is the point or moral of that anecdote.
For what it is worth, the majority are positioned as “acolytes”.
Hi… I haven’t read this whole thread, but I know one very important thing that immediately discredited PhilosophyTutor in my view. I strongly feel that the best PUAs are not at all about merely extracting something from the women they interact with. They claim they live by the motto “leave her better than you found her”. From my impression of Casanova, the ultimate PUA, he lived by that too.
You’re absolutely right about the methodological issues. I’ve thought it myself; besides the enormous survivor bias of course.
But it is far more irrational to discount their findings on that ground alone, because the alternative, academic studies, are blinded by exactly the same ignore-the-elephant and keep-things-proper attitude that the original poster of this thread pointed out.
Take this into account: a lot of good PUAs may fall far short of the ideal amount of rigor, but at the same time far exceed the average person’s rigor. I can’t condemn those who, without the perspective gained from this site, nevertheless seek to quantify things and really understand them.
How do they know whether they fulfill this motto well?
Whether someone does better than average is irrelevant to whether they do well enough. It’s possible, indeed very easy, to put more effort into rigor than the average person, and still fail to produce any valid Bayesian evidence.
Not something you have shown (or something that appears remotely credible).
Not much better and also not a particularly good reason to exclude an information source from an analysis. (An example of a good reason would be “people say a bunch of prejudicial nonsense for all sorts of reasons and everybody concerned ends up finding it really, really annoying”).
It is not clear to me that utilities can be easily compared. What tradeoff between my satisfaction and my partner’s satisfaction should I be willing to accept? I can see how to elicit my preferences (for things like partner happiness, relationship duration, and so on) and try to predict how the consequences of my actions will impact my preferences, but I don’t quite see how to add utilities, or compare the amount of satisfaction I could provide to multiple potential partners.
It’s not clear that they want to talk down the status of women in general. Men becoming more attractive and less annoying to women seems to be better for women, and there’s quite a bit in the PUA literature of how to keep a long-term relationship going, if that’s what you want to do.
You are absolutely right that utilities cannot be easily compared and that this is a fundamental problem for utilitarian ethics.
We can approximate a comparison in some cases using proxies like money, or in some cases by assuming that if we average enough people’s considered preferences we can approach a real average preference. However these do not solve the fundamental problem that there is no way of measuring human happiness such that we could say with confidence “Action A will produce a net 10 units of happiness, and Action B will produce a net 11 units of happiness”.
In the case of human sexual relationships what you’d really have to do is conduct a longitudinal study looking at variables like reported happiness, incidence of mental illness, incidence of suicide, partner-assisted orgasms per unit time, longevity and so on.
That said this difficulty in totalling up net utilities is not a moral blank cheque. If women report distress after a one night stand with a PUA followed by cessation of contact then that has to be taken as evidence of caused disutility, and you can’t remove the moral burden that entails by pointing out that calculating net utility is difficult or postulating that their distress is their fault because they are “entitled”/”in denial”/etc.
While this would give people more knowledge about how their actions turn into consequences, this doesn’t help people decide which consequences they prefer, and so only weakly helps them decide which actions they prefer.
So, let’s drop the term utility, here, and see if that clarifies the moral burden. Suppose Bob and Alice go to a bar and meet; they both apply seduction techniques; they have sex that night. Alice’s interest in Bob increases; Bob’s interest in Alice decreases. What moral burdens are on each of them, and where did those moral burdens come from?
I think it does help if people have pre-existing views about whether they like the internal experience of happiness, mental health, continued life, orgasms and so on, and about whether they can legitimately generalise those views to others. I don’t think I would be making an unreasonable assumption if I assumed that an arbitrarily chosen woman in a bar would most likely have a preference for the internal experience of happiness, mental health, continued life, orgasms and so on and hence that conduct likely to bring about those outcomes for her would produce utility and conduct likely to bring about the opposite would produce negative utility.
There is not enough information to say, and your chosen scenario is possibly not the best one for exploring the ethics of PUA behaviour: firstly it postulates that the female participant is also using seduction techniques (hopefully defined in some more specific sense than just trying to be attractive), and secondly it skips entirely over the ethical questions of approaching someone in the first place and possibly getting them to participate in sex acts they had not planned to engage in. By jumping straight to the next morning and asking what the moral path forward is from that point, this scenario avoids arguably the most important ethical questions about PUA behaviour.
However I will answer the question as posed, to avoid accusations that I am simply avoiding it. From a utilitarian perspective the moral burden is simply to maximise utility, so we need to know what Bob and Alice’s utility functions are, and what each should reasonably believe the other party’s utility function is like.
It might well be that Bob has neither the interest nor the ability to sustain a mutually optimal ongoing relationship with Alice, and in that case the utility-maximising path from that point forward, and hence the ethical option, is for Bob to leave and not contact Alice again. However, if Bob knew in advance that this was the case, and had reason to believe that Alice’s utility function placed a negative value on participating in a one night stand with a person who was not interested in a long-term relationship, then Bob behaved unethically in getting to this position, since he knowingly brought about a negative-utility outcome for a moral stakeholder.
Knowing that her weights on those things are positive gets me nowhere. What I need to know are their relative strengths, and this seems like an issue where (heterosexual) individuals are least poised to be able to generalize their own experience. It seems likely that a man could go through life thinking that everyone enjoys one night stands and sleeps great afterwards, and not until reading PUA literature realizes that women often freak out after them.
Suppose she flirts, or the equivalent (that is, rather than just seeking general attraction, she seeks targeted attraction at some point). If she never expresses any interest, it’s unlikely she and Bob will have sex (outside of obviously unethical scenarios).
What question do you think is most important?
Suppose Bob and Alice both believe that actions reveal preferences.
Suppose Alices enjoy one night stands, and Carols regret one night stands, though they agree to have sex after the first date. When Bob meets a woman, he can’t expect her to honestly respond whether she’s a Carol or an Alice if he asks her directly. What probability does he need that a woman he seduces in a bar will be an Alice for it to be ethical to seduce women in bars?
As well, if he believes that actions reveal preferences, should he expect that one night stands are a net utility gain or loss for Carols?
Hopefully research like that cited in the OP can help with that. In the meantime we have to do the best we can with what we have, and engage in whatever behaviours maximise the expected utility of all stakeholders based on our existing, limited knowledge.
I think the most important question is “Is it ethical to obtain sex by deliberately faking social signals, given what we know of the consequences for both parties of this behaviour?”. A close second would be “Is it ethical to engage in dominance-seeking behaviour in a romantic relationship?”.
One approach would be to multiply the probability that you have an Alice by the positive utility an Alice gets out of a one night stand, multiply the probability that you have a Carol by the negative utility a Carol gets out of a one night stand, and see which figure is larger. That would be the strictly utilitarian approach to the question as proposed.
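As a toy version of that multiplication (every number below is an invented assumption for illustration, not an empirical estimate), the calculation might look like this:

```python
# Strictly utilitarian reading of the Alice/Carol question.
# All numbers are invented assumptions, not empirical estimates.

p_alice = 0.6    # assumed probability the woman is an Alice
u_alice = 5.0    # assumed utility an Alice gets from a one night stand
u_carol = -20.0  # assumed (negative) utility a Carol gets from one

expected_utility = p_alice * u_alice + (1 - p_alice) * u_carol
print(expected_utility)       # -5.0 with these numbers
print(expected_utility >= 0)  # False: impermissible on this reading

# The break-even probability, from p*u_alice + (1 - p)*u_carol = 0,
# answers the earlier question of what probability Bob would need:
p_breakeven = -u_carol / (u_alice - u_carol)
print(p_breakeven)            # 0.8 with these numbers
```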
If we’re allowed to try to get out of the question as proposed, which is poor form in philosophical discussion and smart behaviour in real life, a good utilitarian would try to find ways to differentiate Alices and Carols, and only have one night stands with Alices.
A possible deontological approach would be to say “Ask them if they are an Alice or a Carol, and treat them as the kind of person they present themselves to be. If they lied it’s their fault”.
The crypto-sociopathic approach would be to say “This is all very complicated and confusing, so until someone proves beyond any doubt I’m hurting people I’ll just go on doing what feels good to me”.
“Deliberately faking social signals”? But, but, that barely makes any sense. They are signals. You give the best ones you can. Everybody else knows that you are trying to give the best signals that you can, and so can draw conclusions about your ability to send signals and also about what other signals you will most likely give to them and others in the future. That is more or less what socializing is. I suppose blatant lies in a context where lying isn’t appropriate, and the elaborate creation of false high-status identities, could qualify—but in those cases I would probably use a more specific description.
A third would be “could the majority of humans have a romantic relationship without dominance-seeking behavior?” and the fourth : “would most people find romantic relationships anywhere near as satisfying without dominance-seeking behavior?” (My money is on the “No”s.)
One more question: what principles would help establish how much dominance-seeking behavior is enough to break the relationship, or in some other way cause more damage than it’s worth, considering that part of dominance is ignoring feedback that it’s unwelcome?
Yes, that part is hard, even on a micro scale. I have frequently been surprised at how much I underestimate how much dominance seeking would be optimal. I attribute this to mind-projection, i.e. “This means she would prefer me to do that? Wow. I’d never take that shit if it was directed at me. Hmm… I’m going to do that for her benefit and be sure not to send any signal that I am doing it for compliance. It’s actually kind of fun.”
(Here I do mean actual unambiguous messages—verbal or through blatantly obvious social signalling by the partner. I don’t mean just “some source says that’s what women want”.)
Fortunately we can choose which dominance seeking behaviors to accept and reject at the level of individual behavioral trait. We could also, if it was necessary for a particular relationship, play the role of someone who is ignoring feedback but actually absorb everything and process it in order to form the most useful model of how to navigate the relationship optimally. On the flip side we can signal and screen to avoid dominance seeking behaviors that we particularly don’t want and seek out and naturally reward those that we do want.
Wow, really? How? I make the opposite mistake all the time (at least I think I do) so I’d be interested in hearing some examples.
PUAs have trouble grasping that there is a difference between appearance and reality, which is ironic in some ways. It’s an implicit part of their doctrine that if you can pass yourself off as an “alpha” that you really are an “alpha”, in the sense of being the kind of person that women really do want to mate with.
However it seems obvious to me that the whole PUA strategy is to spoof their external signals in a way they hope will fool women into drawing incorrect conclusions about what is actually going on within the PUA’s mind and what characteristics the PUA is actually bringing to the relationship table. It’s a way for socially awkward nerds to believe they are camouflaging themselves as rough, tough, confident super-studs and helping themselves to reproductive opportunities while so camouflaged.
They excuse this moral failing by saying “Everybody else is doing it, hence it’s okay for me to do it only more so”.
However it’s well-established in general societal morals that obtaining sex by deception is a form of non-violent rape. If you’re having sex with someone knowing that they are ignorant of relevant facts which, if they knew them, would stop them from having sex with you, then you are not having sex with their free and informed consent.
The fact that someone is a PUA using specific PUA techniques to misrepresent their real mind-state seems to me like highly relevant information in relationship decision-making.
Is there proper scientific evidence for this? If not do you acknowledge that this is at least potentially a moral excuse of the same form as “Everyone else is doing it, so it’s okay for me to do it”?
I suspect it would actually turn out that correctly socialised people would prefer and flourish more completely in relationships which are free of dominance games, and I think my naive folk-psychological guesswork is just as good as yours.
I find that those with any significant degree of PUA competence are not particularly inclined to try to excuse themselves to others. Apart from being an unhealthy mindset to be stuck in, it sends all the wrong signals. They would instead block out any hecklers and go about their business. If people try to shame them specifically while they are flirting or socializing they may need to handle the situation actively, but it is almost certainly not going to be with excuses.
Acting confident and suppressing nervousness is not rape.
It is a third and fourth question added to a list. Unless the first two were supposed to be scientific proclamations this doesn’t seem to be an appropriate demand.
No to the “if not” implication—not presenting proper scientific evidence wouldn’t make it an excuse. No to the equivalence of these questions to that form. Most importantly: nothing is an ‘excuse’ unless the person giving it believes they are doing something bad.
I really don’t think naivety is a significant failing of mine.
So far in this conversation those I have mentally labelled pro-PUA have inevitably either introduced scenarios where both parties are using “seduction techniques”, a term I think is dangerous since it conflates honest signalling with spoofed signalling, or claimed (as you did) that the idea of spoofing social signals “barely makes any sense”. I take those arguments to be excusing the act of spoofing social signals on the basis either that all women also spoof their social signals and that two wrongs make a right, or that there is in fact no such thing as social spoofing and hence that PUAs cannot be morally condemned for doing something which does not exist.
In and of itself, it seems to me that at least potentially it is deliberately depriving the target of access to relevant facts that they would wish to know before making a decision whether or not to engage socially, sexually or romantically with the suppressor.
However unless you believe that pick-up targets’ relevant decision-making would be totally unaffected by the knowledge that the person approaching them was a PUA using specific PUA techniques, then concealing that fact from the pick-up target is an attempt to obtain sex without the target’s free and informed consent. If you know fact X, and you know fact X is a potential deal-breaker with regard to their decision whether or not to sleep with you, you have a moral obligation to disclose X.
“In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know.”
Socrates
Edit in response to edit: I was asked what I thought the most important ethical questions were with regard to PUA, and answered that question with two ethical questions. You responded by asking two factual questions of your own, which if answered in the negative would make my second question redundant, and stated that your money (which since you are posting here I took to mean that you have a Bayesian conviction that your answer is more likely to be right than not) was on the answer to those questions being negative.
You must have some basis for that probability estimate. Saying that it’s not an “appropriate demand” to ask for those bases doesn’t solve the problem that without access to your bases we can’t tell if your probability estimate is rational.
It is also a category error to put ethical questions and factual questions in the same bin and argue that because my ethical questions are not “scientific proclamations” you don’t have to provide support for your factual probability estimates.
I certainly wouldn’t say that is true either.
Like what?
It is odd that a reply that is entirely to wedrifid quotes is made in response to NancyLebovitz’s comment, which makes an entirely different point. Did you click the wrong ‘reply’ button?
It looks like I did. Is the correct move in this situation to delete the misplaced post, repost it in the correct spot, and delete this one too?
I would just leave it. No big deal and there are already replies.
This question seems malformed. “Deliberately faking social signals” is vague, but it is typically not something that’s unethical (is it unethical to exaggerate?). “What we know of the consequences” is unclear: what’s our common knowledge?
Yes.
And, of course, you saw the disconnect between your original statement and your new, more correct one.
Right?
The reason I asked that question is because you put forth the claim that Bob’s fault was knowingly causing harm to someone. That’s not the real problem, though: people can ethically and knowingly cause harm to others in a wide variety of situations, under any vaguely reasonable ethical system. Any system Bob has for trying to determine the difference between Alices and Carols will have some chance of failure, and so it’s necessary to use standard risk management, not shut down.
Rhetorical questions are a mechanism that allows us to get out of making declarative statements, and when you find yourself using them that should be an immediate alert signal to yourself that you may be confused or that your premises bear re-examination.
Deceiving others to obtain advantage over them is prima facie unethical in many spheres of life, and I think Kant would say that it is always unethical. Some role-ethicists would argue that when playing roles such as “salesperson”, “advertiser” or “lawyer” that you have a moral license or even obligation to deceive others to obtain advantage but these seem to me like rationalisations rather than coherent arguments from supportable prior principles. Even if you buy that story in the case of lawyers, however, you’d need to make a separate case that romantic relationships are a sphere where deceiving others to obtain advantage is legitimate, as opposed to unethical.
PUA is to a large extent about spoofing social signals, in the attempt to let young, nerdy, white-collar IT workers signal that they have the physical and psychological qualities to lead a prehistoric tribe and bring home meat. The PUA mythology tries to equivocate between spoofing the signals to indicate that you have such qualities and actually having such qualities but I think competent rationalists should be able to keep their eye on the ball too well to fall for that. Consciously and subconsciously women want an outstanding male, not a mediocre one who is spoofing their social signals, and being able to spoof social signals does not make you an outstanding male.
Okay. We come from radically different ethical perspectives such that it may be unlikely that we can achieve a meeting of minds. I feel that dominance-seeking in romantic relationships is a profound betrayal of trust in a sphere where your moral obligations to behave well are most compelling.
Can you point me to the text that you take to be “my original statement” and the text you take to be “my new, more correct statement”? There may be a disconnect but I’m currently unable to tell what text these constructs are pointing to, so I can’t explicate the specific difficulty.
People can ethically and knowingly burn each other to death in a wide variety of situations under any vaguely reasonable ethical system too, so that statement is effectively meaningless. It’s a truly general argument. (Yes, I exclude from reasonableness any moral system that would stop you burning one serial killer to death to prevent them bringing about some arbitrarily awful consequence if there were no better ways to prevent that outcome).
We agree completely on that point, but it seems to me that a substantial subset of PUA practitioners and methodologies are aiming to deliberately increase the risk, not manage it. Their goals are to maximise the percentage of Alices who sleep with the PUA and also to maximise the percentage of Carols who sleep with the PUA.
It doesn’t seem unreasonable to go further and say that in large part the whole point of PUA is to bed Carols. Alices are up for a one night stand anyway, so manipulating them to suspend their usual protective strategies and engage in a one night stand with you would be as pointless as peeling a banana twice. It’s only the Carols who are not normally up for a one night stand that you need to manipulate in the first place. Hence that subset of PUA is all about maximising the risk of doing harm, not minimising that risk.
(Note that these ethical concerns are orthogonal to, not in conflict with, my equally serious methodological concerns about whether it’s rational to think PUA performs better than placebo given the available evidence).
That sounds wrong. I dabbled in pickup a little bit and I would gladly accept a 2x boost in my attractiveness to Alices in exchange for total loss of attractiveness to Carols. If you think success with Alices is easy, I’d guess that either you didn’t try a lot, or you’re extremely attractive and don’t know it :-)
I wasn’t trying to say that bedding an Alice is “easy” full stop, just that if they find you attractive enough you won’t have to get them to lower their usual protective strategies to get them into bed the same night. That follows directly from how we have defined an Alice. Being an Alice doesn’t mean that they can’t be both choosy and in high demand though.
Carols are the ones who, regardless of how attractive they find you, don’t want to end up in bed that night and hence are the ones where the PUA has to specifically work to get them to lower their defences if the PUA wants that outcome.
ETA: This post seems to be getting hammered with downvotes, despite the fact that it’s doing nothing but clearing up a specific point of confusion about what was being expressed in the grandparent. I find that confusing. If the goal is to hide a subthread which is seen as unproductive it would seem more logical to hammer the parent.
You admit it’s not easy, then turn right back around and say it shouldn’t require a lot of effort.
Irrelevant. Is all fair in love?
Are you claiming that all romantic relationships which include the domination of one party by the other betray trust? I think we have differing definitions of dominance or good behavior.
Sure! First statement:
Second statement:
The first statement is judging a decision solely by its outcome; the second statement is judging a decision by its expected value at time of decision-making. The second methodology is closer to correct than the first.
(In the post with the first statement, it was the conclusion of a hypothetical scenario: Bob knew X about Alice, and had sex with her then didn’t contact her. I wasn’t contesting that win-lose outcomes were inferior to win-win outcomes, but was pointing out that the uncertainties involved are significant for any discussion of the subject. There’s no reason to give others autonomy in an omniscient utilitarian framework: just get their utility function and run the numbers for them. In real life, however, autonomy is a major part of any interactions or decision-making, in large part because we cannot have omniscience.)
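A minimal sketch of that distinction, with invented numbers: the same decision can be good ex ante (positive expected value) and bad ex post (a realised loss), and only the ex-ante judgment is available at the time of decision-making.

```python
import random

# Invented gamble: pay 1 for a 50% chance at a prize of 4.
cost, prize, p_win = 1.0, 4.0, 0.5

# Ex-ante judgment: expected value at the time of deciding.
expected_value = p_win * prize - cost
print(expected_value)  # 1.0, so the decision is good ex ante

# Ex-post judgment: the realised outcome of one particular run,
# which may be a loss even though the decision was sound.
random.seed(0)
outcome = (prize if random.random() < p_win else 0.0) - cost
print(outcome)  # -1.0 with this seed: same decision, bad outcome
```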
That does not seem reasonable. Alices may be up for one night stands, but they only have sex with at most one guy a night. The challenge is being that guy.
See, ah, I think I’m against advocating deliberately unethical behavior / defection on LW.
Prude. :P
The question is what ethical standard to use. Whether or not exaggeration is unfair in matters of romance has not been established, and I would argue that exaggeration has a far more entrenched position than radical honesty.
That is, I would argue that not exaggerating your desirability as a mate is defection, rather than cooperation, and defection of the lose-lose variety rather than the win-lose variety.
That’s… not what you said.
There’s a big difference between asserting something is “irrelevant” versus “incorrect” or “unestablished”.
The treatment of ethics in PUA threads makes me somewhat nervous.
What was irrelevant is that deceit is unethical in many spheres of life. If deceit is unethical for a scientist* but ethical for a general, then knowing that deceit is unethical for a scientist is irrelevant if discussing generals.
What has not been established is whether romance is more like science or war. I think the former position is far weaker than the latter.
* I had a hard time coming up with any role in which every form of deceit is unethical, and thus I suppose if I were out for points I would question the correctness of the assertion, rather than merely its relevance. Even for scientists, exaggeration, the original behavior under question, is often ethical.
Let me check… nope, it looks like utilitarian ethics holds that ethical actions are those that maximise positive outcomes (however defined) factoring in the consequences for all stakeholders. I can’t see anything in there excluding actions or outcomes related to sex from the usual sorts of calculations. So I’m going to go ahead and say that the answer is no from a utilitarian perspective.
If we can exclude those cases where one partner or another honestly and explicitly expresses a free, informed and rational preference to be dominated then mostly yes.
(From a utilitarian perspective we have to at least be philosophically open to the idea that a person who is sufficiently bad at managing their utility might be better off being dominated against their will by a sufficiently altruistic dominator. See The Taming of the Shrew or Overboard. Such cases are atypical).
I have located the source of the confusion. What I actually said in the earlier post was this:
“It might well be that Bob has neither the interest nor the ability to sustain a mutually optimal ongoing relationship with Alice, and in that case the utility-maximising path from that point forward, and hence the ethical option, is for Bob to leave and not contact Alice again. However if Bob knew in advance that this was the case, and had reason to believe that Alice’s utility function placed a negative value on participating in a one night stand with a person who was not interested in a long-term relationship, then Bob behaved unethically in getting to this position, since he knowingly brought about a negative-utility outcome for a moral stakeholder.”
I was not judging a situation solely on its outcome, because it was an if/then statement explicitly predicated on Bob knowing in advance that Alice’s utility function would take a major hit.
I guess you just lost track of the context and thought I’d said something I hadn’t. Are we back on the same page together now?
Possibly my recollection is coloured by the recency effect of having skimmed one of Roissy’s blog posts where he specifically singled out for ridicule a female blogger who was expressing regret and confusion after a one night stand. But I am sure I have read PUA materials in the past that had specific sections dedicated to the problem of overcoming the resistance of women who had a preference not to engage in sex on the first/second/nth date, a preference that is certainly not inherently irrational, and one which seems intuitively likely to correlate with a high probability of regretting a one night stand if it does not turn into an ongoing, happy relationship.
Speaking more broadly a stereo salesperson maximises their sales by selling a stereo to every customer who walks in wanting to buy a stereo, and selling a stereo to as many customers as possible who walk in not wanting to buy a stereo. I’m sure they would prefer all their customers to be the first kind but you maximise your income by getting the most out of both. Game-theory-rational PUAs who don’t have Alices on tap, or a reliable way of filtering out Carols, or who just plain find some Carols attractive and want to sleep with them, would out of either necessity or preference have an interest in maximising their per-Carol chances of bedding a Carol.
It should be noted that, from the perspective of a utilitarian agent in certain environments, it may be the utilitarian action to self-modify into a non-utilitarian agent. That is, an unmodified utilitarian agent participating in certain interactions with non-utilitarian agents may create greater utility by self-modifying into a non-utilitarian agent.
(This seems obviously true. I removed the downvote!)
How prevalent do you think those cases are?
Did what you wrote agree with the parenthetical paragraph I wrote explaining my interpretation? If so, we’re on the same page.
Let’s go back to a question I asked a while back that wasn’t answered that is now relevant again, and explore it a little more deeply. What is a utility function? It rank orders actions*. Why do you think stating regret is more indicative of utility than actions taken? If, in the morning, someone claims they prefer X but at night they do ~X, then it seems that it is easier to discount their words than their actions. (An agent who prefers vice at night and virtue during the day is, rather than being inconsistent, trying to get the best of both worlds.)
(As well, Augustine’s prayer is relevant here: “Grant me chastity and continence, but not yet.”)
*Typically, utility functions are computed by assigning values to consequences and then figuring out the expected value of actions, but in order to make practical measurements they have to be considered with regard to actions.
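As a toy sketch of that footnote (the consequences, values and probabilities below are all invented for illustration), assigning values to consequences induces a rank ordering over actions via expected value:

```python
# Invented values over consequences.
values = {"enjoyed_night": 4.0, "morning_regret": -6.0, "quiet_evening": 1.0}

# Each action is a probability distribution over consequences.
actions = {
    "go_home_together": {"enjoyed_night": 0.5, "morning_regret": 0.5},
    "exchange_numbers": {"quiet_evening": 1.0},
}

def expected_utility(dist):
    return sum(p * values[c] for c, p in dist.items())

# The utility function rank-orders the available actions.
ranking = sorted(actions, key=lambda a: expected_utility(actions[a]), reverse=True)
print(ranking)  # ['exchange_numbers', 'go_home_together'] with these numbers
```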
Right. But it’s not clear to me that it’s unethical for a salesman to sell to reluctant buyers. If you consider a third woman, Diana, who does not agree to have sex on the first date, then both of us would agree that having sex with Diana on the first date would be unethical, just like robbing someone and leaving them a stereo in exchange would be unethical. But pursuing Diana would not be, especially if it’s hard to tell the difference between her and Carol (or Alice) at first glance. Both Carols and Alices have an incentive to seem like Dianas while dating (also car-buying, though not stereo-buying), and so this isn’t an easy problem.
It seems odd to me to suggest a utilitarian should act as though Carols are Dianas.
Interesting question! However I think that we’d need to agree on a definition of “dominated” before any estimate would be meaningful. I’m happy to supply my estimate of prevalence for any definition that suits you.
For the definition I had in mind, which might be something like “in a relationship where one partner routinely makes the majority of important decisions on the basis of superior status” I would be surprised if it was below 0.1% or above 5%.
Well no, I wouldn’t agree with that either, but that’s a separate issue. I don’t think it can be philosophically consistent to apply techniques which purportedly manipulate people by spoofing social signals that act on an unconscious level, distorting their sense of time and so forth and then excuse this on the basis that the agent you are manipulating has autonomy. If they had autonomy in the sense that excused you for attempts at manipulation you could not manipulate them, and if you can manipulate them then they lack the kind of strong autonomy that would give you a moral blank cheque.
I think it’s more indicative for a few reasons. Firstly, conclusions made sober, rested and with time to reflect are more reliable than conclusions made drunk, late at night, horny and in the heat of the moment, and both parties to any such decisions know this in advance. Secondly, wishful thinking (which you could also call self-delusion) plays a role: before being, to borrow a phrase from Roissy, “pumped and dumped” by a PUA, a woman might be a victim of a cognitive bias that makes her act as if a long-term relationship with a supportive partner is a possibility, whereas with hindsight this bias is less likely to distort her calculations. Thirdly, the PUA literature that I have read explicitly advocates playing on these factors by not giving the target time to pause and reflect, and by deflecting questions about the future direction of the relationship rather than answering those questions honestly.
I conclude from this that part of PUA strategy is to attempt to manipulate women into making decisions which the PUA knows the women are less likely to make when they are behaving rationally. So not only do I think that stated regret is more indicative of someone’s reflective preferences than their actions the night before in general, but I also think that PUAs know this too.
As always there will be individual exceptions to the general rule.
Considering only the two parties directly involved, the salesperson and the buyer, it seems fairly clear to me that on average reluctant buyers are more likely to regret the purchase, and that transactions in which one party regrets the transaction are win/lose and not win/win.
Being a highly effective salesperson is not seen as unethical conduct in our current society, and that tends to very strongly influence people’s moral judgements, but I think from a utilitarian standpoint salesmanship that goes beyond providing information is obviously ethically questionable once you get past the default socialisation we share that salespersons are a normal part of life.
I’m not completely clear on the Carol/Diana distinction being made here. Could you give me the definitions of these two characters as you were thinking of those definitions at the time you posted the parent?
This. But you forgot “using canine social structure as if it were identical to human social structure.”
My complaint with the whole “alpha” and “beta” terminology is that it doesn’t seem to be derived from canine social structure. The omega rank seems more appropriate to what PUAs call “beta.”
Reading more, it doesn’t seem like any of these terms are accurate even for canine society. They were based on observing unrelated gray wolves kept together in captivity, where their social structures bore little resemblance to their normal groupings in the wild (a breeding pair and their cubs). More accurate terms would be “parents” and “offspring”, which match nicely to human families but aren’t that useful for picking up women in bars.
We hope.
What about just “until someone proves scientifically”?
Even that weaker position still seems incompatible with actually being a utility-maximising agent, since there is prima facie evidence that inducing women to enter into a one-night-stand against their better judgment leads to subsequent distress on the part of the women reasonably often.
A disciple of Bayes and Bentham doesn’t go around causing harm up until someone else shows that it’s scientifically proven that they are causing harm. They do whatever maximises expected utility for all stakeholders based on the best evidence available at the time.
Note that this judgment holds regardless of the relative effectiveness of PUA techniques compared to placebo. Even if PUA is completely useless, which would be surprising given placebo effects alone, it would still be unethical to seek out social transactions that predictably lead to harm for a stakeholder without greater counterbalancing benefits being obtained somehow.
That isn’t a utility maximising agent regardless of whether it demands your ‘proof beyond any doubt’ or just the ‘until someone proves scientifically’. Utility maximising agents shut up and multiply. They use the subjectively objective probabilities and multiply them by the utility of each case.
The utility maximising agent you are talking about is one that you have declared to be a ‘good utilitarian’. It’s maximising everybody’s utility equally. Which also happens to mean that if Bob gains more utility from a one night stand than a Carol loses through self-flagellation, then Bob is morally obliged to seduce her. This is something which I assume you would consider reprehensible. (This is one of the reasons I’m not a good utilitarian. It would disgust me.)
Neither “utility maximiser” nor “good utilitarian” are applause lights which match this proclamation.
(Edited out the last paragraph—it was a claim that was too strong.)
I took it for granted that the disutility experienced by the hypothetical distressed woman is great enough that a utility-maximiser would seek to have one-night-stands only with women who actually enjoyed them.
Given that Bob has the option of creating greater average utility by asking Alices home instead I don’t see this as a problem. What you are saying is true only in a universe where picking up Carol and engaging in a win/lose, marginally-positive-sum interaction with her is the single best thing Bob can do to maximise utility in the universe, and that’s a pretty strange universe.
I also think that PUAs are going to have to justify their actions in utilitarian terms if they are going to do it at all, since I really struggle to see how they could find a deontological or virtue-ethical justification for deceiving people and playing on their cognitive biases to obtain sex without the partner’s fully informed consent. So if the utilitarian justification falls over I think all justifications fall over, although I’m open to alternative arguments on that point.
I don’t think the Weak Gor Hypothesis holds and I don’t think that you maximise a woman’s utility function by treating her the way the misogynistic schools of PUA advocate, but if you did then I would buy PUA as a utility-maximising strategy. I think it’s about the only way I can see any coherent argument being made that PUA is ethical, excluding the warm-and-fuzzy PUA schools mentioned earlier which I already acknowledged as True Scotsmen.
I cannot reconstruct how you are parsing the first sentence so that it contradicts the second, and I’ve just tried very hard.
This seems to be a straw man. I don’t recall ever hearing someone advocating having sex with people that would experience buyer’s remorse over those that would remember the experience positively. That would be a rather absurd position.
Yes, Bob should probably be spending all of his time earning money and gaining power that can be directed to mitigating existential risk. This objection seems to be a distraction from the point. The argument you made is neither utilitarian nor based on maximising utility. That’s ok, moral assertions don’t need to be reframed as utilitarian or utility-maximising. They can be just fine as they are.
If so forgive me—I have not seen a PUA in the wild ever mentioning the issue of differentiating targets on the basis of whether or not being picked up would be psychologically healthy for them, so my provisional belief is that they attached no utility or disutility to the matter of whether the pick-up target would remember the experience positively. Am I wrong on that point?
This is a general argument which, if it worked, would serve to excuse all sorts of suboptimal behaviour. Just because someone isn’t directing all their efforts at existential risk mitigation or relieving the effects of Third World poverty doesn’t mean that they can’t be judged on the basis of whether they are treating other people’s emotional health recklessly.
I don’t see how you get to that reading of what I wrote.
I see this as a perfectly valid utilitarian argument-form: There is prima facie evidence X causes significant harm, hence continuing to do X right up until there is scientifically validated evidence that X causes significant harm is inconsistent with utility maximisation.
There’s a suppressed premise in there, that suppressed premise being “there are easily-available alternatives to X”, but since in the specific case under discussion there are easily-available alternatives to picking women up using PUA techniques I didn’t think it strictly necessary to make that premise explicit.
There are separate, potential deontological objections to PUA behaviour, some of which I have already stated, but I don’t see how you got to the conclusion that this particular argument was deontological in nature.
The goalposts have moved again. But my answer would be yes anyway.
Strictly speaking you moved them first, since I never claimed that anyone was “advocating having sex with people that would experience buyer’s remorse over those that would remember the experience positively” (emphasis on over), as opposed to advocating having sex with people while disregarding the issue of whether that person would experience remorse, which is what I’d seen PUA advocates saying. I just put the goalposts back where they were originally without making an undue fuss about it, since goalposts wander due to imprecisions in communication without any mendacity required.
I think this conversation is suffering, not for the first time, from the fuzziness of the PUA term. It covers AMF and Soporno (who has a name which is unfortunate but memorable, if it is his real name) who do not appear to be advocating exploiting others for one’s personal utility, and it also covers people like Roissy who revel in doing so.
So I think I phrased that last post poorly. I should have made the declarative statement “many but not all of the PUA writers I have viewed encourage reckless or actively malevolent behaviour with regard to the emotional wellbeing of potential sexual partners, and I think those people are bad utilitarians (and also bad people by almost any deontological or virtue-ethical standard). People who are members of the PUA set who do not do this are not the intended target of this particular criticism”.
The thing is, some (granted, not all) of what falls under PUA or “apply seduction techniques” falls unambiguously into the category of dark arts.
I find it hard to believe that we want to argue that, “Dark arts are bad, except when they can get you laid.”
Dark arts AREN’T bad in general! Nor is avadakadavraing anyone that you would have shot with a gun anyway.
Ah. I prefer not to argue “dark arts are bad,” rather “dark arts do not illuminate.” Tautologies have the virtue of being true.
(Put flippantly, sex is sometimes easier with the lights off.)
I was using “dark arts” here in the more narrow sense of “techniques designed to subvert the rationality of others by exploiting cognitive biases.” I’m not speaking of being an effective flirt, or wearing flattering makeup and clothing. The sort of things I had in mind are, to take a mild example, bringing a slightly less attractive “wingman” to make oneself look more attractive than one would alone, or to take a serious example, whisking a woman from bar to bar to create the illusion of longer-term acquaintance. I see this as wrong for essentially the same reason that spiking someone’s drink is wrong if they wouldn’t sleep with you sober.
To oversimplify somewhat, I tend to see society as divided into three groups: those who don’t generally aspire to rationality (the majority of the population), those who want to share the bounty of rationality to help others overcome their biases (Lesswrong), and those who would instead use their knowledge of rationality to exploit people in the first group. I acknowledge that I am more confused by the current negative karma of my grandparent than by that of any other comment I have ever made on this site.
My observation is that most of the posts I have made that criticised PUA or PUA-associated beliefs have been voted down very quickly, but then they have bounced back up over the next day or so such that the overall karma delta is highly positive. One hypothesis that explains it is that there are a certain number of people reviewing this thread at short intervals who are downvoting posts critical of PUA, but that they are not the plurality of posters reviewing this thread.
ETA: Update on this. Posts critical of PUA ideology that are concealed from the main thread either by being voted to −3 or below, or by being a descendant of such, get voted into the ground, and as far as I can see this effect is largely insensitive to the intellectual value or lack thereof of the post. I hypothesise that the general LW readership doesn’t bother drilling down to see what’s going on in those subthreads and hence their opinions are not reflected in the vote count, while PUA-enthusiasts who vote along ideological lines do bother to drill down.
Posts critical of PUA that are well-written, logical, pertinent and visible to the general readership are voted up, overall.
One explanation is that the first to read your messages are those you responded to, who are those most likely to note any poorness of fit between what they said and what they are alleged or implied to have said or believed.
I’m shocked that it didn’t stay below 0. Forget any point it was trying to make about dating—it sends totally the wrong message about ‘lesswrong’ attitudes towards ‘dark arts’!
So, this gets at something that frequently confuses me when people start talking about personal utilities.
It seems that if I can reliably elicit the strength of my preferences for X and Y, and reliably predict how a given action will modify the X and Y in my environment, then I can reliably determine whether to perform that action, all else being equal. That seems just as true for X = “my happiness” and Y = “my partner’s happiness” as it is for X = “hot fudge” and Y = “peppermint”.
But you seem to be suggesting that that isn’t true… that in the first case, even if I know the strengths of my preferences for X and Y and how various possible actions lead to X and Y, there’s still another step (“adding the utilities”) that I have to perform before I can decide what actions to perform. Do I understand you right?
If so, can you say more about what exactly that step entails? That is… what is it you don’t know how to do here, and why do you want to do it?
You’re missing four letters. Call the strength of your preferences for X and Y A and B, and call your partner’s preferences for X and Y C and D. (This assumes that you and your partner both agree on your happiness measurements.)
I agree there’s a choice among available actions which maximizes AX+BY, and that there’s another choice that maximizes CX+DY. What I think is questionable is ascribing meaning to (A+C)X+(B+D)Y.
Notice there are an infinite number of A,B pairs that output the same action, and an infinite number of C,D pairs that output the same action, but when you put them together your choice of A,B and C,D pairs matters. What scaling to choose is also a point of contention, since it can alter actions.
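A small numerical sketch of that scaling problem (all weights below are invented): rescaling one person’s pair leaves their own best action unchanged, but flips which action the naive sum (A+C)X+(B+D)Y selects.

```python
# Two available actions, each yielding amounts of goods X and Y.
actions = {"act1": (3.0, 1.0), "act2": (1.0, 3.0)}

def best(a, b):
    """Return the action maximising a*X + b*Y."""
    return max(actions, key=lambda k: a * actions[k][0] + b * actions[k][1])

A, B = 3.0, 1.0  # my weights: best(A, B) == "act1"
C, D = 1.0, 2.0  # partner's weights: best(C, D) == "act2"

# Rescaling (C, D) by a positive constant never changes the partner's own choice...
assert best(C, D) == best(10 * C, 10 * D) == "act2"

# ...but it does change what the summed weights select, so the sum
# (A+C)X + (B+D)Y depends on an arbitrary choice of scale.
print(best(A + C, B + D))            # act1
print(best(A + 10 * C, B + 10 * D))  # act2
```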
So, we’re assuming here that there’s no problem comparing A and B, which means these valuations are normalized relative to some individual scale. The problem, as you say, is with the scaling factor between individuals. So it seems I end up with something like (AX + BY + FCX + FDY), where F is the value of my partner’s preferences relative to mine. Yes?
And as you say, there’s an infinite number of Fs and my choice of action depends on which F I pick.
And we’re rejecting the idea that F is simply the strength of my preference for my partner’s satisfaction. If that were the case, there’d be no problem calculating a result… though of course no guarantee that my partner and I would calculate the same result. Yes?
If so, I agree that coming up with a correct value for F sure does seem like an intractable, and quite likely incoherent, problem.
Going back to the original statement… “an ethical rationalist’s goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties” seems to be saying F should approximate 1. Which is arbitrary, admittedly.
Yes. If you and your partner agree- that is, A/B=C/D- then there’s no trouble. If you disagree, though, there’s no objectively correct way to determine the correct action.
Possibly, though many cases with F=1 seem like things PhilosophyTutor would find unethical. It seems more meaningful to look at A and B.
You make a very good point here. But you see, women don’t find men who try to be nice to them attractive. They call it “clingy”, “creepy” behavior. Human male-female interaction is actually a signalling game, where the man being nice simply sends a signal of weakness. Women are genetically programmed to only let alpha sperm in, and the alpha is not a character who goes around being nice to strangers.
Think about the effect on her inclusive genetic fitness if she bears the child of a nice-guy who tries to maximize other people’s utility before his own, versus having the child of an alpha who puts himself first and likes to impregnate lots of women.
And let me disclaim: I don’t like it that the world is this way, I don’t morally support the programming that evolution has given to women. But I accept it and work within its bounds.
Perhaps one day we will reprogram ourselves? Maybe transhuman love will be of a different kind. But in human love, the heart is not heart shaped, it is shaped like a clenched fist.
Oversimplified to the extent that it is basically not true.
Your comment would be more useful if you said in which ways it is oversimplified, and which additions and caveats you think are most important to restore it to being true.
And yet I would bet that it is still closer to true than I approve of. In particular, closer to true than the mental model used by the naive “nice guy”/”beta”.
This is starting to remind me of what happened to nutritional advice in the 1980s:
In nutrition, “complex carbohydrates good! fats bad!” was widely promulgated.
In dating, “niceness/agreeableness good! alpha behavior bad!” was widely promulgated in about the same time frame—and it looks like it was comparably bad advice...
Well, no. I’ve received quite a bit of help and favors from men who didn’t seem creepy or clingy, and have found a few creepy who weren’t being helpful. I don’t think my experience is unusual.
One of the big reasons that LW is unable to be rational about pickup is that we have a small group of vocal and highly non-average women here who take any comment which is supposed to be a useful observation about the mental behavior of the median young attractive woman to be about THEM IN PARTICULAR.
You, NancyLebovitz, are not the kind of woman that PU is aimed at. You do not go to night clubs regularly. You do not read gossip magazines and follow celebrity lifestyles, you do not obsess about makeup. You post on weird rationality websites. You are not the median young, attractive woman. And that goes for Alicorn too.
Even amongst the set of IQ + 1 sigma women you are almost certainly highly nontypical.
Comments about female psychology are not directed at you, they are not about you, your personal experience of YOUR OWN reactions are not meant to be well described by pick-up theory.
I do not mean this in a negative way. I mean you no offence; in fact you should take it as a compliment in the context of intelligence and rationality. I am merely making an epistemological point.
The next time I make a comment about PU, I will carefully disclaim that PU is primarily designed to analyse the average psychology of just one particular kind of woman: namely relatively young, culturally-western, hetero- or bi- sexual and relatively attractive.
Especially important since major and well-respected proponents of PUA around here do not assume this premise, and in fact it is generally assumed that there are different areas of PUA that will help people of particular sex/gender/sexual orientation accomplish varying sorts of goals.
PU may well apply (to a certain extent) to almost all pre-menopausal hetero/bi women, but the case is much more clear cut for women who are also relatively young, culturally-western, hetero- or bi- sexual and relatively attractive, because that’s the subgroup of women where extensive field-testing of the concepts has been done.
PUA is a large field with many different subfields and schools of thought. There are those who aim for one-night-stands at bars, and those who aim to find the particular soulmate they’ve been searching for. There is PUA writing from the perspective of homosexuals, both men and women, teens, older folks, and all sorts of different perspectives.
If you think there is just one set of techniques in the field and they are only applicable to a small subset of humanity, then you’re not very familiar with PUA and should stop making blanket assertions about the field.
The definition of “pick up artist” from wikipedia is:
So if we are indeed referring to the same thing by the phrase, then I think that I am correct in saying that
There have been small offshoots into “girl game” and some guys focus more on older women, and I am explicitly not denying that there are results and facts there. But the core of the concept, the VAST majority of the field testing and online material is about quickly seducing “women who are relatively young, culturally-western, hetero- or bi- sexual and relatively attractive”
It certainly looks like you are:
Maybe you forgot a ‘not’ in there somewhere?
It sounds like you’re making a strawman out of your own arguments. You made blanket statements about how this is a bad and misleading article because it ignores the truth about how women respond to men. When people pointed out that this is not true of particular women, you amended it to refer just to the vast majority of women, and now you’re amending it further to only apply to a particular goal regarding a minority of women.
So the takeaway from your arguments seems to be that you should not follow the advice given in the above post, in the case that you have a very specific goal with respect to a relatively small group of women.
If that is what you meant to say, then yes you needed to be specific about what special circumstance you thought the post doesn’t apply to. It is not particularly surprising that the advice given in the post only works for most people with most goals.
This goes too far. The vast majority of men are heterosexual, gender-normal, and the vast majority of those are most attracted to women who are not:
post-menopause/50+
ugly
lesbian (i.e. not attracted to men)
Pickup is popular because it tells men how to attract precisely those women who they desire most.
You left out:
Which was apparently important to your case above.
It’s an interesting claim, though I’m not buying it, and it is anyway irrelevant to my earlier claim.
Most people are not heterosexual, gender-normal men who are most attracted to women with none of those qualities. And most relationship goals are not seducing such people. And most people do not have that goal.
Probably ~40% of people are heterosexual, gender-normal men who are most attracted to women who are young and straight.
It seems like you are using weasel words to describe the goal of ~40% of the people on the planet as a “very specific goal”.
Let me put it another way. On a website with a strong majority heterosexual male readership, the article fails to mention what I think is the definitive body of knowledge to improve the dating lives of heterosexual men. You then criticize me because, of all people on the planet, just under half are heterosexual males, almost all of whom (surprise) like young, attractive, straight women; you use weasel words saying that my point is for a “very specific goal”, when in fact probably ~60-80% of people reading this site have the goal of attracting/keeping a young, attractive, hetero/bi woman.
TBH, I feel that you, and LW in general, are trying to use pedantry/weasel words/motivated cognition to close your eyes to the truth about attraction between men and women. Perhaps there is some subset of people here who want to know, but I feel that if I mention the subject I will end up arguing against some form of denial/motivated cognition, rather than discussing the subject in the spirit of a collaborative enquiry to get at the truth.
Theists comprise a much larger percentage of the global population than 40%, but that doesn’t mean we’d consider a goal like “being closer to God” to be particularly important or worthy of discussion here.
Just FYI, some of us hate pro-PUA rants as much as we hate anti-PUA rants. Actually, I hate the pro-PUA rants more, because they do more harm than good.
Telling people they’re closing their eyes to the truth is not a rational method of persuasion in any environment, and certainly not here.
If you learned half as much from PUA as you think you have, you should have learned that if you want to catch fish, then don’t think like a fisherman, think like a fish.
In this discussion, you are not thinking like a fish.
Like the saying goes, you catch more flies with fly pheromones...
Also note that I am just as pedantic when I’m talking about a subject that I like, and I’m sure people would back me up on this. Maybe I should step up the pedantry in general to make that clearer, to avoid this sort of accusation.
And nowhere here did I say something like “PUA should not be discussed” or “PUA is incorrect about its subject matter” or even “The particular sub-branch of PUA you have in mind is incorrect or useless”. Indeed, I think rational inquiry into relationships is a noble goal and often cite PUA as a rare area of discourse where beliefs are tested against the world in rapid iteration.
Rather, I was annoyed that you were making patently false claims and then when people called you on it you acted like they were doing something wrong. If you want to assert falsehoods, please do it elsewhere.
I don’t think that means what you think it means.
Ceteris paribus, I would regard pedantry as evidence of a bias in favor of truth-seeking, not in the opposite direction. I’m surprised you think otherwise.
I find this hard to believe. As of the last survey only 33% are “single and looking”. If we combine that with the 24.2% that were “in a relationship”, assume they were all polyamorous, and that all of both groups were men, we still do not approach the lower bound of your estimate. It fails a basic sanity check.
I would assert that most people here would benefit more from attracting vastly atypical partners, and we are mostly outliers in more ways than one, so your generalizations are even less helpful here than in the world at large. But that belief is irrelevant to my above statements.
ETA: bad sanity check.
You excluded ‘married’ from the check, which is the only thing that allows your “sanity failure” assertion to stand. This is either an error or disingenuous. ‘Married’ applies for the same reason ‘in a relationship’ applies. 24% are single but not looking, not the 57% that you suggest. The “all polyamorous” assumption is not needed given that keeping was included.
Agreed. I was not considering “attracting” and “keeping” as separate states; rather, I read it as “attracting or (attracting and keeping)” which clearly was not warranted. So if we assume everyone not “single but not looking” was male and interested in the sorts of things mentioned above, that’s 76%, which while still a stretch falls well within the range above.
Listen carefully to what I said, Thomblake:
One must distinguish carefully between the set of women for which I (in a Bayesian sense) believe PU would apply to, versus the set of women for which I am stably highly confident that it applies to because of overwhelming field-testing.
Indeed, saying that “PU may well apply (to a certain extent) to almost all pre-menopausal hetero/bi women” does not logically entail that I think it doesn’t apply to post-menopausal women or lesbians etc. Personally I have no clue about lesbian attraction, and very little about how to attract post-menopausal women, so I make no claim in particular.
As I’ve pretty much argued before, people could escape the majority of needless wasteful friction if they were just willing to use words like “average” and/or “median” when that’s indeed what they mean instead of “all”.
You could have said “average women” from the start. I’m not talking about “careful” disclaimers here—I’m just talking about the single word “average”, which by itself would have vastly improved your comment. And yet you didn’t choose to have that word. Why? Was one word so costly to you?
Or was rudeness and stereotyping intentionally being signalled here in a “Alphas don’t bother with politeness, that’s submissive behaviour” sort-of-thing?
Surely you mean
“the average person could escape the majority of needless wasteful tension if they were just willing to use words … ”
since I am sure there is some person out there who overuses “average” when they really mean “all”, yes? And yet you didn’t choose to have that word. Why? Was one word so costly to you?
No, I’m sure I wasn’t talking about average people, I was talking about people collectively. If I added the word “all” it would be closer to my meaning than if I had added the word “average”.
But I guess I was right in my estimation about the intentionality of the signals you were giving, as you’re now reinforcing them.
Assuming for the sake of argument that women are sentient, but also that they have absolutely no free will when it comes to sexual relationships and that they can be piloted like a remote-controlled drone by a man who has cracked the human sexual signalling language (a hypothesis only slightly more extreme than the PUA hypothesis), that would still leave us with the question of how to maximise the utility of these strange, mindless creatures given that they are sentient and their utility counts as much as any other sentient being’s.
PUA might be compatible with this if you assume that just by chance the real utility function of the human female just happens to be maximised by the behaviour which maximises the utility of the PUA, which is to say that you maximise the utility of all human females by having a one night stand with them if you find them physically attractive but not inclined to be subservient, and a longer-term relationship with them under some circumstances if you want regular sex and you can manage the relationship so that you are dominant. (We could call this the Weak Gor Hypothesis).
However this has not been demonstrated, and it might turn out that in some cases women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and “romantic” gestures and so forth. If that was the case then ethically some weight would have to be given to these sources of utility, and it would be ethically questionable to talk down such behaviour as “beta” since it would have turned out that the alpha/beta distinction did not match up with a real distinction between utility-maximising and non-utility-maximising behaviour in all cases.
LOL. Given that IRL Goreans (male and female) exist, someone who wants that sort of thing needn’t try converting anyone from the general dating pool.
I’ve paraphrased your comment to make it gender neutral and preference-neutral.
The thing is, what maximizes our happiness isn’t always what’s predictably enjoyable. (See prospect theory, fun theory, liking vs. wanting, variable reinforcement...) Excitement and variety are very often the spice of life.
Frankly, having a partner who does nothing but worship you is both annoying and unattractive… even though it might sound like a good idea on paper. (For one thing, you can feel pressured to reciprocate.)
I’m reminded of Eliezer’s “fun theory” posts about the evolution of concepts of heaven: that if you’re a poor farmer then no work to do and streets paved with gold sounds like heaven to you, but once you actually got there, it’d be bloody boring.
In the same way, a lot of romantic ideals for relationships sound like heaven only when you haven’t actually gotten there yet.
I think we need to be careful of false dichotomies and straw men, since so much of PUA doctrine/knowledge/dogma (pick your preferred term) is communicated in the form of dichotomies, which I suspect are false to at least a significant extent.
The possibility I advanced was that “women are happier if they are communicated with honestly, treated as equal partners in a relationship, given signals that they are high-status in the form of compliments and “romantic” gestures and so forth”. This does not seem to me to be the same thing as saying that women are happier with “a partner who does nothing but worship [them]”, although I can see how if you were trained to see relationships in terms of the PUA alpha/beta dichotomy it might seem to be the same thing to you. Most obviously treating someone as an equal partner is inconsistent with doing nothing but worshipping that person.
You are also asserting without evidence that the kind of relationship I just described would not be fun if you were actually in one, which seems to me to contain an implicit status attack, since it assumes that I have never been in such a relationship and hence that I am speaking from a position of epistemological disadvantage compared to yourself.
Would I be far wrong if I guessed that your data set for this implicit assumption is based on interacting with a significant number of PUAs? If so the underlying problem may well simply be self-selection bias. The kind of people who have long-term relationships based on honesty, equality and support are probably unlikely to self-select for participation in PUA forums and hence their experiences and viewpoints will be under-represented in those circles compared to their prevalence in the general population.
Actually, it’s my observation that men who consciously make an effort to do what you said, actually end up doing what I said, from the point of view of the people they interact with.
That is, they are poorly calibrated and overshoot the mark. (Been there, did that.)
Hm. Sorry—the important piece left out of my explicit reasoning is above: i.e., that people who think they are “communicating honestly”, etc. usually end up doing something completely different; it’s the absence of that failure mode which I implicitly assumed you’d experienced… and which is AFAICT a less common experience for men (with no implied connotations about status), if for no other reason than that women are on average better socially calibrated than men.
Yes, you would. ;-)
Data point: I have been married for 15 years and would not classify myself as a PUA in any sense, although based on what statistics I’ve read about men in general, I would have to consider myself to have had above-average sexual success (though not drastically so) before I got married—largely due to behaviors PUAs would’ve described as social game, direct game, and qualifying. (However, the terms didn’t exist at the time, as far as I know—this was pre-internet for the most part.)
At no time was a lack of honesty, equality, or support part of what I did or sought, so I’m not sure why you think they are anathema to PUA goals.
PUA literature, like so many other things, is largely what you make of it. When I look at it, I find the parts that are positive, life-affirming, and utility-increasing for everybody involved. So your objections look to me like strawman attacks.
One thing I have observed is that once I’ve read the parts of PUA theory that sound good (i.e., more politically correct), I find that on reading the less politically-correct things, they are actually advocating similar behaviors, and simply describing them differently. Some use more inflammatory and controversial language laced with all sorts of negative judgments about men and women; others emphasize empathy and helping men to see things from women’s point of view (without an added heap of patronizing the women in the process).
And yet, when it comes right down to it, they’re still saying to do the same things; it’s only the connotations of their speech that are different.
IOW, ISTM that you are arguing with the misogynistic connotations of some fragment of PUA theory that you’ve encountered; I disagree because the connotations are AFAICT superfluous to functional PUA advice, having had the opportunity to compare misogynistically-connotated and non-misogynistically-connotated descriptions of the same thing.
This is something that PUA and self-help in general have in common, btw: they are best read in such a way as to completely disregard connotation, judgment, and theory, in favor of simply extracting as directly as possible what precise behaviors are being recommended and what predictions are being made regarding the outcomes of those behaviors. Only after determining whether the behavior produces the predicted result, is it worth exploring (or refuting) the advocate’s theories about “how” or “why” it works.
Case in point: “The Secret” and other “law of attraction” stuff, much of which turns out to be scientifically valid, if (and only if) you completely ignore the nutty theories and focus on behavior and predictions. Richard Wiseman’s research into “luck theory” demonstrates that the behaviors and attitudes recommended by certain “law of attraction” proponents actually do make you luckier, by increasing the probability that you will notice and exploit serendipitous positive opportunities in your environment.
If Wiseman had simply dismissed “The Secret” as another nutty new-age misinterpretation of physics, that research couldn’t have been done. I suggest that if you seriously intend to research PUA (as opposed to making what seem to me like strawman arguments against it), you follow Wiseman’s example, and break down whatever you read into concrete behaviors and outcome predictions, minus any theories or political connotations of theories.
I think your position is going to turn out to be unfalsifiable on the point of whether relationships involving honesty, equality and mutual support actually exist. If your response to claims that they exist is to say “Well in my experience they don’t exist, the people who think they do are just deluded” I can’t provide any evidence that will change your views. After all, I could just be deluded.
As for whether I’m engaging with, and have read, the “real” PUA literature or the “good” PUA literature, I’m not sure whether or not this is an instance of the No True Scotsman argument. There’s no question that a large part of the PUA literature and community are misogynist and committed to an ideology that positions themselves as high-status and women and non-PUA men as low-status. As such that part of PUA culture is antithetical to the goals of LW as I understand them since those goals include maximising everyone’s utility.
If there’s a subset of positive-utility PUA thinking then that criticism does not apply and it’s at least possible that if they have scientific data to back up their claims then there is something useful to be found there.
I think it’s the PUA advocates’ burden of proof to show us that data though, if there really is an elephant of good data pertinent to pursuing high net-utility outcomes in the room, as opposed to some truisms which predate PUA culture by a very long time, hidden under an encrustation of placebo superstitions.
Huh? I didn’t say those things didn’t exist. I said I was not searching for a lack of those things (I even bolded the word “lack” so you wouldn’t miss it), and that I don’t see why you think that PUA requires such a lack.
Authentic Man Program and Johnny Soporno are the two schools I’m aware of that are strongly in the honesty and empowerment camps, AFAICT, and would constitute the closest things to “true scotsmen” for me. Most other things that I’ve seen have been a bit of a mixed bag, in that both empathetic and judgmental material (or honest and dishonest) can both be found in the same set of teachings.
Of notable interest to LW-ers, those two schools don’t advocate even the token dishonesty of false premises for starting a conversation, let alone dishonesty regarding anything more important than that.
(Now, if you want to say that these schools aren’t really PUA, then you’re going to be the one making a No True Scotsman argument. ;-) )
As I said, I’m less interested in “scientific” evidence than Bayesian evidence. The latter can be disappointingly orthogonal to the former, in that what’s generally good scientific evidence isn’t always good Bayesian evidence, and good Bayesian evidence isn’t always considered scientific.
More to the point, if your goals are more instrumental than epistemic, the reason why a particular thing works is of far less interest than whether it works and how it can be utilized.
I took a quick look at AMP and Soporno’s web sites and I’m more than happy to accept them as non-misogynistic dating advice sources aiming for mutually beneficial relationships. I wasn’t previously aware of them but I unconditionally accept them as True Scotsmen.
I’m now interested in how useful their advice is, either in instrumental or epistemic terms. Either would be significant, but if there is no hard evidence then the fact that their intentions are in step with those of LW doesn’t get them a free pass if they don’t have sound methodology behind their claims.
I’m aware Eliezer thinks there’s a difference between scientific evidence and Bayesian evidence but it’s my view that this is because he has a slightly unsophisticated understanding of what science is. My own view is that the sole difference between the two is that science commands you to suspend judgment until the null hypothesis can be rejected at p<0.05, at least for the purposes of what is allowed into the scientific canon as provisional fact, and Bayesians are more comfortable making bets with greater degrees of uncertainty.
Regardless, if your goals are genuinely instrumental you very much want to figure out what parts of the effect are due to placebo effects and what parts are due to real effects, so you can maximise your beneficial outcomes with a minimum of effort. If PUA is effective to some extent but solely due to placebo effects then it only merits a tiny footnote in a rationalist approach to relationships. If it has effects beyond placebo effects then and only then is there something interesting for rationalists to look at.
There is a word for the problem that results from this way of thinking about instrumental advice. It’s called “akrasia”. ;-)
Again, if you could get people to do things without taking into consideration the various quirks and design flaws of the human brain (from our perspective), then self-help books would be little more than to-do lists.
In general, when I see somebody worrying about placebo effects in instrumental fields affected by motivation, I tend to assume that they are either:
1) Inhumanly successful and akrasia-free at all their chosen goals (not bloody likely),
2) Not actually interested in the goal being discussed, having already solved it to their satisfaction (a la skinny people accusing fat people of lacking willpower), or
3) Very interested in the goal, but not actually doing anything about it, and thus very much in need of a reason to discount their lack of action by pointing to the lack of “scientifically” validated advice as their excuse for why they’re not doing that much.
Perhaps you can suggest a fourth alternative? ;-)
I’d prefer not to discuss this at the ad hominem level. You can assume for the sake of argument whichever of those three assumptions you prefer is correct, if it suits you. I’m indifferent to your choice—it makes no difference to my utility. I make no assumptions about why you hold the views you do.
My view is that the rationalist approach is to take it apart to see how it works, and then maybe afterwards put the bits that actually work back together with a dollop of motivating placebo effect on top.
The best way to approach research into helping overweight people lose weight is to study human biochemistry and motivation, and see what combinations of each work best. Not to leave the two areas thoroughly entangled and dismiss those interested in disentangling them as having the wrong motivations. I think the same goes for forming and maintaining romantic relationships.
Me either. I was asking you for a fourth alternative on the presumption that you might have one.
FWIW, I don’t consider any of those alternatives somehow bad, nor is my intention to use the classification to score some sort of points. People who fall into category 3 are of particular interest to me, however, because they’re people who can potentially be helped by understanding what it is they’re doing.
To put it another way, it wasn’t a rhetorical question, but one of information. If you fall in category 1 or 2, we have little further to discuss, but that’s okay. If you fall in category 3, I’d like to help you out of it. If you fall in an as-yet-to-be-seen category 4, then I get to learn something.
So, win, win, win, win, in all four cases.
This is conflating things a bit: my reference to weight loss was pointing out that “universal” weight-loss advice doesn’t really exist, so a rationalist seeking to lose weight must personally test alternatives, if he or she cannot afford to wait for science to figure out the One True Theory of Weight Loss.
This presupposes that you already have something that works, which you will not have unless you first test something. Even if you are only testing scientifically-validated principles, you must still find which are applicable to your individual situation and goals!
Heck, medical science uses different treatments for different kinds of cancer, and occasionally different treatments for the same kind of cancer, depending on the situation or the actual results on an individual. Does this mean that medical science is irrational? If not, then pointing a finger at the variety of situation-specific PUA advice is just rhetoric, masquerading as reasoning.
I imagine you’d put me in category #2 as I’m currently in a happy long-term relationship. However my self-model says that three years ago when I was single and looking for a partner that I would still want to know what the actual facts about the universe were, so I’d put myself in category #4, the category of people for whom it’s reflexive to ask what the suitably blinded, suitably controlled evidence says whether or not they personally have a problem at that point in their lives with achieving relevant goals.
I think we should worry about placebo effects everywhere they get in the way of finding out how the universe actually works, whether they happen to be in instrumental fields affected by motivation or somewhere else entirely.
That didn’t mean that I chose celibacy until the peer-reviewed literature could show me an optimised mate-finding strategy, of course, but it does mean that I don’t pretend that guesswork based on my experience is a substitute for proper science.
The difference between your PUA example and medicine is that medicine usually has relevant evidence for every single one of those medical decisions. (Evidence-based medicine has not yet driven the folklore out of the hospital by a long chalk but the remaining pockets of irrationality are a Very Bad Thing). Engineers use different materials for different jobs, and photographers use different lenses for different shots too. I don’t see how the fact that these people do situation-specific things gets you to the conclusion that because PUAs are doing situation-specific things too they must be right.
It doesn’t. It just refutes your earlier rhetorical conflation of PUA with alternative medicine on the same grounds.
At this point, I’m rather tired of you continually reframing my positions to stronger positions, which you can then show are fallacies.
I’m not saying you’re doing it on purpose (you could just be misunderstanding me, after all), but you’ve been doing it a lot, and it’s really lowering the signal-to-noise ratio. Also, you appear to disagree with some of LW’s premises about what “rationality” is. So, I don’t think continued discussion along these lines is likely to be very productive.
My intent was to show that in the absence of hard evidence PUA has the same epistemic claim on us as any other genre of folklore or folk-psychology, which is to say not much.
I admit I’m struggling to understand what your positions actually are, since you are asking me questions about my motivations and accusing me of “rhetoric, not reasoning” but not telling me what you believe to be true and why you believe it to be true. Or to put it another way, I don’t believe you have given me much actual signal to work with, and hence there is a very distinct limit to how much relevant signal I can send back to you.
Maybe we should reboot this conversation and start with you telling me what you believe about PUA and why you believe it?
Ok. I’ll hang in here for a bit, since you seem sincere.
Here’s one belief: PUA literature contains a fairly large number of useful, verifiable, observational predictions about the nonverbal aspects of interactions occurring between men and women while they are becoming acquainted and/or attracted.
Why do I believe this? Because their observational predictions match personal experiences I had prior to encountering the PUA literature. This suggests to me that when it comes to concrete behavioral observations, PUAs are reasonably well-calibrated.
For that reason, I view such PUA literature—where and only where it focuses on such concrete behavioral observations—as being relatively high quality sources of raw observational data.
In this, I find PUA literature to be actually better than the majority of general self-help and personal development material, as there is often nowhere near enough in the way of raw data or experiential-level observation in self-help books.
Of course, the limitation on my statements is the precise definition of “PUA literature”, as there’s definitely a selection effect going on. I tend to ignore PUA material that is excessively misogynistic on its face, simply because extracting the underlying raw data is too… tedious, let’s say. ;-) I also tend to ignore stuff that doesn’t seem to have any connection to concrete observations.
So, my definition of “PUA literature” is thus somewhat circular: I believe good stuff is good, having carefully selected which bits to label “good”. ;-)
Another aspect of my possible selection bias is that I don’t actually read PUA literature in order to do PUA!
I read PUA literature because of its relevance to topics such as confidence, fear, perceptions of self-worth, and other more common “self-help” topics that are of interest to me or to my customers. By comparison, PUA literature (again using my self-selected subset) contains much better raw data than traditional self-help books, because it comes from people who’ve relentlessly calibrated their observations against a harder goal than just, say, “feeling confident”.
The problem with this line of reasoning is that there are people who believe they have relentlessly calibrated their observations against reality using high quality sources of raw observational data and that as a result they have a system that lets them win at Roulette. (Barring high-tech means to track the ball’s vector or to identify an unbalanced wheel.)
Roulette seems to be an apt comparison because, based on the figures someone else quoted or linked to earlier about a celebrated PUAist hitting on 10,000 women and getting 300 of them into bed, the odds of a celebrated PUAist getting laid on a single approach, even according to their own claims, are about 3% (300/10,000), which is not far off the 1/37 ≈ 2.7% odds of correctly predicting exactly which hole a European roulette ball will land in.
So when these people say “I tried a new approach where I flip flopped, be-bopped, body rocked, negged, nigged, nugged and nogged, then went for the Dutch Rudder and I believe this worked well” unless they tried this on a really large number of women so that they could detect changes in a base rate of 3% success I really don’t think they have any meaningful evidence. Did their success rate go up from 3% to 4% or what, and what are their error bars?
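To put numbers on the error-bars question, here is a minimal power-calculation sketch in Python (my own illustration, assuming the standard normal-approximation formula for comparing two independent proportions; all parameters are illustrative):

```python
# How many approaches per condition would be needed to reliably detect
# a success rate rising from 3% to 4%?  Two-sided alpha = 0.05,
# power = 0.80; normal approximation for two proportions.
from math import sqrt
from scipy.stats import norm

p1, p2 = 0.03, 0.04              # baseline vs. hypothesised success rate
z_a = norm.ppf(1 - 0.05 / 2)     # ~1.96
z_b = norm.ppf(0.80)             # ~0.84

p_bar = (p1 + p2) / 2
n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
      + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
print(round(n))  # ~5300 approaches per condition
```

On those assumptions a lone PUA would need on the order of five thousand approaches per technique before a one-percentage-point improvement rose above the noise.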
What’s the base rate for people not using PUA techniques anyway? People other than PUAs are presumably getting laid, so it’s got to be non-zero. The closer it is to 3% the less effect PUA techniques are likely to have.
I’ve already heard the response “Look, we don’t get just one bit of data as feedback. We PUAs get all sorts of nuanced feedback about what works and does not”. If that’s so and this feedback is doing some good, this should be reflected in your hit rate for getting laid. If picking up women and getting them into bed is an unfair metric for PUA effectiveness, I really think it should be called something other than PUA.
My thinking is that you don’t have enough data to distinguish whether you are in a world where PUA training has a measurable effect, from a world where PUA have an unfalsifiable mythology that allows them to explain their hits and misses to themselves, and a collection of superstitions about what works and does not, but no actual knowledge that separates them in terms of success rate from those who simply scrub up, dress up and ask a bunch of women out.
I want to see that null hypothesis satisfactorily falsified before I allow that there is an elephant in the room.
Once again, you are misstating my claims.
Notice that nowhere in my post did I say pickup artists get laid, let alone that they get laid more often!
Nowhere did I state anything about their predictions of what behavior works to get laid!
I even explicitly pointed out that the information I’m most interested in obtaining from PUA literature has nothing to do with getting laid!
So just by talking about the subject of getting laid, you demonstrate a complete failure to address what I actually wrote, vs. what you appear to have imagined I wrote.
So, please re-read what I actually wrote and respond only to what I actually wrote, if you’d like me to continue to engage in this discussion.
Okay. What observable outcomes do you think you can obtain at better-than-base-rate frequencies employing these supposed insights, and why do you think you can obtain them?
As I said earlier I think that if PUA insights cannot be cashed out in a demonstrable improvement in the one statistic which you would think would matter most to them, rate of getting laid, then there is grounds to question whether these supposed insights are of any use to anyone.
But if you would prefer to use some other metric I’m willing to look at the evidence.
Guesswork based on your experience isn’t supposed to be a substitute for science. It’s the part of science that you do when choosing which phenomena you want to test, well before you get to the blinding and peer review.
The flip side is that proper science isn’t a substitute for either instrumental rationality or epistemic rationality. Limiting your understanding of the world entirely to what is already published in journals gives you a model of the world that is subjectively objectively wrong.
I don’t disagree but a potentially interesting research area isn’t an elephant in the room that demands attention in a literature review, and limiting yourself to proper science is no sin in a literature review either. Only when the lessons we can learn from proper science are exhausted should we start casting about in the folklore for interesting research areas, and we certainly shouldn’t put much weight on anecdotes from this folklore. In Bayesian terms such anecdotes should shift our prior probability very, very slightly if at all.
No ad hominem fallacy present in grandparent.
Why don’t you first describe one, then the other, then contrast them? Then, describe Eliezer’s view and contrast that with your position.
I’ll try to do it briefly, but it will be a bit tight. Let’s see how we go.
Bayes’ Theorem is part of the scientific toolbox. Pick up a first year statistics textbook and it will be in there, although not always under that name (look for “conditional probability” or similar constructs). Most of scientific methodology is about ensuring that you do your Bayesian updating right, by correctly establishing the base rate and the probability of your observations given the null hypothesis. (Scientists don’t state their P(A), but they certainly have an informal sense of what P(A) is likely to be and are more inclined to question a conclusion if it is unlikely than if it is likely).
If you’re doing Bayes right it’s the same as doing science, but I think some of the LW groupthink holds that you can do a valid Bayesian update in the absence of a rigorously established base rate, and so they think this is a difference between being a good Bayesian and being a good scientist. I think they are just being bad Bayesians since updating is no better than guesswork in the absence of a rigorously obtained P(B).
Eliezer (based on The Dilemma: Science or Bayes?) doesn’t quite carve up science-culture from ideal-science-methodology the way I do, and infers that there is something wrong with Science because the culture doesn’t care about revising instrumentally-indistinguishable models to make them more Eliezer-intuitive. I think this has more to do with trying to win a status war with Science than with any differences in predicted observations that matter.
That doesn’t mean it doesn’t underlie the entire structure. As an analogy, to get from New York to Miami, one must generally go south. But instructions on how to get there will be a hodgepodge of walk north out of the building, west to the car, drive due east, then turn south... the plane takes off headed east... and turns south... etc. Showing that going south is one of several ways to turn while walking doesn’t mean it’s not conceptually different from north for getting from New York to Miami. Similarly:
If one is paid to do plumbing, then there is no difference between being a good plumber and a “good Bayesian”, and in that sense there is no difference between being a “good Bayesian” and a “good scientist”.
In the sense in which it is intended, there is a difference between being a “good Bayesian” and a “good scientist”. To continue the analogy, if one must go from Ramsey to JFK airport across the Tappan Zee Bridge, one’s route will be on a convoluted path to a bridge that’s in a monstrously inconvenient location. It was built there—at great additional expense as that is where the river is widest—to be just outside of the NY/NJ Port Authority’s jurisdiction. The best route from Ramsey to Miami may be that way, but that accommodates human failings, and is not the direct route. Likewise for every movement that is made in a direction not as the crow flies. Bayesian laws are the standard by which the crow flies, against which it makes sense to compare the inferior standards that better suit our personal and organizational deficiencies.
Well, yes and no. It’s adequately suited for the accumulation of not-false beliefs, but it both could be better instrumentally designed for humans and is not the bedrock of thinking by which anything works. The thing that is essential to the method you described is that “Scientists...have an informal sense of what P(A) is likely to be and are more inclined to question a conclusion if it is unlikely than if it is likely”. What abstraction describes the scientist’s thought process, the engine within the scientific method? I suggest it is Bayesian reasoning, but even if it is not, one thing it cannot be is more of the Scientific method, as that would lead to recursion. If it is not Bayesian reasoning, then there are some things I am wrong about, and Bayesianism is a failed complete explanation, and the Scientific method is half of a quite adequate method—but they are still different from each other.
By Bayes’ Rule, a lower P(B|~A) means a higher P(A|B), other things being equal, so the direction is right—that’s why we can make planes that don’t fall out of the sky. But just using P(B|~A) isn’t what’s done, because scientists interject their subjective expectations here and pretend they do not. P(B|~A) doesn’t contain whether or not a researcher would have published something had she found a two-tail rather than one-tail test—a complaint about a paper I read just a few hours ago. What goes into a p-value necessarily involves the arbitrary classes the scientist has decided evidence would fit in; it measures his or her surprise at the class of evidence that is found. That’s not P(B|~A), it’s P(C|~A).
Do you have examples of boundary cases that distinguish a rigorously established base rate from one that isn’t?
If one believes in qualitatively different beliefs, the rigorous and the non-rigorous, one falls into paradoxes such as the lottery paradox. It’s important to establish the actual nature of knowledge as probabilistic, and not be tricked into thinking science is a separate non-overlapping magisteria with other things.
With such actually correct understanding of how beliefs should work, we can think about improving our thinking rather than eternally and in vain trying to smooth out a ripple in a rug that has a table on each of its corners, hoping our mistaken view of the world has few harmful implications like “Jesus Christ is God’s only son” and not “life begins at conception”.
Or, we could not act on our most coherent world-views, only acting according to whatever fragment of thought our non-coherent attention presents to us. Not appealing.
Thank you for saying my point better than I was able to.
I don’t think scientists think about it much. That’s more the sort of thing philosophers of science think about. The smarter scientists do what is essentially Bayesian updating, although very few of them would actually put a number on their prior and calculate their posterior based on a surprising p value. They just know that it takes a lot of very good evidence to overturn a well-established theory, and not so much evidence to establish a new claim consistent with the existing scientific knowledge.
Stating your hypothesis beforehand and specifying exactly what will and will not count as evidence before you collect your data is a very good way of minimising the effect of your own biases, but naughty scientists can and do take the opportunity to cook the experiment by strategically choosing what will count as evidence. Still, overall it’s better than letting scientists pore over the entrails of their experimental results and make up a hypothesis after the fact. If a great new hypothesis comes out of the data then you have to do your legwork and do a whole new experiment to test the new hypothesis, and that’s how it should be. If the effect is real it will keep. The universe won’t change on you.
It’s not a binary distinction. Rather, if you’re unaware of the ways that people’s P(B) estimates can be wildly inaccurate and think that your naive P(B) estimates are likely to be accurate then you can update into all sorts of stupid and factually false beliefs even if you’re an otherwise perfect Bayesian.
The people who think that John Edward can talk to dead people might well be perfect Bayesians who just haven’t checked to see what the probability is that John Edward could produce the effects he produces in a world where he can’t talk to dead people. If you think the things he does are improbable then it’s technically correct to update to a greater belief in the hypothesis that he can channel dead people. It’s only if you know that his results are exactly what you’d expect in a world where he’s a fake that you can do the correct thing, which is to leave your prior belief that the probability that he’s a fake is 99.99...9% unchanged.
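A toy version of that update makes the point concrete (the numbers here are purely illustrative, not estimates of anything):

```python
# Toy Bayes update for the John Edward case.
prior_fake = 0.9999  # illustrative prior that he's a fake

def posterior_fake(p_hits_if_fake, p_hits_if_psychic):
    # P(fake | hits) by Bayes' Theorem
    p_hits = (p_hits_if_fake * prior_fake
              + p_hits_if_psychic * (1 - prior_fake))
    return p_hits_if_fake * prior_fake / p_hits

# If his hits are exactly what a fake would produce, nothing changes:
print(posterior_fake(0.80, 0.80))   # 0.9999 -- posterior equals prior
# If you wrongly think hits are improbable for a fake, you update
# toward "psychic" even though nothing surprising happened:
print(posterior_fake(0.01, 0.99))   # ~0.990 -- belief in fakery drops
```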
If someone’s done some actual work to see if they can falsify the null hypothesis that PUA techniques are indistinguishable from a change of clothes, a comb, a shower and asking some women out, I’d be interested in seeing it. In the absence of such work I think good Bayesians have to recognise that they don’t have a P(B) with small enough error bars to be very useful.
Exactly, it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing. So not “If you’re doing Bayes right it’s the same as doing science”, where “science” is an imperfect human construct designed to accommodate the more biased of scientists.
These are costs. It’s important, and in some contexts cheap, to know why and how things work instead of saying “I’ll ignore that since enough replication always solves such problems,” when one doesn’t know in which cases one is doing nearly pointless extra work and in which one isn’t doing enough replication. It’s an obviously sub-optimal solution along the lines of “thinking isn’t important; assume infinite resources.”
It’s praise through faint damnation of the laws of logic that they don’t prevent one from shooting one’s own foot off. Handcuffs are even better at that task, but they are less useful for figuring out what is true.
Exactly, so in “some of the LW groupthink holds that you can do a valid Bayesian update in the absence of a rigorously established base rate,” they are right, and “updating is no better than guesswork in the absence of a rigorously obtained P(B),” is not always true, such as when the following condition doesn’t apply, and it doesn’t here:
What do you think this site is for? People are reading and sharing research papers about biases in their free time. One could likewise criticize jet fuel for being inappropriate for an old fashioned coal powered locomotive. Yes, jet fuel will explode a train...this is not a flaw of jet fuel, and it does not mean that the coal-train is better at transporting things.
That’s not the claim in question.
In any case, there are better ways to think about this subject than with null hypotheses. Those are social constructs focused (decently) on preventing belief in untrue things, rather than on determining what’s most likely true. Here false beliefs have relatively less cost than in most of science, and will in any case only be held probabilistically.
There’s a very good reason why we do double-blind, placebo-controlled trials rather than just recruiting a bunch of people who browse LW to do experiments with, on the basis that since LWers are “trained in debiasing” they are immune to wishful thinking, confirmation bias, the experimenter effect, the placebo effect and so on.
I have a great deal more faith in methodological constructs that make it impossible for bias to have an effect than in people’s claims to “debiased” status.
Don’t get me wrong, I think that training in avoiding cognitive biases is very important because there are lots of important things we do where we don’t have the luxury of specifying our hypotheses in strictly instrumental terms beforehand, collecting data via suitably blinded proxies and analysing it just in terms of our initial hypothesis.
However my view is that if you think that scientific methodology is just a set of training wheels for people who haven’t clicked on all the sequences yet and that browsing LW makes you immune to the problems that scientific methodology exists specifically to prevent then it’s highly likely you overestimate your resistance to bias.
There’s also a cost to acting on the assumption that every correlation is meaningful in a world where we have so much data available to us that we can find arbitrarily large numbers of spurious correlations at P<0.01 if we try hard enough. Either way you’re spending resources, but spending resources in the cause of epistemological purity is okay with me. Spending resources on junk because you are not practising the correct purification rituals is not.
The accepted scientific methodology is more like a safety rope or seat belt. Sometimes annoying, almost always rational.
Rather than what a site is for I focus on what a site is.
In many, many ways this site has higher quality discourse than, say, the JREF forums and a population who on average are better versed in cognitive biases. However this discussion has made it obvious to me that on average the JREF forumites are far more aware than the LWers of the various ways that people’s estimates of P(B) can be wrong and can be manipulated.
They would never put it in those terms since Bayes is a closed book to them, but they are very well aware that you can work yourself into completely wrong positions if you aren’t sophisticated enough to correctly estimate the actual base rate at which one would expect to observe things like homeopathy apparently working, people apparently talking to the dead, people apparently having psychic powers, NLP apparently letting you seduce people and so on in worlds where none of these things did anything except act as placebos (at best).
If your P(B) is off then using Bayes’ Theorem is just being a mathematically precise idiot instead of an imprecise idiot. You’ll get to exactly the right degree of misguided belief, based on the degree to which you’re mistaken about the correct value of P(B), but that’s still far worse than being someone who wouldn’t know Bayes from a bar of soap but who intuitively perceives something closer to the correct P(B).
The idea that LW browsers think they are liquid-fuelled jets while the scientists who do the actual work of moving society forward are boring old coal trains worries me. I think of LW’s “researchers” as a bunch of enthusiastic amateurs with cheap compasses and hand-drawn maps running around in the bushes in a mildly organised fashion, while scientists are painstakingly and one inch at a time building a gigantic sixteen-lane highway for us all to drive down.
Yes, and people who actually understand the tradeoffs in using formal scientific reasoning and its deviations from the laws of reasoning are the only people in position to intelligently determine that. Those who say “always use the scientific method for important things” or, though I don’t know that there ever has been or ever will be such a person, “always recruit a bunch of people who browse LW,” are not thinking any more than a broken clock is ticking. As an analogy, coal trains are superior to jet planes for transporting millions of bushels of wheat from Alberta to Toronto. It would be inane and disingenuous for broken records always calling for the use of coal trains to either proclaim their greater efficiency in determining which vehicle to use to transport things because they got the wheat case right or pretend that they have a monopoly on calling for the use of trains.
With reasoning, one can intelligently determine a situation’s particulars and spend to eliminate a bias (for example by making a study double-blind) rather than doing that all the time or relying on skill in a given case, and without relying on intuition to determine when. One can see that in an area the costs of thinking something true when it isn’t exceed the costs of thinking it’s false when it’s true, and set up correspondingly strict protocols, rather than blindly always paying in true things not believed, time, and money for the same, sometimes inadequate and sometimes excessive, amount of skepticism.
My view is that if you think anyone who has interacted with you in this thread has that view you have poor reading comprehension skills.
So one can simply...not do that. And be a perfectly good Bayesian.
It is not the case that every expenditure reducing the likelihood that something is wrong is optimal, as one could instead spend a bit on determining which areas ought to have extra expenditure reducing the likelihood that something is wrong there.
In any case, science has enshrined a particular few levels of spending on junk that it declares perfectly fine because the “correct” purification rituals have been done. I do not think that such spending on junk is justified because in those cases no, science is not strict enough. One can declare a set of arbitrary standards and declare spending according to them correct and ideologically pure or similar, but as one is spending fungible resources towards research goals this is spurious morality.
Amazing, let me try one. If a Bayesian reasoner is hit by a meteor and put into a coma, he is worse off than a non-Bayesian who stayed indoors playing Xbox games and was not hit by a meteor. So we see that Bayesian reasoning is not sufficient to confer immortality and transcendence into a godlike being made of pure energy.
People on this site are well aware that if scientific studies following the same rules as the rest of science indicate that people have psychic powers, there’s something wrong with the scientific method and the scientists’ understanding of it because the notion that people have psychic powers are bullshit.
People here know that there is not some ineffable magic making science the right method in the laboratory and faith the right method in church, or science the right method in the laboratory and love the right method everywhere else, science the right method everywhere and always, etc., as would have been in accordance with people’s intuitions.
How unsurprising it is that actually understanding the benefits and drawbacks of science leads one to conclude that often science is not strict enough, and often too strict, and sometimes but rarely entirely inappropriate when used, and sometimes but rarely unused when it should be used, when heretofore everything was decided by boggling intuition.
Grammar nitpick: should be “is bullshit,” referring to the singular “notion.”
I’m not going to get into a status competition with you over who is in a position to determine what.
The most obvious interpretation of your statement that science is “an imperfect human construct designed to accommodate the more biased of scientists” and that “it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing” is that you think your LW expertise means that you wouldn’t need those safeguards. If I misinterpreted you I think it’s forgivable given your wording, but if I misinterpreted you then please help me out in understanding what you actually meant.
I’m responding under the assumption that the second “scientific” should read “psychic”. My point was not that people here didn’t get that—I imagine they all do. My point is that the evidence on the table to support PUA theories is vulnerable to all the same problems as the evidence supporting claimed psychic powers, and that when it came to this slightly harder problem some people here seemed to think that the evidence on the table for PUA was actually evidence we would not expect to see in a world where PUA was placebo plus superstition.
I think the JREF community would take one sniff of PUA and say “Looks like a scam based on a placebo”, and that they would be better Bayesians when they did so than anyone who looks at the same evidence and says “Seems legit!”.
(I suspect that the truth is that PUA has a small non-placebo effect, since we live in a universe with ample evidence that advertising and salesmanship have small non-placebo effects that are statistically significant if you get a big enough sample size. However I also suspect that PUAs have no idea which bits of PUA are the efficacious bits and which are superstition, and that they could achieve the modest gains possible much faster if they knew which was which).
OK, I will phrase it in different terms that make it explicit that I am making several claims here (one about what Bayesianism can determine, and one about what science can determine). It’s much like I said above:
Some people claim Bayesian reasoning models intelligent agents’ learning about their environments, and agents’ deviations from it is failure to learn optimally. This model encompasses choosing when to use something like the scientific method and deciding when it is optimal to label beliefs not as “X% likely to be true, 1-X% likely to be untrue,” but rather “Good enough to rely on by virtue of being satisfactorily likely to be true,” and “Not good enough to rely on by virtue of being satisfactorily likely to be true”. If Bayesianism is wrong, and it may be, it’s wrong.
The scientific method is a somewhat diverse set of particular labeling systems declaring ideas “Good enough to rely on by virtue of being satisfactorily likely to be true,” and “Not good enough to rely on by virtue of being satisfactorily likely to be true.” Not only is the scientific method incomplete by virtue of using a black-box reasoning method inside of it, it doesn’t even claim to be able to adjudicate between circumstances in which it is to be used and in which it is not to be used. It is necessarily incomplete. Scientists’ reliance on intuition to decide when to use it and when not to may well be better than using Bayesian reasoning, particularly if Bayesianism is false, I grant that. But the scientific method doesn’t, correct me if I am wrong, purport to be able to formally decide whether or not a person should subject his or her religious beliefs to it.
I disagree but here is a good example of where Bayesians can apply heuristics that aren’t first-order applications of Bayes’ rule. The failure mode of the heuristic is also easier to see than where science is accused of being too strict (though that’s really only a part of the total claim; the other parts are that science isn’t strict enough, that it isn’t near Pareto optimal according to its own tradeoffs in which it sacrifices truth, and that it is unfortunately taken as magical by its practitioners).
In those circumstances in which the Bayesian objection to science is that it is too strict, science can reply by ignoring that money is the unit of caring and declaring its ideological purity and willingness to always sacrifice resources for greater certainty (such as when the sacrifice is withholding FDA approval of a drug already approved in Europe): “Either way you’re spending resources, but spending resources in the cause of epistemological purity is okay with me. Spending resources on junk because you are not practising the correct purification rituals is not.”
Here, however, the heuristic is “reading charitably”, in which the dangers of excess are really, really obvious. Nonetheless, even if I am wrong about what the best interpretation is, the extra-Bayesian ritual of reading (more) charitably would have had you thinking it more likely than you did that I had meant something more reasonable (and even more so, responding as if I did). It is logically possible that you were reading charitably ideally and my wording was simply terrible. This is a good example of how one can use heuristics other than Bayes’ rule once one discovers one is a human and therefore subject to bias. One can weigh the costs and benefits of it just like each feature of scientific testing.
For “an imperfect human construct designed to accommodate the more biased of scientists”, it would hardly do to assume scientists are all equally biased, and likewise for assuming the construct is optimal no matter the extent of bias in scientists. So the present situation could be improved upon by matching the social restrictions to the bias of scientists and also decreasing that bias. If mostly science isn’t strict enough, then perhaps it should be stricter in general (in many ways it should be), but the last thing to expect is that it is perfectly calibrated. It’s “imperfect”; I wouldn’t describe a rain dance as an “imperfect” method to get rain, it would be an “entirely useless” method. Science is “imperfect”: it does very well to the extent thinking is warped to accommodate the more biased of scientists, and so something slightly different would be more optimal for the less biased ones.
“...it’s a cost and a deviation from ideal thinking to minimize the influence of scientists who receive no training in debiasing,” and less cost would be called for if they received such training, but not zero. Also, it is important to know that costs are incurred, lest evangelical pastors everywhere be correct when they declare science a “faith”. Science is roughly designed to prevent false things from being called “true” at the expense of true things not being called “true”. This currently occurs to different degrees in different sciences, and it should, and some of those areas should be stricter, and some should be less strict, and in all cases people shouldn’t be misled about belief such that they think there is a qualitative difference between a rigorously established base rate and one not so established, or between science and predicting one’s child’s sickness when it vomits a certain color in the middle of the night.
It’s not too similar, since psychic powers have been found in controlled scientific studies, and they are (less than infinitely, but nearly) certainly not real. PUA theories were formed from people’s observations; then people developed ideas they thought followed from the theories, then tested what they thought were the ideas, and tested them insufficiently rigorously. Each such idea is barely more likely than the base rate to be correct due to all the failure nodes, but each is more likely, the way barely enriched uranium’s particles are more likely to be U-235 than natural uranium’s are. This is in line with “However I also suspect that PUAs have no idea which bits of PUA are the efficacious bits and which are superstition, and that they could achieve the modest gains possible much faster if they knew which was which”.
When it comes to action, as in psychological experiments in which one is paid a fixed amount for correctly guessing whether something is red or blue, and one determines that 60% of the things are red, one should always guess red; one should act upon the ideas most likely to be true if one must act, all else being equal.
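The arithmetic behind always guessing red, using the 60% figure from the example (a standard probability-matching illustration):

```python
# Always guessing the majority color beats "probability matching"
# your guesses to the observed frequencies.
p_red = 0.6
always_red = p_red                        # expected accuracy: 0.60
matching = p_red ** 2 + (1 - p_red) ** 2  # 0.6*0.6 + 0.4*0.4 = 0.52
print(always_red, matching)
```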
Any chance of turning this (and some of your other comments) into a top-level post? (perhaps something like, “When You Can (And Can’t) Do Better Than Science”?)
Yes.
I think the first section should ignore the philosophy of science and cover the science of science, the sociology of it, and concede the sharpshooter’s fallacy, assuming that whatever science does it is trying to do. The task of improving upon the method is then not too normative, since one can simply achieve the same results with fewer resources/better results with the same resources. Also, that way science can’t blame perceived deficiencies on the methods of philosophy, as it could were one to evaluate science according to philosophy’s methods and standards. This section would be the biggest added piece of value that isn’t tying together things already on this site.
A section should look for edges with only one labeled node in the scientific methods where science requires input from a mystery method, such as how scientists generate hypotheses or how scientific revolutions occur. These show the incompleteness of the scientific method as a means to acquire knowledge, even if it is perfect at what it does. Formalization and improvement of the mystery methods would contribute to the scientific method, even if nothing formal within the model changes.
A section should discuss how science isn’t a single method (according to just about everybody), but instead a family of similar methods varying especially among fields. This weakens any claim idealizing science in general, as at most one could claim that a particular field’s method is ideal for human thought and discovery. Assuming each (or most) fields’ methods are ideal (this is the least convenient possible world for the critic of the scientific method as practiced), the costs and benefits of using that method rather than a related scientific method can be speculated upon. I expect to find, as policy debates should not be one sided, that were a field to use other fields’ methods it would have advantages and disadvantages; the simple case is choice of stricter p-value modulating wrong things believed at the expense of true things not believed.
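For the p-value case just mentioned, a minimal sketch of that trade-off (illustrative effect size and sample size, standard normal-approximation power formula):

```python
# A stricter significance threshold buys fewer false positives at the
# cost of more false negatives (lower power), all else held fixed.
from math import sqrt
from scipy.stats import norm

effect, n = 0.3, 50            # true standardized effect; sample size
se = 1 / sqrt(n)
for alpha in (0.05, 0.01, 0.001):
    z_crit = norm.ppf(1 - alpha / 2)
    power = 1 - norm.cdf(z_crit - effect / se)
    print(alpha, round(power, 2))   # power: 0.56, 0.32, 0.12
```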
Sections should discuss abuses of statistics, one covering violations of the law (failing to actually test P(B|~A) and instead testing P((B + (some random stuff) - (some other random stuff))|~A)) and another covering systemic failures such as publication bias and failure to publish replications. This would be a good place to introduce intra-scientific debates about such things, to show both that science isn’t a monolithic outlook that can be supported and how one side in the civil war is aligned with Bayesian critiques. To the extent science is not settled on what the sociology of science is, that is a mark of weakness—it may be perfectly calibrated, but it isn’t too discriminatory here.
A concession I imagine pro-science people might make is to concede the weakness of soft science, such as sociology. Nonetheless, sociology’s scientific method is deeply related to the hard sciences’, and its shortcomings somewhat implicate them. What’s more, if sociology is so weak, one wonders whence the pro-science person gets their strong pro-science view. One possibility is that they get it purely from philosophy of science (a school of which they wholly endorse), but if that is the case they don’t have an objection in kind to those who also predict that science as-is works decently but have severe criticisms of it and ideas on how to improve upon it, i.e. Bayesians.
I think it’s fair to contrast the scientific view of science with a philosophical view of Bayesianism to see if they are of the same scope. If science has no position on whether or not science is an approximation of Bayesian reasoning, and Bayesianism does, that is at least one question addressed by the one and not the other. It would be easy to invent a method that’s not useful for finding truth that has a broader scope than science, e.g. answering “yes” to every yes or no question unless it would contradict a previous response. This alone would show they are not synonymous.
A problem with the title “When You Can (And Can’t) Do Better Than Science” is that it is binary, but I really want three things explicitly expressed: 1) When you can do better than science by being stricter than science, 2) when you can do better than science by being more lenient than science, 3) when you can’t do better than science. The equivocation and slipperiness surrounding what it is reasonable to do is a significant part of the last category, e.g. one doesn’t drive where the Tappan Zee Bridge should have been built. The other part is near-perfect ways science operates now according to a reasonable use of “can’t”; I wouldn’t expect science to be absolutely and exactly perfect anywhere, any more than I can be absolutely sure with a probability of 1 that the Flying Spaghetti Monster doesn’t exist.
Second order Bayesianism deserves mention as the thing being advocated. A “good Bayesian” may use heuristics to counteract bias other than just Bayes’ rule, such as the principle of charity, or pretending things are magic to counteract the effort heuristic, or reciting a large number of variably sized numbers to counteract the anchoring effect, etc.
Is there a better analogy than the driving to the airport one for why Bayes’ Rule being part of the scientific toolbox doesn’t show the scientific toolbox isn’t a rough approximation of how to apply Bayes’ Rule? The other one I thought of is light’s exhibiting quantum behavior directly, it being a subset of all that is physical, but all that is physical actually embodying quantum behavior.
A significant confusion is discussing beliefs as if they weren’t probabilistic and actions in some domains as if they ought not be influenced by anything not in a category of true belief “scientifically established”. Bayesianism explains why this is a useful approximation of how one should actually act and thereby permits one to deviate from it without having to claim something like “science doesn’t work”.
Thoughts?
Not necessarily to reopen anything, but some notes:
I’m not sure it’s at all possible to debias against this.
I agree that those are better metaphors than handcuffs all else equal, but those things would not prevent one from shooting one’s foot, and so it didn’t fit the broader metaphor.
A better analogy would be a law that no medical treatment can be received until a second opinion is obtained, or something like that.
Are you familiar with Michael Polanyi’s Personal Knowledge?
His view is only slightly stricter than yours, yet he arrives at some very different conclusions. For example, under your framework Rhine’s ESP experiments are scientific hypothesis tests, while under his they are illogical. I am not convinced by Polanyi, but it is far from clear to me how you could show he is wrong. If you know how to show he is wrong and could explain that in a couple of paragraphs (or point me to such a document), I would be very interested in reading it.
I’m not familiar with his work, unfortunately.
However, a quote from one of the reviews concerns me. The reviewer says:
If that’s Polanyi’s position, it seems both kooky and not immediately relevant to the topic. So unless you can take a shot at explaining which of Polanyi’s insights you think are relevant to the topic at hand, I think we should drop this and take it up elsewhere, or by other means, if you want to discuss it further.
What are some examples of good scientific evidence that isn’t good Bayesian evidence?
Uh, how about all of parapsychology, aka “the control group for the scientific method”. ;-) Psi experiments can reach p < .05 under conventional methods without being good Bayesian evidence, as we’ve seen recently with that “future priming” psi experiment.
(Note that I said “scientific” not Scientific. ;-) )
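To make that concrete, here is a rough sketch in which every number is an assumption, showing why a p < .05 result can leave a Bayesian almost unmoved:

    prior_psi = 1e-6           # assumed prior probability that psi is real
    p_sig_given_psi = 0.5      # assumed power: P(significant result | psi)
    p_sig_given_no_psi = 0.05  # false-positive rate at the .05 threshold

    likelihood_ratio = p_sig_given_psi / p_sig_given_no_psi  # = 10
    prior_odds = prior_psi / (1 - prior_psi)
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds / (1 + posterior_odds))  # ~1e-5: barely moved

And that is before publication bias shrinks the effective likelihood ratio further.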
Ok, I wouldn’t necessarily have classed that as ‘good scientific evidence’, but it seems to be useful Bayesian evidence, so we must be looking at it from different angles.
If they see this behavior from a stranger, they hate it like a bad smell. Yuck.
If they see a lot of it in a relationship, they begin to lose attraction for him, and in the end hate him and cheat on him.
By the way, have you studied game theory? A man who always gives you treats and compliments is signalling his own low value; therefore his treats and compliments are devalued. Yes?
My personal belief is that female utility is maximized by a man who is alpha, who leads them rather than treating them as an equal, who keeps them “on their toes” by flirting with other chicks, but who occasionally surprises them with a big romantic gesture like a surprise weekend break, champagne on ice, hot sex in the penthouse suite. But he doesn’t do it all the time; his rewards are unpredictable. This is in line with what game theory would predict.
Note that “utility” is not the same thing as “sexual pleasure”.
Perhaps the reason you’re being downvoted is that you’re confusing game theory with behaviorism. Variable reinforcement schedules, and all that.
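For what it’s worth, the behaviorist concept is easy to simulate; this sketch, with invented parameters, is a random-ratio approximation of a variable-ratio schedule, and nothing in it is game theory:

    import random

    # Reward each response with probability 1/mean_ratio, approximating a
    # variable-ratio schedule: reward timing is unpredictable, which is
    # what makes such schedules resistant to extinction.
    def variable_ratio(n_responses, mean_ratio=4):
        return [random.random() < 1 / mean_ratio for _ in range(n_responses)]

    random.seed(0)
    print(variable_ratio(20))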
Also, I expect if you phrased the last part of your comment, say, as:
“People enjoy a little variety and unpredictability from their partners, and generally prefer not to have to come up with all the ideas for what to do.”
It’d be less likely to be perceived as some sort of chauvinism. That statement, as it happens, is true of both men and women.
(Likewise, the first part of your comment describes things that men do in response to women’s behavior, despite your writing it as if it were unique to women’s response to men.)
Finding ourselves with the ability to reflect on how our instinctual behaviors and preferences derive from inclusive genetic fitness necessitates neither fully accepting nor fully rejecting those preferences.
I understand that, in seeking a romantic partner, there are qualities I value above those determined by the blind idiot god. One of these qualities is, reflectively, the ability to rationally self-determine one’s preferences, to the extent that such a thing is possible.
I liken my understanding to the fable of the oak and the reed. I prefer, and indeed expect, potential romantic partners to signal appropriate … fertility, in a reductive sense. Likewise, I exhibit desirable behavioral cues (actually, much of the alpha male mentality is worthwhile in itself): confidence, leadership, non-neediness, etc. In neither case (hopefully) are these the qualities that are primarily desired; they are merely the minimum threshold that our biology imposes on such endeavors.
Is finding a partner with such an understanding realistic, or even possible? Yes, to an extent. It is a very unfortunate fact of our society that females aren’t socialized in a way that facilitates rationality, relative to males; a scarcity which makes such an individual that much more appealing. I have met some, and dated a very few, of these. I’m still optimistic.
Absolutely. Just to be clear, I never said, and in fact explicitly disclaimed, the former. I agree 100%.