Truth is entangled, and who gets to mate with whom is one of the biggest truths in human social interaction—because mating behavior is very strongly selected by evolution. If you close your mind to the truths about human mating behavior, you’ll mess up your entire map of human social interaction.
If we are going to develop rationality to the point where we see an increase in uptake of rational thinking by millions of people, we can’t just ignore massively important parts of real-world human behavior.
I have a question, since you seem to know a lot about human sociality. What exactly is wrong with handling the dilemmas you describe by saying to the other humans, “I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable ‘liking you’ region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.”?
Saying this explicitly is extremely weak evidence of it being true. In fact, because it sounds pre-prepared, comprehensive, and calculated, most humans won’t believe you. Human courtship rituals are basically ways of signaling all of this, but are much harder to fake.
When human females ask “Will you buy me a drink?” they’re testing to see if the male does in fact “demand appropriate consideration”.
Also, relative status and genetic fitness are extremely important in human coupling decisions and your statement does not sufficiently cover those.
Let X be ‘I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable ‘liking you’ region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.′
Then, instead of saying my previous suggestion, say something like, ‘I would precommit to acting in such a way that X if and only if you would precommit to acting in such a way that you could truthfully say, “X if and only if you would precommit to acting in such a way that you could truthfully say X.”’
(Edit: Note, if you haven’t already, that the above is just a special case of the decision theory, “I would adhere to rule system R if and only if (You would adhere to R if and only if I would adhere to R).” )
Wouldn’t the mere ability to recognize such a symmetric decision theory be strong evidence of X being true?
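(To make the biconditional concrete, here is a toy sketch; encoding “adherence” as a simple boolean is my own simplification, not part of the original formulation. Enumerating the possible adherence profiles shows that mutual adherence is the only one consistent with both agents holding the symmetric rule.)

```python
from itertools import product

# Symmetric rule from the note above:
# each agent adheres to rule system R iff (the other adheres to R iff they do).

def holds(me: bool, other: bool) -> bool:
    # "I adhere to R" <=> ("you adhere to R" <=> "I adhere to R")
    return me == (other == me)

# Keep only adherence profiles consistent with BOTH agents holding the rule.
consistent = [(a, b) for a, b in product([True, False], repeat=2)
              if holds(a, b) and holds(b, a)]
print(consistent)  # [(True, True)] -- mutual adherence is the only fixed point
```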
If I understood you correctly, I think that people do do this kind of thing, except it’s all nonverbal and implicit. E.g., using hard-to-fake tests of the other person’s decision theory is a way to make the other person honestly reveal what’s going on inside them. Another component is the use of strong emotions, which are sort of like a precommitment mechanism for people, because once activated, they are stable.
Yes, I understand the signal must be hard to fake. But if the concern is merely about optimizing signal quality, wouldn’t an even stronger move be to noticeably couple your payoff profile to a credible enforcement mechanism?
Just as a sketch, find some “punisher” that noticeably imposes disutility (like repurposing the signal faker’s means toward paperclip production, since that’s such a terrible outcome, apparently) on you whenever you deviate from your purported decision theory. It’s rather trivial to have a publicly-viewable database of who is coupled to the punisher (and by what decision theory), and to make it verifiable that any being with which you are interacting matches a specific database entry.
This has the effect of elevating your signal quality to that of the punisher. Then it’s just a problem of finding a reliable punisher.
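(As a rough sketch of what such a publicly-viewable database might look like; all names and interfaces here are purely illustrative, and the “reliable punisher” is abstracted into a callback, which of course assumes away the hard part.)

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Entry:
    agent_id: str
    policy_hash: str   # commitment to the full text of the declared decision theory

class CommitmentRegistry:
    def __init__(self, punish):
        self.entries = {}        # publicly readable mapping: agent -> commitment
        self.punish = punish     # the assumed-reliable punisher (the hard part)

    def register(self, agent_id: str, policy_text: str) -> None:
        digest = hashlib.sha256(policy_text.encode()).hexdigest()
        self.entries[agent_id] = Entry(agent_id, digest)

    def verify(self, agent_id: str, claimed_policy_text: str) -> bool:
        # Anyone interacting with the agent can check that the policy it claims
        # to follow matches its registered commitment.
        entry = self.entries.get(agent_id)
        if entry is None:
            return False
        return entry.policy_hash == hashlib.sha256(claimed_policy_text.encode()).hexdigest()

    def report_deviation(self, agent_id: str, evidence: str) -> None:
        # Observed deviation triggers the punisher; the signal's credibility now
        # rests on the punisher's reliability, not on the agent's say-so.
        self.punish(agent_id, evidence)

# Usage sketch
registry = CommitmentRegistry(punish=lambda who, ev: print(f"punish {who}: {ev}"))
registry.register("agent_1", "X")   # X as defined in the earlier comment
print(registry.verify("agent_1", "X"))                    # True
print(registry.verify("agent_1", "maximize paperclips"))  # False
```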
We do. That’s one of the functions of reputation and gossip among humans, and also the purpose of having a legal system. But it doesn’t work perfectly: we have yet to find a reliable punisher, and if we did find one it would probably need to constantly monitor everyone and invade their privacy.
Attention Users: please provide me with your decision theory, and what means I should use to enforce your decision theory so that you can reliably claim to adhere to it.
For this job, I request 50,000 USD as compensation, and I ask that it be given to User:Kevin.
Why is this being downvoted? Even though Clippy’s proposed strategy doesn’t work at all, for reasons that Jack explained, he is asking an excellent question. For people (and AIs) without social experience and knowledge, it is very, very important for them to know why people can’t just talk all this stuff through explicitly. They should be asking exactly these sorts of questions so they can update.
A guess: because everything in quotes in Clippy’s comment is a copy and paste of a generic comment it posted a week ago.
I don’t actually know myself, though—I upvoted Clippy’s comment because I thought it was funny. Copying an earlier comment and asking for feedback on it where it’s semi-relevant is exactly in keeping with what I imagine the Clippy character to be.
I have little problem with the way that Robin Hanson discusses status, signalling, and human interactions, including mating. He doesn’t give advice to the people on OB on how to pick up chicks, though. If you are not interested in the practicalities, it is enough to know that women test for a variety of personality and material traits in potential mates (with different tests depending on the woman’s personality). You don’t need to know which tests go with which personality. Knowing that the majority of women like dominant, smooth-talking, humorous men is useful in predicting what men will cultivate in themselves. But I don’t need to know how to fake it.
I think it’s the “faking it” part I and many other people find objectionable.
This is where you and several other people here make a critical mistake. You view various aspects of human mating behavior exclusively in terms of signaling objective traits, and then you add a moral dimension to it by trying to judge whether these objective traits supposedly being signaled are true or fake.
In reality, however, human social behavior—and especially mating behavior—is about much more complex higher-order signaling strategies, which are a product of a long and complicated evolutionary interplay of strategies for signaling, counter-signaling, fake signaling, and fake signaling detection—as well as the complex game-theoretic questions of what can ultimately be inferred from one’s signaled intentions. Nobody has disentangled this whole complicated mess into a complete and coherent theory yet, though some basic principles have been established pretty conclusively, both by academic evolutionary psychology and by people generalizing informally from practical experience. However, the key point is that in a species practicing higher-order signaling strategies, signaling ability itself becomes an adaptive trait. You’re not supposed to just signal objective traits directly; you also have to demonstrate your skill in navigating through the complex signaling games. It’s a self-reinforcing feedback cycle, where at the end of the day, your signaling skills matter in their own right, just like your other abilities for navigating through the world matter—and most things being signaled are in fact meta-signals about these traits.
Therefore, where you see “faking it” and “head games” and whatnot, in reality it’s just humans practicing their regular social behaviors. You’ll miss the point spectacularly if you analyze these behaviors in terms of simple announcements of objective traits and plain intentions and direct negotiations based on these announcements, where anything beyond that is deceitful faking. Learning how to play the signaling games better is no more deceitful than, say, practicing basic social norms of politeness instead of just honestly blurting out your opinions of other people to their faces.
I agree with you, and with pjeby, who made similar points: actual social games are more complex than they appear on the surface, and much signaling is about signaling ability itself. But these insights also imply that the value of “running social interactions in software” is limited. Our general-purpose cognitive machinery is unlikely to be able to reproduce the throughput and latency characteristics of a dedicated social coprocessor, and can really only handle relatively simple games, or situations where you have a lot of time to think. In other words, trying to play mating games with an NT “in software” is kind of like trying to play basketball “in software”.
Your argument is fallacious because it rests on overstretching the software/hardware analogy. The human brain contains highly reconfigurable hardware, and if some particular computations are practiced enough, the brain will eventually start synthesizing specialized circuits for them, thus dramatically boosting their speed and accuracy. Or, to say it the traditional way, practice makes perfect.
Whether it’s throwing darts, programming computers, speaking a foreign language, or various social interactions, if you’re lacking any experience, your first attempts will be very clumsy, as your general cognitive circuits struggle ineptly to do the necessary computations. After enough practice, though, specialized hardware gradually takes over and things start going much more smoothly; you just do what it takes without much conscious thinking. You may never match someone with greater natural talent or who has much more accumulated practice initially, but the improvements can certainly be dramatic. (And even before that, you might be surprised how well some simple heuristics work.)
“Practice makes perfect” has a rather different emphasis from Roko’s suggestion of “running social interactions in software”, which is what I was addressing.
But to answer your point, I agree that improvements in social skills from practice can be dramatic, but probably not for everyone, just like not everyone can learn how to program computers. It would be interesting to see some empirical data on how much improvement can be expected, and what the distribution of outcomes is, so people can make more informed choices about how much effort to put into practicing social skills.
I’m also curious what the “simple heuristics” that you mention are.
“Practice makes perfect” has a rather different emphasis from Roko’s suggestion of “running social interactions in software”, which is what I was addressing.
Fair enough, if you’re talking only about the initial stage where you’re running things purely “in software,” before any skill buildup.
But to answer your point, I agree that improvements in social skills from practice can be dramatic, but probably not for everyone, just like not everyone can learn how to program computers. It would be interesting to see some empirical data on how much improvement can be expected, and what the distribution of outcomes is, so people can make more informed choices about how much effort to put into practicing social skills.
From what I’ve observed in practice, people with normal (and especially above average) intelligence and without extraordinary problems (like e.g. a severe speech disorder) who start at a low social skill level can see significant improvements with fairly modest efforts. In this regard, the situation is much better than with technical or math skills, where you have to acquire a fairly high level of mastery to be able to put them to any productive use at all.
I don’t deny that some people with extremely bad social skills are sincerely content with their lives. However, my impression is that a very considerable percentage would be happy to change it but believe that it’s impossible, or at least far more difficult than it is. Many such people, especially the more intelligent ones, would greatly benefit from exposure to explicit analyses of human social behaviors (both mating and otherwise) that unfortunately fall under the hypocritical norms against honest and explicit discussion that I mentioned in my above comment. So they remain falsely convinced that there is something deeply mysterious, inconceivable, and illogical about what they’re lacking.
I’m also curious what the “simple heuristics” that you mention are.
Well, which ones are the most effective for a particular person will depend on his concrete problems. But often bad social skills are to a significant degree—though never completely—due to behaviors that can be recognized and avoided using fairly simple rules. An example would be, say, someone who consistently overestimates how much people are interested in what he has to say and ends up being a bore. If he starts being more conservative in estimating his collocutors’ interest before starting his diatribes, it can be a tremendous first step.
This is admittedly a pretty bland and narrow example; unfortunately, pieces of advice that would be more generally applicable tend to be very un-PC to discuss due to the above mentioned hypocritical norms.
But more to the point: the real world is full of instances where verbalized whiter-than-white morality is thrown out of the window, in some cases to such a large extent that the verbalized rules are not the actual rules, and people consider you a defective person if you actually follow verbalized rules rather than just paying lipservice to them.
I understand that this is often the case, and that this is how “pick-ups” often work in the real world. The thing is, humans’ sexual rituals are ingrained so deeply in our little monkey brains that I don’t think generalizing from what works in that domain to the broader world of “refining the art of human rationality” is a really good idea. This particular domain of human behavior is so ridiculously irrational that I don’t think it serves as a good model for ordinary, everyday human irrationality. So if you’re reasoning by analogy to it, you’re basically patterning against a superstimulus.
This particular domain of human behavior is so ridiculously irrational
No! Not at all. Quite the contrary: in the original post I was careful to show that a shit-test is actually an application of an advanced concept from game theory—using a credential to solve a cheap talk problem in a signaling game!
To put it more clearly, it’s not that this domain of human behavior is actually particularly irrational. In reality, it has its well-defined rules, and men who have the knowledge and ability to behave according to these rules are, at least in a libertine society such as ours, rewarded with high status in the eyes of others—and lots of sex, of course, if they choose to employ their abilities in practice. In contrast, men who are particularly bad at it suffer an extreme low-status penalty; they are a target of derision and scorn both privately and in the popular culture. However, what complicates the situation is that this is one of those areas where humans practice extreme hypocrisy, in that you’re expected not just to navigate the rules of the game cleverly, but also to pretend that they don’t exist, and to discuss the topic openly only with mystical reverence and unrealistic idealizations. Realistic open discussions are perceived as offensive and sacrilegious. It’s an enormous bias.
He who fights with monsters should look to it that he himself does not become a monster. And when you gaze long into an abyss the abyss also gazes into you.
Friedrich Nietzsche
I don’t really agree but I think this describes the fear that underlies much of the hostility to discussing these controversial topics.
I think you’re partly correct, but some other biases are in fact more relevant here. However, going deeper into this would look too much like attacking other people’s motives, which would be perceived as both unproductive and hostile, so I’d rather not delve into that line of discussion.
I would also like to know more about the biases you mentioned; can you PM me about this too? Or just post it here for everyone to read, because it’s a very big teaser on a topic about which you seem to have a lot of interesting insights.
Have you never encountered this attitude amongst religious people over atheism? The idea that atheism is an inherently dangerous idea, that merely engaging with it risks infection. That atheism might be a kind of aqua regia for morality, capable of dissolving all that is good and right in the world into some kind of nihilistic nightmare. Even (or perhaps especially) those who think atheism might be true see it as potentially dangerous, that gazing into the abyss may permanently damage the seeker’s moral core. This belief, whether implicit or explicit, seems quite common among the religious and I think explains some of the hostility born of fear that is sometimes observed in the reactions to atheism and atheists.
I’m suggesting something similar may underlie some of the reactions to discussions of the below-the-surface game theoretic realities of human social interaction. People fear that if they gaze into that abyss they risk losing or destroying things they value highly, like traditional concepts of love, loyalty or compassion. I think this fear is misguided, and personally prefer the truth be told, though the heavens fall regardless, but I can understand and to some extent sympathize with the sentiment that I think sometimes underlies it.
People fear that if they gaze into that abyss they risk losing or destroying things they value
Yes, and no. My objection to the citation of PUA tactics is motivated by fear that it could lead down the dark path… but not fear that it might be true. Rather, it’s fear that something that might be true in one narrow domain might get applied as a general rule in broader domains where it is no longer applicable.
In PUA circles, “winning” is defined by getting laid. So if you go to a meat-market and try your PUA tactics all night long, you may end up getting rejected 50 times but be successful once, and your brain records that as a “win”, because you didn’t go home alone (just like audiences at psychic shows remember the “hits” and forget the “misses”). But does that really tell you that PUA theory correctly describes typical social interaction? No, it just tells you that there is a certain small minority of people on whom PUA tactics work, but they are a non-representative sample of a non-representative sample.
So when you then take one of these PUA tactics, which isn’t even effective on the vast majority of people even in the meat-market pickup context, and start talking as if it was a universal truth applicable to all manner of human social interactions, it makes my head explode.
So where does my “fear” come in? Well, here’s the thing… I suspect that a large portion of the audience for PUA material is AS spectrum, or otherwise non-GPU possessing people, who have trouble finding sex/romance partners on their own, so they learn some PUA techniques. Fine. But these techniques often require the abandoning of “black and white morality”, as has been said earlier on this thread. Applied solely to the realm of picking up women, I don’t necessarily have a problem with that—“all’s fair in love and war” after all. But the thing is, most NTs are able to compartmentalize this kind of thing. I know many NT, “ladies man” types who are perfectly moral, ethical, upstanding people in just about every other way imaginable, but who have no problem lying to women to get in their pants. I find this a bit distasteful, but I don’t object to it, I just recognize that this is how the world works. But the thing is, many AS/non-GPU people have difficulty compartmentalizing things like this in the same way NTs do.
So I fear that if you teach these kind of dark arts to the non-compartmentalizing, non-NT crowd, they’re going to take away from it the message that abandoning “black and white morality” is the way to go about fitting in in the NT world, in areas beyond the meat-market. I fear that we may end up unintentionally creating the next generation of Bill Gates and Henry Kissingers.
You make a fair point that PUA probably doesn’t explain all of human interaction—it explains just the bare minimum needed to get that 1 in 50 hit rate, so the majority of girls could be PUA-invulnerable and we wouldn’t know it. But you also claim that a hit rate of 1 in 50 is bad and shouldn’t be considered a “win”, and I take objection to this. Do you also think that a good mathematician should be able to solve any problem in the world or give up their title? Or do you have an alternative theory that can beat PUA at PUA’s game? (Then you should head over to their forums and if you’re right, they will adopt your theory en masse.) If not, why should we suppress the best theory we’ve got at the moment?
If your goal is to pick up women, then yes, absolutely 1 in 50 is a “win”. But if your goal is to refine the art of human rationality, I just don’t see how it’s relevant.
The thing is, with any model (PUA or otherwise), there are many reasons you could lose out on the 49 in 50 (to go with your terminology for now):
They aren’t into your body type, facial structure, height, race, or some other superficial characteristic
They have preferences that are explained by your model, but you messed up or otherwise failed to fulfill them (similarly: they have preferences that are explained by your model, but you didn’t go far enough in following the model). This is exacerbated by the tendency of people to go for partners at the edge of what they can realistically expect to attract, which makes it really easy to fall just a tiny bit short of fulfilling their preferences. Even when you improve your attractiveness, you may then set your sights on a higher tier of partners, and you will still be on the edge of being accepted. P(rejection | you go for a random person in the population you are into) is much less than P(rejection | you go after the most desirable person in that population who you still consider a realistic prospect).
They have preferences that are explained by your model, but someone else around fulfilled them better (or they weren’t single)
Taking these factors into account, we know from the start that the ceiling on successes is well under 50 out of 50. Say at least one of these factors applies 50% of the time: then the one observed success is really 1 out of 25 viable attempts. A ceiling of 10 viable attempts out of 50 is even plausible, making it 1 in 10. If you only pursue people on the higher edge of your attractiveness bracket, the number could go even lower, and one success looks more and more impressive.
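(A toy calculation of the same point, with probabilities just as made-up as the numbers above:)

```python
# Toy calculation of how external factors lower the ceiling on the observable
# success rate; the probabilities are illustrative, not measured.

attempts = 50
observed_successes = 1

for p_blocked in (0.0, 0.5, 0.8):
    viable = attempts * (1 - p_blocked)   # attempts your model could even win
    print(f"blocked {p_blocked:.0%}: {viable:.0f} viable attempts, "
          f"effective rate 1 in {viable / observed_successes:.0f}")
# blocked 0%: 1 in 50; blocked 50%: 1 in 25; blocked 80%: 1 in 10
```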
When you expect to meet rejection >50% of the time via your model, using rejection to test your model is difficult. It’s hard to test such theories in isolation. At what point do you abandon or modify your model, and at what point do you protect it with an ad hoc hypothesis? A protective belt of ad hoc hypotheses isn’t always bad. Sometimes you have actual evidence inducing belief in the presence or absence of the type of factors I mention, but the data for assessing those factors is also very messy.
Stated in a more general form, the problem we are trying to solve is: how do I select between models of human interactions with only my biased anecdotal experience, the biased anecdotal experience of others (who I select in a biased non-representative fashion), and perhaps theories (e.g. evolutionary psychology) with unclear applicability or research studies performed in non-naturalistic settings with unclear generalizability? Whew, what a mouthful!
This is not a trivial problem, and the answers matter. It is exactly the kind of problem where we should be refining the art of human rationality. And an increase in success on this problem (e.g. 1 in 500 to 1 in 50, to continue the trend of pulling numbers out of thin air to illustrate a point) suggests that we have learned something about rationality.
This is not a trivial problem, and the answers matter. … suggests that we have learned something about rationality.
I actually agree with this completely, and I think your analysis is rather insightful. Your conclusion seems to be that PUA topics are deserving of further study and analysis, and I have no problem with that… I only have a problem with assuming PUA-isms to be true, and citing them as “everybody knows that...” examples when illustrating completely unrelated points.
how do I select between models of human interactions with only my biased anecdotal experience, the biased anecdotal experience of others (who I select in a biased non-representative fashion), and perhaps theories (e.g. evolutionary psychology) with unclear applicability or research studies performed in non-naturalistic settings with unclear generalizability?
This is well put. The issue you raise is why I tried to be a little more explicit about the priors that I was using here. Obviously it’s a long way from giving the explicit probabilities that would be necessary to automate the Bayesian updating, but at least we can make a start at identifying where our priors differ.
If your goal is to pick up women, then yes, absolutely 1 in 50 is a “win”
Sure… maybe for when you’re starting out as a rank beginner, doing “cold approach” and “night game”. But my success rate at “social circle game” was an order of magnitude better than that before I knew any PUA stuff in the first place… and in retrospect I can easily see how that success was based on me accidentally doing a lot of things that are explicitly taught to PUAs for that type of game.
Hell, even during the brief period where I went to nightclubs and danced with girls, there are times that I realize in retrospect I was getting major IOIs and would’ve gotten laid if I’d simply had even a single ounce of clue or game in my entire body… and at a better success rate than 1 in 50.
So, I’m not sure where you pulled the 1 in 50 number from, but in my experience it’s not even remotely credible as a “success” for a PUA, if you mean that the PUA has to ask 50 to get 1 yes.
However, if you mean that a PUA can take 50 women who are attracted to him, and then chooses from them only the one or two that he finds most desirable, then I would agree that that’s indeed a success from his POV. ;-)
(And I would also guess that most PUAs would agree that this is much closer to their idea of “winning”, and that even a PUA of modest or average ability should be able to do much better than your original estimate, even for nightclub game.)
AAARGH! You’re still totally responding to this as if we were having this discussion on a PUA forum, rather than on LW.
The 1 in 50 number was totally pulled out of my ass, a hypothetical intended to illustrate the idea that if a given technique works only 1 in X times, but that’s enough to result in getting laid, your brain is likely to count that as a “win”, and ignore the (X − 1) times it failed, leading you to incorrectly assume that the technique illustrates some universally applicable principle of human behavior, where none in fact exists.
The 1 in 50 number was totally pulled out of my ass
That seems to me to be a less appropriate way to do things on LW, personally.
Certainly, arguing that you pulled a number out of your ass in order to refute empirical information providing an inside view of a phenomenon is really inappropriate here.
IOW, your hypothesis is based on a total and utter incomprehension of what PUAs do or value, and is therefore empirically without merit. Actual PUAs are not only aware of the concept you are describing, but they most emphatically do not consider it success, and one guru even calls it “fool’s mate” in order to ridicule those who practice it. (In particular, Mystery ridicules it as relying on chance instead of skill.)
In short, you are simply wrong, and you’re probably getting downvoted (not by me, mind you) not because of disagreement, but because you’re failing to update on the evidence.
Certainly, arguing that you pulled a number out of your ass in order to refute empirical information providing an inside view of a phenomenon is really inappropriate here
It’s very clear from the original context that “1 in 50” was not being proposed as evidence of anything, but simply as colloquial shorthand for “1 in some number X”. And I’m not sure what empirical evidence you’re referring to—the plural of anecdote yada yada yada.
your hypothesis is based on a total and utter incomprehension of what PUA
My knowledge of what PUA entails is based almost entirely on various examples given by PUAs here on LW (that, and a few clips from Mystery’s show being ridiculed on The Soup, which you might want to consider as a data point on what the general public thinks of PUA). Maybe if LW’s resident PUAs were to cite examples more like those you gave in your last reply to me, I might have a higher opinion of PUA wisdom.
Look, I totally understand why you and the other PUA adherents are so emotionally attached to the idea: if I were single, and somebody gave me a magic feather that enabled me to get laid a lot, I’m sure I would think it was awesome, and probably wouldn’t stop talking about it, well past the point that my friends and acquaintances were sick of hearing about it. It might be worth remembering, though, that the original topic of this article was Asperger/Autistic spectrum issues, and that one of the characteristic traits of the spectrum is what’s been referred to as “little professor syndrome,” where aspies tend to go on and on about their narrow topics of interest, unable to pick up social cues, like eye rolling, indicating lack of interest in the subject.
I don’t recall whether you responded positively to the “do you have high-functioning Asperger’s” question, and it’s not my intention to pejoratively imply that you, or anyone else here, does. I just think it might be worth looking at this through that lens.
Look, I totally understand why you and the other PUA adherents are so emotionally attached to the idea: if I were single, and somebody gave me a magic feather that enabled me to get laid a lot, I’m sure I would think it was awesome, and probably wouldn’t stop talking about it, well past the point that my friends and acquaintances were sick of hearing about it.
If you’re implying that I’m single or attempting to get laid a lot, you’ve either missed a lot of my comments in this discussion, or you didn’t read them very carefully.
(Hint: I’m married, and have never knowingly used a pickup technique for anything but social or business purposes… and I’ve made no secret of either point in this discussion!)
In other words, the numbers aren’t the only thing you just pulled out of your ass. ;-)
I would also point out that it is not particularly rational for you to first rant that nobody is responding to your points, and then, when people reply to you in an attempt to respond, for you to criticize them for “going on and on”.
(Well, it’s not rational unless your goal is to troll me, I suppose. But in that case, congratulations… you got a response.)
Meanwhile, you’ve also just managed to demonstrate actually doing the thing you’re arguing PUAs theoretically do (but actually don’t, if they’re well-trained).
That is, you made a sweeping judgment that doesn’t really apply to the claimed target group.
And you didn’t make any allowance for the possibility that the specific person you were interacting with might be different from your generalized model of “single with a magic feather”. (Heck, even PUAs know they have to calibrate to the individuals they encounter—i.e., pay attention.)
If you’re implying that I’m single or attempting to get laid a lot
Nope, I neither said, nor implied anything of the kind. I was simply speculating on why it might be that so many people on LW seem to be so attached to the PUA ideas, despite their not really seeming to have much going for them in the way of Bayesian evidence. I wasn’t referring to you (or anyone) in particular. The format of comment threads requires that comments be addressed to a specific person, and so your comment was the one I happened to click ‘reply’ on, but I was referring in general to the PUA crowd.
not particularly rational for you to first rant that nobody is responding to your points,
I complained about people’s responses not addressing the substance of my argument, not the lack of responses.
and then, when people reply to you in an attempt to respond, for you to criticize them for “going on and on”.
Obviously I wasn’t talking here about your responses to my comments, but about the general inclination of certain PUA-boosters to continually bring up PUA themes in the middle of discussing unrelated issues.
No, I’m just saying that a 1 in 50 hit rate is more likely to be explained by a peculiarity of the particular people involved in the interaction, rather than a universal truth of all human social interaction.
Yep, I certainly got that point. (See the edited comment.) But today the real choice is between PUA, which yields little but positive results in the field, and alternative theories that yield no results.
But I’m not arguing that PUA is bad. I’m arguing that the lessons learned from PUA aren’t generally applicable outside that arena, and are not good examples to use when illustrating a point on an unrelated human-rationality topic.
You say a hit rate of 1 in 50 is bad and shouldn’t be considered a “win”. Do you also think that a good mathematician should be able to solve any problem in the world or give up their title?
If I apply the same methods for the same amount of time to many problems, and I solve only 1 in 50 of them, then I should seriously consider the possibility that there was something special about that 1 in 50 that made them especially accessible to my methods. I should not conclude that the 1 in 50 were typical of all the problems that I considered.
Or do you have an alternative theory that can beat PUA at PUA’s game? Then you should head over to their forums and if you’re right, they will adopt your theory en masse.
I expect that a man can maximize his number of sexual partners by focusing his attentions on women who will be especially receptive to his advances. But it would be a mistake to infer that such women are typical.
I expect that a man can maximize his number of sexual partners by focusing his attentions on women who will be especially receptive to his advances.
That’s exactly what cousin_it has described himself doing, at least in the case of women who ask him to buy them drinks. His hug test (for lack of a better word) very quickly identifies which women are receptive to being physically companionable with him.
In PUA terminology, he’s taking her opener and screening it. Other relevant PUA terminology in this space:
AI (Approach Invitation) - reading signals that indicate a woman wants you to approach
Forced IOI (Indicator Of Interest) opener—engaging in a behavior that forces a woman’s body language to immediately reveal her interest or lack thereof, such as by gazing directly into her eyes while approaching, in order to see whether she looks down, away, or back at you, and whether she smiles.
Some men swear by these things as the essence of their game; others, however, want to be able to meet women who will neither AI nor accept a forced IOI, such as women who get approached by dozens of men a night and therefore have their “shields up” against being approached.
Anyway, your hypothesis isn’t a better PUA than PUA; but practical methods for actually applying that hypothesis are part of the overall body of knowledge that is PUA.
That’s exactly what cousin_it has described himself doing, at least in the case of women who ask him to buy them drinks. His hug test (for lack of a better word) very quickly identifies which women are receptive to being physically companionable with him.
But my question is, does PUA theorizing help him get an accurate model of what women in general are actually like? More generally, does it give him tools to get a better understanding of what reality is like? Or is it just giving him tools that help him to focus his attentions on a certain small subset of women?
If I go into a library, I can easily tell the English books from the books in Chinese, so I can quickly narrow my attention to the books that I can get something out of. But that doesn’t mean that I know anything about what’s going on inside the Chinese books. And, if the vast majority of the books in the library are Chinese, then I actually know very little about the “typical” book in the library.
Anyway, your hypothesis isn’t a better PUA than PUA; but practical methods for actually applying that hypothesis are part of the overall body of knowledge that is PUA.
I’m having trouble parsing this sentence. What’s the “hypothesis” here?
But my question is, does PUA theorizing help him get an accurate model of what women in general are actually like? More generally, does it give him tools to get a better understanding of what reality is like? Or is it just giving him tools that help him to focus his attentions on a certain small subset of women?
I thought about it some more and honestly can’t tell if you’re right or not. On one hand, I never do cold approaches—there’s always some eye contact and smiling beforehand—so the women I interact with are already very self-selected. On the other hand, I know from experience that a girl who rejected me in one setting (e.g. a party) may often turn out to be receptive in another setting (e.g. a walk), so it’s not like I’m facing some immutable attribute of this girl. So every interaction with a woman has many variables beyond my control that could make it or break it, but my gut feeling is that most of those variables are environmental (current mood, presence of other people, etc.) rather than inborn.
And, if the vast majority of the books in the library are Chinese, then I actually know very little about the “typical” book in the library.
Yes, I agree. In this particular case, though, we have no idea whether your “if” clause is satisfied, and what the proportion of English to Chinese books really is.
To make an analogy with my previous post where I explain that the ceiling on success rate is actually rather low, most of the books you read either burst into flame when you read them, or their text disappears or turns into gibberish. Sometimes, even forensic inspection can’t tell you what language the book was originally in.
All you can know is that learning English helps you read some of the books in the library. Absent the knowledge of what was in the text that was destroyed before you could read it, you have no idea of the typicality or atypicality of the English books you are capable of reading. Yet if your forensic inspection of the destroyed books reveals more English characters than Chinese characters, or you have some additional theoretical or empirical knowledge of the distribution of languages in the books, then you may have to upgrade your estimate of the proportion of English books. (This assumes that the hypotheses of books being in English or Chinese are both locatable.)
Even if your estimate is wrong, it can still be very valuable to know how to read the typical English book in the library, especially if the alternative is not being able to read any.
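(A toy simulation of the library analogy, with all numbers invented: the fraction of books you manage to finish badly understates the true English fraction, while “forensic” sampling of the destroyed books recovers a much better estimate.)

```python
import random

random.seed(1)
TRUE_ENGLISH_FRACTION = 0.6   # unknown to the reader in the analogy
N_BOOKS = 10_000
P_SURVIVES = 0.05             # chance a book survives being opened at all

books = ["english" if random.random() < TRUE_ENGLISH_FRACTION else "chinese"
         for _ in range(N_BOOKS)]

survived = [lang for lang in books if random.random() < P_SURVIVES]
finished = [lang for lang in survived if lang == "english"]  # only English is readable

# Naive estimate from reading success alone: badly understates reality (~3%).
print("fraction of library you could read:", len(finished) / N_BOOKS)

# "Forensic" estimate: sample character traces from destroyed books too (~60%).
sample = random.sample(books, 500)
print("forensic estimate of English fraction:",
      sum(lang == "english" for lang in sample) / len(sample))
```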
You still know very little, of course, about the population of books (or people) you are trying to model. Yet in the case of people, you are often faced with competing hypotheses about how to behave, and even a small preference for one hypothesis over the other can have great practical significance. That’s why stereotypically we see women picking over their interactions with men with their female friends, and PUAs doing exactly the same thing on internet forums. They have tough decisions to make under uncertainty.
Does a preference for one theory over another, plus seemingly practical results, mean that the preferred theory is “true”? I think we both agree: no. That’s naive realism. Yet when you are engaged in discussion on a practical subject, it’s easy to slip from language about what works to language about what is true, and adopt a pragmatic notion of truth in that context.
As I’ve mentioned before, PUAs do commit naive realism a lot. While there are ceilings to what mass-anecdotal experience of PUAs can show us about epistemic rationality, there is a lot it can show us about instrumental rationality. How to be instrumentally successful when the conclusions of epistemic rationality are up in the air is an interesting subject.
I’m not a PUArtist, I’m a PUInstrumentalist about PU models. Yet when I see a theory (or particularly hypothesis in a theory) working so spectacularly well, and that data which deviates from it generally seems to have an explanation consistent with the theory, and the theory lets me predict novel facts, and it is consistent with psychological research and theories on the topic… then it sometimes makes me wonder if my instrumentalist attitude of suspended judgment on the truth of that theory is a little airy-fairy.
I doubt that PUA models are literally highly probable in totality, yet I hold that particular hypotheses in those models are reasonable even only fueled by anecdotal evidence, and that with certain minor transformations, the models themselves could be turned into something that has a chance of being literally highly probable.
I expect that a man can maximize his number of sexual partners by focusing his attentions on women who will be especially receptive to his advances.
I was saying that PUAs don’t entirely agree with your hypothesis (and incidentally, don’t necessarily value the “maximize his number of sexual partners” part)… but they do have tools for taking advantage of attuning to women who will be especially receptive.
But my question is, does PUA theorizing help him get an accurate model of what women in general are actually like? More generally, does it give him tools to get a better understanding of what reality is like? Or is it just giving him tools that help him to focus his attentions on a certain small subset of women?
Both. As I mentioned earlier, PUA models of social behavior have been successfully applied in and out of pubs, with people who the PUA is not even trying to sleep with, both male and female. Anecdotally, PUAs who focus on learning social interaction skills find that those skills are just as useful in other contexts. (For example, Neil Strauss noted in The Game that learning PUA social skills actually helped his celebrity-interviewing technique, as it gave him tools for pepping up conversations that were starting to go stale.)
Most of the criticism here about PUA has been claiming that it has poor applicability to women, but this is the result of a severe misapprehension about both the goals and methods of PUA-developed social models. PUA social signaling models are actually applicable to humans in general, even though the means of effecting the signals will vary.
My impression is that the typical LWer has little familiarity with these models, and has only heard about a few bits of (highly context-sensitive) specific advice or techniques. Are you familiar with microloop theory? Frames? Pinging? There’s a metric ton of systematization attempts by PUA theorists, some of which are very insightful. Also, a lot of practical advice for dealing with a wide variety of social situations.
I would predict that if you took an experienced social-game-theorist PUA trainer and threw him into a random physical social environment with the goal of making as many friends as possible, vs. an untrained male of similar geekiness (I’m assuming the social game theorist will be a geek, present or former) and similar unfamiliarity with the group and its rules/topics/etc., the PUA would kick the untrained person’s ass from here to Sunday.
What’s more, I would bet that you could repeat this experiment over and over, with different PUAs, and get the same results. And if the PUA in question is a good trainer, I’d bet they’d be able to take a modest-sized group of similarly geeky students and quickly train at least one student to beat an untrained person by a solid margin, and to get most of the students to improve on their previous, untrained results.
That’s how confident I am that PUA social interaction models are sufficiently correct to be broadly applicable to “typical” human beings—not just women.
(Btw, I’m aware that I’ve left a huge number of loopholes in my stated prediction that an unscrupulous experimenter could use to skew the results against the PUA, but I don’t really want to take the time to close them all right now. Suffice it to say that it would need to be a fair contest, apart from the PUA’s specialized training, and I’m only betting on PUA trainers being able to totally kick an untrained person’s ass; I would expect experienced PUAs to do, say, maybe 2-3 times as well as the untrained on average. Trainers and “in-field” coaches have to have a better grasp of social dynamics than the people they’re training. Also, there’s a big gap between theory and execution—if you can’t get your body and voice to do what the theory tells you to, it doesn’t matter how good the theory is!)
What’s more, I would bet that you could repeat this experiment over and over
Ok, I swore to myself I wasn’t going to comment on this thread anymore, but now you’ve made me think of something that hadn’t occurred to me before:
Assuming for the moment that it’s true that a skilled PUA trainer would beat an untrained person at this test, how much of that effect do you think is attributable to simply being more confident vs actually having a more accurate model of human social behavior? I.e., you could, in principle, test for what I’m talking about by replacing the untrained geek with a geek trained on a different, completely fabricated set of PUA rules and theories, which he’d been led to believe were the real PUA methods… tell him these methods have been extensively experimentally tested, maybe even fake some tests with some actors to convince him that his bogus PUA skills actually work, just to give him the confidence of thinking he knows the secrets of the PUA masters. Then test him against someone given an equal amount of training on the “real” PUA techniques.
Oh, and for bonus points, for the fabricated set of techniques, you could use stuff taught by Scientology, just to make sure there’s consensus that it’s bogus ;)
How do you think that test would turn out? (I’m taking no position on the issue—I honestly don’t know)
It’s hard to create and maintain confidence that isn’t based on actual results. I predict that the confident geeky guy would go barreling into interactions and just as easily alienate people as engage them. Without any competence to back up the confidence, the latter wouldn’t last very long, unless the guy was totally oblivious to negative signals from others.
It is a good question, whether a PUA could be matched by a control guy of the same level of confidence. But if we are talking any real sort of confidence, the main way it develops is through success, which requires manifesting attractive behaviors in the first place.
But if we are talking any real sort of confidence, the main way it develops is through success, which requires manifesting attractive behaviors in the first place.
Exactly. But in the version of the experiment I proposed, both groups are composed of (initially) inexperienced geeks, as opposed to pjeby’s protocol, which involved an untrained newbie and a PUA trainer (who, despite having trained on, IMHO, potentially invalid methods, has likely acquired a great deal of real confidence via experience).
Which is why, now that I’ve had some time to think about it, I now predict that if this experiment were performed, both trainee groups would “go barreling into interactions and just as easily alienate people as engage them”. For it to mean much, you would have to iterate the experiment over a period of weeks or months and see which group improves faster. I remain agnostic on what the outcome of that would be.
I was thinking along the same lines, where both groups involve newbies. I predict that the confidence will collapse in whichever group lacks some actual practical knowledge that can achieve success to keep the confidence boosted.
Assuming for the moment that it’s true that a skilled PUA trainer would beat an untrained person at this test, how much of that effect do you think is attributable to simply being more confident vs actually having a more accurate model of human social behavior?
PUAs themselves will admit to confidence being important… in meeting people, and in its being a foundation for everything they do. But it’s not a magic bullet.
I’ve seen an excerpt of a talk by one PUA who explained that when he started, he actually attained some success at opening (i.e., initiating contact) through delusional self-confidence… however, this wasn’t enough to improve his success at “closing” (i.e., getting numbers, kisses, dates, etc.), because he still made too many mistakes in understanding what he was supposed to do to “make a move”, or how he was supposed to respond to certain challenges, etc.
Remember, if the signal is too easy to fake, it’s not very useful as a signal.
Oh, and for bonus points, for the fabricated set of techniques, you could use stuff taught by Scientology, just to make sure there’s consensus that it’s bogus ;)
I think it would be a better test to reverse the PUA recommendations, i.e., teach them things that the PUAs predict would flop. If they succeed anyway, it’s a slam dunk for the confidence hypothesis. But I doubt they would.
Actually, one thing I saw on Mystery’s show suggests to me that it might be sufficient to train someone poorly—one trainee on the show couldn’t get the proper use of negging through his head, and went around insulting women with what, as far as I could tell, was total confidence. And of course, it didn’t work at all, while the other guys, who both understood the idea and applied it with careful calibration, achieved much greater success.
In other words, I think confidence alone is insufficient to replace social calibration—the PUA term for having awareness (or reasonably accurate internal predictions) of what other people are thinking or feeling about you, each other, and the overall social situation. The principal value of PUA social dynamic theories to PUA practice is to train the socially ill-calibrated to notice the cues that more socially adept people notice instinctively (or at least intuitively).
In other words, having a theory of “status” or “value” helps you to know what to pay attention to, to help tune in on the music of an encounter, rather than being misled by the words being sung.
(Of course, I’m sure we all know people who come along and wreck the music by confidently singing a new and entirely inharmonious tune. This sort of behavior should not be confused with being socially successful.)
I think it would be a better test to reverse the PUA recommendations, i.e., teach them things that the PUAs predict would flop
I don’t think that would be a fair test. Techniques that PUAs think would flop, I would probably agree with them in predicting would flop—it’s easier to know that something doesn’t work than that it does work. So they would actually end up at a disadvantage relative to a person with natural confidence and no PUA training.
I would want my control group to be given techniques that are entirely harmless and neutral, or as close to it as is reasonably possible.
I would want my control group to be given techniques that are entirely harmless and neutral, or as close to it as is reasonably possible.
While that would be an interesting test, being entirely harmless and neutral is how to flop, PUAs predict. People don’t want to date people they feel neutral towards; they want to date people they are excited about. Since women are more selective, this principle applies even more to women, and makes for some interesting problem-solving.
Since there are a bunch of different taxa in female preferences (yes, my model of the preferences of the female population accounts for significant differences in female preferences in certain dimensions), and these taxa have strong, differing, mutually exclusive preferences (e.g. the preference to definitely kiss on the first date, vs. the preference to definitely not kiss on the first date), and which preference taxon a woman belongs to is not always reasonably predictable in advance, certain behaviors will have a polarizing response. There is only a certain set of behaviors that is universally attractive to women (e.g. confidence), and outside that set, behaviors that attract one woman might annoy or repulse another (cousin_it’s arm around the waist example falls into this category).
Unfortunately, you can’t always explicitly ask what preference taxon a woman is in; your ability to guess based on either strong or weak cues may be one of her filters. And asking too much about someone else’s preferences can signal that you consider her higher status, which many women may find unattractive. It might also signal that you think something in particular is going to happen, when she hasn’t decided if she wants it to happen yet. Even if a woman could have an explicit discussion of her preferences and not consider you obsequious for doing so, you can’t really know this in advance. And you can’t ask her if she is part of the taxon of women who can discuss their preferences explicitly without docking status points from men for raising the subject; nor can you ask her if she is part of the taxon of women who can be asked which taxon of women she is in: the problem is recursive. So the only rational solution is to guess, unless you are comfortable screening out women who can’t have explicit discussions of their preferences early in the interaction. (Though you can help your guessing by starting oblique discussions of preferences, such as talking about relationship history and listening carefully.)
You can’t just avoid polarizing behaviors that women will have either strong positive or negative responses to, because then you risk relegating yourself to the boring guy heap. You are stuck doing an expected value calculation on these polarizing behaviors taking into account the uncertainty of your model of her. If you decide to make a certain move, you hope your calculation was right and you don’t weird her out. And if you decide not to make that move, you hope your calculation was right and you don’t get docked points for not making the move and failing to make a strong enough impression. A lot of guessing is going on here; if your hardware doesn’t steer you down the right path, you need to get better at guessing, which is a job for rationality.
Shorter version of the above: Men need to make strong positive impressions on women to be reliably successful. Many of the behaviors that make strong positive impressions on some types of women make strong negative impressions on other types of women. The result is that men need to engage in high-risk, high-reward behaviors to make strong positive impressions on many types of women, though the risk can be substantially mitigated with experience and knowledge. This leads to some interesting ethical dilemmas. It also leads to some interesting practical consequences, where sometimes it’s better to increase the variance in your attractiveness even at the cost of your average attractiveness to the female population. But now I’m just rambling…
So the only rational solution is to guess, unless you are comfortable screening out women who can’t have explicit discussions of their preferences early in the interaction. (Though you can help your guessing by starting oblique discussions of preferences, such as talking about relationship history and listening carefully.)
….
It also leads to some interesting practical consequences, where sometimes it’s better to increase the variance in your attractiveness even at the cost of your average attractiveness to the female population.
I think you’ve highlighted an important difference between the inside view and outside view of PUA.
Outsiders think that for PUA to be valid, it has to have techniques that work on “most women”. However, for insiders, it simply has to have a set of techniques that work on women they are personally interested in.
Outsiders, though, tend to think that the set of “women PUAs are personally interested in” is much more homogeneous than it really is. The women that, say, Decker of AMP goes for are orders of magnitude more introspective than those that, say, Mystery goes for. David D seems to like ambitious professional women. Johnny Soporno seems to dig women with depth of emotion who’ll all be a big happy family in his harem. Some gurus seem to like women they can boss around. Juggler seems to value good conversation. (And notice that none of these preferences are “who I can get to sleep with me tonight.” Even Mystery’s preference for models and strippers is much more about status than it is about sex.)
Granted—these are all superficial personal impressions of mine, based on random bits of information, but it’s helpful to point out that men’s preferences vary just as much as women’s do. PUA is not a single unified field aimed at claiming a uniform set of women for a uniform set of men. It is a set of interlinked and related fields of what works for specific groups of women in specific situations…
Conditioned on the preferences of the men who are interested in them.
That is, successful PUAs intentionally choose (or invent) behaviors and sets of techniques that will screen out women that they are not interested in. And they don’t engage in a search for what technique will work on the woman they’re with—they do what the kind of woman they want would like.
Now, there are certainly schools of thought who think the goal is to figure out whatever woman is in front of them, but my observation of what the people in PUA who seem happy with their life and work say, is that they always effectively talk about being fully themselves, and how this automatically causes one group to gravitate towards them, and the rest to gravitate away.
This has also been my personal experience when I was single and doing “social game” (which as I said, I didn’t know was a thing until much later).
What I’ve also noticed is that many gurus who used to teach mechanical, manipulative game methods have later slid over to this line of thought—specifically, many have said that thinking in terms of “what do I need to do to get this woman to like me” is actually hurting your inner game, because it sets the frame that you are the pursuer and she is the selector, and that this is going to cause her to test you more than if you just were totally open about who you are and what you want in the first place, so there’s no neediness or apprehension for her to probe.
Some people talk about feigning disinterest, but I think that what really works (from my limited experience) is genuine disinterest in people who aren’t what you’re looking for. In some schools, this is talked about as a tactic (i.e. “qualifying” and “disqualifying”), but I think the more mature schools and gurus speak about it as a way of thinking, or a lifestyle.
Anyway, tl;dr version: the success of PUA as a field isn’t predicated on one set of techniques “working” on all taxa of women; it’s predicated on individual PUAs being able to select behaviors that work well with the taxa they want them to “work” on… and the range of taxa for which techniques exist is considerably wider than field-outsiders are aware of… leading to difficult communication with insiders, who implicitly understand this variability and don’t get why the outsiders are being so narrow-minded.
While that would be an interesting test, being entirely harmless and neutral is how to flop, PUAs predict.
No, you misunderstood what I was saying. I meant that for the purposes of maintaining a valid control group, they be given instructions which neither help nor harm their chances, i.e. have a completely neutral effect on their innate “game” or lack thereof.
I appreciate the idea of this test; my point is that it might be hard to set up a group with instructions that have a completely neutral effect on their results. Maybe with a pilot study?
I also choose to use your post as a jumping off point for some rambling of my own.
So they would actually end up at a disadvantage relative to a person with natural confidence and no PUA training.
The problem is that then you’re not cleanly comparing methods any more. Remember: much of PUA is the result of modeling the beliefs and behaviors of “naturally confident” and socially-skillful people. The PUA claim is that these beliefs and behaviors can be taught and learned, not that they have invented something which is different from what people are already capable of doing.
So, if you take “a person with natural confidence”, how do you know they won’t be doing exactly what the PUA will?
By the way, please remember that the test I proposed was befriending and social climbing, not seducing women. The PUA trainer’s relevant experience is strategic manipulation of social groups—something that an individual PUA need not necessarily master in order to get laid. It is the field of strategic social manipulation that has the most relevance to applications outside dating and mating, anyway.
The problem is that then you’re not cleanly comparing methods any more.
I’m not sure I understand why you think so.
So, if you take “a person with natural confidence”, how do you know they won’t be doing exactly what the PUA will?
They might—that’s what I want to test. I’m proposing to take two randomly selected groups, with randomly varying amounts of natural confidence and “game”, and train one group with PUA techniques, the other with equally confidence-building yet counter-theoretical non-PUA techniques (which have been validated, perhaps via a pilot study, to have no effect one way or the other), and see which group improves faster. The test could be either picking up women, or any other non-pickup social game that PUA claims to help with. If it’s true that PUA is an accurate model of how people with natural game operate, then people in each group on the high end of the natural game spectrum should be relatively unchanged, but the geekier subjects should improve more in the PUA group than the control group.
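To make the comparison concrete, here is a minimal sketch of how the analysis could look. Everything in it is a made-up assumption for illustration: the 0–100 “game” scale, the group size, and the effect model are not claims about real effect sizes.

```python
# Hypothetical sketch of the proposed two-group design.  The scale, group
# size, and training effect are invented purely to illustrate the analysis.
import random
import statistics

random.seed(0)
N = 100  # participants per group (hypothetical)

def baseline_game():
    # innate "game" on an arbitrary 0-100 scale
    return min(100, max(0, random.gauss(50, 15)))

def improvement(baseline, training_effect):
    # assumption: subjects with less natural game have more room to improve,
    # so effective training helps the geekier subjects most;
    # training_effect = 0 models the neutral control instruction
    headroom = 100 - baseline
    return training_effect * 0.2 * headroom + random.gauss(0, 5)

pua_group = [improvement(baseline_game(), training_effect=1.0) for _ in range(N)]
ctl_group = [improvement(baseline_game(), training_effect=0.0) for _ in range(N)]

print("mean improvement, PUA-trained group:", round(statistics.mean(pua_group), 1))
print("mean improvement, control group:    ", round(statistics.mean(ctl_group), 1))
```

If the training were ineffective, the effect parameter would be zero for both groups and the two means should come out roughly equal; that is the outcome that would count against PUA under this protocol.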
Now of course this is all just hypothetical, since we don’t have the resources to actually run such a rigorous study. So my motivation in trying to negotiate a test protocol like this is really just that here on LW, we should all be in agreement that beliefs require evidence, and we should be able to agree on what that evidence should look like. Until we reach such an agreement, we’re not really having a rational debate.
So, do you think the above protocol would generate valid, update-worthy evidence? If not, why not?
I don’t understand this question. The two experimental groups get different training, and the ones in each group who actually follow the training are doing different things.
Actually, now that I think about it, I don’t understand why you think the two groups would be doing the same thing, even given your assumption that PUA is an accurate model. If PUA is accurate, then the people in the PUA trained group would end up behaving more like naturally socially successful people, and the control group would go on being geeky (or average, or whatever you select the groups to initially be), and hence the two groups’ results would diverge.
Maybe you need to re-read the experimental protocol I suggested.
I’m confused—I thought you wanted to match the PUAs against naturally confident people, which AFAICT wouldn’t be comparing anything.
What I was concerned about is the possibility that the group that was given neutral instruction might disregard the instruction and simply fall back to whatever they already do, which might be something successful.
(Thinking about it a bit more, I have a sneaking suspicion that giving people almost any instruction (whether good, bad, or neutral) may induce a temporary increase in self-consciousness, and a corresponding decrease in performance. But that’s another study altogether!)
I thought you wanted to match the PUAs against naturally confident people
No—initially I said to use geeky, socially unsuccessful subjects, but I later realized that a random sample, including all kinds of people, would work just as well.
What I was concerned about is the possibility that the group that was given neutral instruction might disregard the instruction and simply fall back to whatever they already do
Which wouldn’t be a problem, since they’re supposed to be the control group. Unless of course they lost their confidence boost in the process as well. But as long as they are at least initially convinced their training will be effective (see below), then it wouldn’t invalidate the experiment, since the same effect would apply to the PUA group as well, if PUA turns out to be ineffective.
I have a sneaking suspicion that giving people almost any instruction (whether good, bad, or neutral) may induce a temporary increase in self-consciousness, and a corresponding decrease in performance
Yes, that is a possibility I’d considered, which is why I said you may need to go so far as to fake some tests, undergrad psych experiment style, using actors, to actually convince everyone their newly acquired skills are working.
Because if the two groups are doing the same things, what is it that you’re testing?
THAT’S what we’re testing: whether the two groups are doing the same thing! Your assumption that they are is based on the belief that PUA trains people to do the same things that socially successful people do naturally, which is based on the assumption that PUA theory is an accurate model of human social interactions… which is the hypothesis that we’re trying to test with this experiment.
the assumption that PUA theory is an accurate model of human social interactions
“PUA theory” is not a single thing. The PUA field contains numerous models of human social interactions, with varying scopes of applicability. For example, high-level theories would include Mystery’s M3 model of the phases of human courtship, and Mehow’s “microloop theory” of value/compliance transactions.
And then, there are straightforward minor models like, “people will be less defensive about engaging with you if they don’t think they’ll be stuck with you”—a rather uncontroversial principle that leads “indirect game” PUAs to “body rock” and give FTCs (“false time constraint”—creating the impression that you will need to leave soon) when approaching groups of people.
This particular idea is applicable to more situations than just that, of course—a couple of decades ago, when I was in a software company’s booth at some trade shows, we strategically arranged both our booth furniture and our positions within the booth to convey the impression that a person walking in would have equal ease in walking back out, without being pounced on by a lurking salesperson and backed into a corner. And Joel Spolsky (of Joel On Software fame) has pointed out that people don’t like to put their data into places they’re afraid they won’t be able to get it back out of.
Anyway… “PUA Theory” is way too broad, which is why I proposed narrowing the proposed area of testing to “rapidly manipulating social groups to form alliances and accomplish objectively observable goals”. Still pretty broad, and limited to testing the social models of indirect-game schools, but easiest to accomplish in a relatively ethical manner.
OTOH, if you wanted to test certain “inner game” theories (like the “AMP holarchy”), you could probably create a much simpler experiment, having guys just go up and introduce themselves to a wide variety of women, and then have the women complete questionnaires about the men they met, rating them on various perceived qualities such as trustworthiness, masculinity, overall attractiveness, how much of a connection they felt, etc..
(The AMP model effectively claims that they can substantially improve a man’s ratings on qualities like these. And since they do this by using actual women to give the ratings, this seems at least somewhat plausible. The main question being asked by such a test would be, how universal are those ratings? Which actually would be an interesting question in its own right...)
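As a rough illustration only, the questionnaire data from such a test could be summarized by comparing mean ratings before and after training, with the spread across raters speaking to how universal the ratings are. The qualities, scale, and numbers below are invented for the sake of the sketch.

```python
# Invented example data: 1-10 ratings from different women, before and after
# a hypothetical training program.  Means show the claimed improvement;
# the spread across raters hints at how "universal" the ratings are.
import statistics

ratings_before = {"trustworthiness": [5, 6, 4, 5, 7], "connection": [3, 4, 2, 5, 3]}
ratings_after  = {"trustworthiness": [6, 7, 6, 5, 8], "connection": [6, 5, 7, 4, 6]}

for quality in ratings_before:
    before, after = ratings_before[quality], ratings_after[quality]
    print(f"{quality}: mean {statistics.mean(before):.1f} -> {statistics.mean(after):.1f}, "
          f"spread after = {statistics.stdev(after):.2f}")
```

None of this says anything about whether the AMP claim is true; it only shows what the questionnaire comparison would look like.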
Assuming for the moment that it’s true that a skilled PUA trainer would beat an untrained person at this test, how much of that effect do you think is attributable to simply being more confident vs actually having a more accurate model of human social behavior?
In PUA circles, this question has been addressed very extensively, both theoretically and practically. There is in fact a whole subfield of study there, called “inner game,” which deals with the issues of confidence and self-image. The answer is that yes, unsurprisingly, confidence matters a great deal, but its relative importance in individual PUAs’ techniques varies, and it doesn’t explain everything about their success, not by a long shot.
Generally, regardless of your overall opinion of the people in the PUA scene, and for all their flaws, you definitely underestimate the breadth, intensity, and thoroughness of the debates that take place there. There are of course lots of snake oil salesmen around, but when it comes to the informal, non-commercial discourse in the community at all levels, these folks really are serious about weeding out bullshit and distilling stuff that works.
To be fair, I can’t blame people first encountering this subject for having an initial negative reaction. They don’t know the breadth of what goes on, and that it would take a college course’s worth of knowledge to even begin to have an idea of what it’s really about. What interests me is whether they update when exposed to new evidence.
The problem is not only that the topic runs afoul of moralistic biases, but also that it triggers failure in high-quality anti-bullshit heuristics commonly used by math/tech/science-savvy people. When you first hear about it, it’s exactly the kind of thing that will set off a well-calibrated bullshit detector. It promises impossible-seeming results that sound tailored to appeal to naive wishful thinking, and stories about its success sound like they just must be explicable by selection effects, self-delusions, false boasting, etc. So I definitely don’t blame people for excessive skepticism either.
A personal anecdote: I remember when I first came across ASF long ago, when I was around 20. I quickly dismissed it as bullshit, and it didn’t catch my attention again until several years later. In retrospect, this miscalculation should probably be one of my major regrets in life, and not just for failures with women that could have been prevented; it would likely have opened my perspective on many other issues too, as it in fact did the next time around.
The problem is not only that the topic runs afoul of moralistic biases, but also that it triggers failure in high-quality anti-bullshit heuristics commonly used by math/tech/science-savvy people. When you first hear about it, it’s exactly the kind of thing that will set off a well-calibrated bullshit detector
Very true. To me (and my bullshit detector), it sounds strikingly similar to any number of other self-help programs offered through the ages. In fact, it sounds to me a lot like Scientology—or at least the elevator pitch version that they give to lower level people before they start introducing them to the really strange stuff. And the endorsement you give it in your second paragraph sounds a lot like the way adherents to these kinds of absolutely-for-legal-reasons-definitely-not-a-cults will breathlessly talk about them to outsiders.
Now of course I realize that superficial similarity to snake oil doesn’t actually count as valid evidence. But I do think it’s fair to put PUA into the same reference class with them, and base my priors on that. Would you not agree?
Now of course I realize that superficial similarity to snake oil doesn’t actually count as valid evidence. But I do think it’s fair to put PUA into the same reference class with them, and base my priors on that. Would you not agree?
If you see PUA-like techniques being marketed without any additional knowledge about the matter, then yes, your snake oil/bullshit detector should hit the red end of the scale, and stay that way until some very strong evidence is presented otherwise. Thing is, when it comes to a certain subset of such techniques that pjeby, HughRistik, me, and various others have been discussing, there is actually such strong evidence. You just have to delve into the matter without any fatally blinding biases and see it.
That’s pretty much the point I’ve been hammering on. The problem is not that your prior is low, which it should be. The problem is that an accurate estimate of posteriors is obscured by very severe biases that push them downward.
What evidence? PUAs may use a lot of trial and error in developing their techniques, but do their tests count as valid experimental evidence, or just anecdotes? Where are their control groups? What is their null hypothesis? Was subject selection randomized? Were the data gathered and analyzed by independent parties?
Would you accept this kind of evidence if we were talking about physics? Would you accept this kind of evidence if we were evaluating someone who claimed to have psychic powers?
One of the reasons this topic is of interest to rationalists is that it is an example of an area where rational evidence is available but scientific evidence is in short supply. It is not in general rational to postpone judgment until scientific evidence is available. Learning how to make maximal use of rational evidence without succumbing to the pitfalls of cognitive biases is a topic of much interest to many LWers.
Yes, that’s true. I’ve been phrasing my more recent comments in terms of scientific evidence, because several people I’ve been butting heads with have made assertions about PUA that seemed to imply it had a scientific-level base of supporting evidence.
I’m still not sure though what the rational evidence is that I’m supposed to be updating on. Numerous other self improvement programs make similar claims, based on similar reasoning, and offer similar anecdotal evidence. So I consider such evidence to be equally likely to appear regardless of whether PUA’s claims are true or false, leaving me with nothing but my priors.
What evidence? PUAs may use a lot of trial and error in developing their techniques, but do their tests count as valid experimental evidence, or just anecdotes? Where are their control groups? What is their null hypothesis? Was subject selection randomized? Were the data gathered and analyzed by independent parties?
Well, as I said, if you study the discourse in the PUA community at its best in a non-biased and detached way, desensitized to the language and attitudes you might find instinctively off-putting, you’ll actually find the epistemological standards surprisingly high. But you just have to see that for yourself.
A good comparison for the PUA milieu would be a high-quality community of hobbyist amateurs who engage in some technical work with passion and enthusiasm. In their discussions, they probably won’t apply the same formal standards of discourse and evidence that are used in academic research and corporate R&D, but it’s nevertheless likely that they know what they’re talking about and their body of established knowledge is as reliable as any other—and even though there are no formal qualifications for joining, those bringing bullshit rather than insight will soon be identified and ostracized.
Now, if you don’t know at first sight whether you’re dealing with such an epistemologically healthy community, the first test would be to see how its main body of established knowledge conforms to your own experiences and observations. (In a non-biased way, of course, which is harder when it comes to the PUA stuff than some ordinary technical skill.) In my case, and not just mine, the result was a definite pass. The further test is to observe the actual manner of discourse practiced and its epistemological quality. Again, it’s harder to do when biased reactions to various signals of disrespectability are standing in the way.
Would you accept this kind of evidence if we were talking about physics?
Even in physics, not all evidence comes from reproducible experiments. Sometimes you just have to make the best out of observations gathered at random opportune moments, for example when it comes to unusual astronomical or geophysical events.
Would you accept this kind of evidence if we were evaluating someone who claimed to have psychic powers?
You’re biasing your skepticism way upward now. The correct level of initial skepticism with which to meet the PUA stuff is the skepticism you apply to people claiming to have solved difficult problems in a way consistent with the existing well-established scientific knowledge—not the much higher level appropriate for those whose claims contradict it.
The correct level of initial skepticism with which to meet the PUA stuff is the skepticism you apply to people claiming to have solved difficult problems in a way consistent with the existing well-established scientific knowledge—not the much higher level appropriate for those whose claims contradict it.
That’s a good point—the priors for PUA, though low, are nowhere near as low as for psychic phenomena. But that just means that you need a smaller amount of evidence to overcome those priors—it doesn’t lower the bar for what qualifies as valid evidence.
I think part of my problem is there is no easy way to signal you are a white hat PUA rather than a black hat. If I am interested in honest and long term relationships, I don’t want to be signalling that I have the potential to be manipulative. Especially as the name PUA implies that you are interested in picking up girls in general rather than one lady in particular.
This also applies somewhat to non-sexual relations. If someone studies human interaction to a significant degree, how do I know that they will only use their powers for good? Say in an intellectual field or political for that matter. I’m sure the knowledge is useful for spin doctors and people coaching political leaders in debates.
This comment, in itself, is probably signalling an overly reflective mind on the nature of signalling though.
I think part of my problem is there is no easy way to signal you are a white hat PUA rather than a black hat. If I am interested in honest and long term relationships, I don’t want to be signalling that I have the potential to be manipulative.
That’s unfortunately a problem that women face with men in general, PUA or no PUA. Why do you think the signaling games naturally played by men are any different? The difference is ultimately like that between a musical prodigy who learned to play the piano spontaneously as a kid, and a player with a similar level of skill who, however, was tone-deaf and learned it only much later with lots of painstaking practice. But they’re still playing the same notes.
There is absolutely nothing in the whole PUA arsenal that wouldn’t ultimately represent reverse-engineering of techniques spontaneously applied by various types of natural ladies’ men. There is no extra “manipulation” of any sort added on top of that. Even the most callous, sly, and dishonest PUA techniques ever proposed are essentially the same behavior as that practiced by certain types of naturally occurring dark personality types of men that women often, much as they loathe to admit it, find themselves wildly attracted to. (Google “dark triad,” or see the paper I linked in one of my other comments.)
Especially as the name PUA implies that you are interested in picking up girls in general rather than one lady in particular.
It’s a name that stuck from the old days, which isn’t representative of the whole area any more (and in fact never fully was). The more modern term is “game.”
In the marginal Roissysphere, maybe. I’ve seen many attempts to get away from words like “pickup” or “seduction,” though I haven’t seen any consensus on an alternative. The problem is that our culture simply has no value-neutral or positive terms for, uh, how do I put it… systematically investigating how people induce each other to want sex and relationships, and how one can practically make use of that knowledge oneself.
(It took me about four tries to write the part in italics after thinking about this subject for years, and it’s still really clunky. I could have said “understand the mating process and act on that understanding,” but that’s a bit too watered-down. My other best attempt was systematically investigating the process by which people create contexts that raise the chances of other people wanting to have sex and relationships with them, and how one can practically make use of this knowledge oneself. That phrasing is clunkier, but gets rid of the word “induce,” which a bunch of feminists once told me is “mechanical” and “objectifying.”)
“Game” has its own problems, of course. What I like about the term is that it implies that social interaction should be playful and fun. “Game” also highlights certain game-theoretic and competitive aspects of human interaction, but it might risk leading people to overstate those aspects. What I don’t like is the connotation that a game isn’t “serious” (e.g. “you think this is just a game, huh?”) and that PUAs (or critics of PUAs) may believe that “game” involves not taking other people’s feelings and interests seriously.
As I’m sure you know, some gurus (e.g. TylerDurden) have advocated viewing the process of learning pickup like learning a videogame. A similar frame is the “experiment frame,” where you think of yourself as a scientist engaging in social experiments. Such frames can be extremely valuable for beginners who need to protect themselves emotionally during the early stages of the learning process, when most of what they try isn’t going to work. Yet they are a form of emotionally distancing oneself from others; in a minority of people with existing problems, they could inhibit empathy, encourage antisocial behavior, or exacerbate feelings of alienation. In general though, I view the possible harm of such attitudes as mainly affecting the PUA.
I see these frames as training wheels which should soon be discarded once the need for such an emotionally defensive stance is gone. Most socially cool people don’t see other people as part of a video game they are playing, or as subjects in a science experiment they are running (though some Dark Triad naturals do… one favorite quote of mine from an intelligent and extremely badboy natural friend of mine who had no exposure to the seduction community: “I love causation… once you understand it, you can manipulate people”). I still engage in social experiments all the time, but when I go out, I no longer think “I’m gonna run some cool experiments tonight,” I think “I’m gonna hang out with some cool people tonight.”
I have the impression that “game” is used much more widely even as the primary general term, let alone when people talk about specific skill subsets and applications (“phone game,” “day game,” etc.). But I’m sure you’ve seen a much broader sample of all sorts of PUA-related stuff, so I’ll defer to your opinion.
That said, I see game primarily as a way of overcoming the biases and false beliefs held about male-female interactions in the contemporary culture. I would say that by historical standards, our culture is exceptionally bad in this regard. While the prevailing respectable views and popular wisdom on the matters of human pairing and sexual behavior have always been affected by biases in every culture that ever existed, my impression is that ours is exceptionally out of touch with reality when it comes to these issues. This is a special case of what I see as a much broader general trend—namely, that in contrast to hard sciences and technology, which have been making continuous and uninterrupted progress for centuries, in many areas of human interest that are not amenable to a no-nonsense hard-scientific way of filtering truth from bullshit, the dominant views have actually been drifting away from reality and into increasing biases and delusions for quite a while now.
To understand this, it is necessary to be able to completely decouple normative from factual parts in one’s beliefs about human sexual and pairing behaviors—a feat of unbiased thinking that is harder in this matter than almost any other. Once this has been done, however, a curious pattern emerges: modern people perceive the normative beliefs of old times and faraway cultures about pairing and sex as alien, strange, and repulsive, and conclude that this is because their factual beliefs were (or are) deluded and biased. Yet it seems to me that whatever one thinks about the normative part, the prevailing factual beliefs have, in many ways, become more remote from reality in modern times. (The only major exceptions are those that came from pure hard-scientific insight, like e.g. the details of women’s fertility cycle.) This of course also implies that while one can defend the modern norms on deontological grounds, the commonly believed consequentialist arguments in their favor are very seriously flawed.
The PUA insights are to a large degree about overcoming these relatively novel biases, and most PUA acolytes aren’t aware that lots of their newly gained taboo-breaking insight was in fact common knowledge not that long ago. When you look at men who have applied this insight to achieve old-fashioned pleasant monogamous harmony rather than for sarging, like that guy to whose marriage story I linked earlier, it’s impossible not to notice that it’s basically the same way our ancestors used to keep peace in the house.
I think part of my problem is there is no easy way to signal you are a white hat PUA rather than a black hat.
Actually, it’s fairly simple to signal whether you’re a white-hat or black-hat PUA trainer—all you need to do is write your marketing materials for the audience you want. White hats write things that will turn black hats off, and vice versa.
I.e., white hats will talk about direct game, inner game, honesty, respect, relating to women, “relationship game”, and so on. Black hats will talk about banging sluts and wrapping them around your finger with your persuasive and hypnotic powers, and how much of a chump they used to be before they wised up to the conspiracy keeping men down. (Sadly, I’m not exaggerating.)
On the bright side, though, if you’re definitely looking for one hat or the other, they’re not too hard to find.
Most PUA material is somewhere in between though… mostly white-ish hat, with a bit too much tolerance for using false stories and excuses in order to meet people (e.g. “I’m buying a gift for my sister and can I get your opinion on this blah blah”), even though they’re not endorsing continuing such pretenses past the time required to get into an actual conversation.
It certainly would be nice to be able to screen off the portion of PUA that involves even such minor dishonesty, and have a term that just applied to purely white-hat, deception-free strategies.
I don’t want to be signalling that I have the potential to be manipulative.
Yup. It doesn’t help that a lot of people in the seduction community are so crappy at PR and present their ideas in a socially unintelligent way that makes it sound much worse than it actually is.
I don’t have a solution to this problem, except to hope that people will judge me by the way that I treat them, not by the stereotypes triggered by the negative first impression of some of my knowledge sources.
This also applies somewhat to non-sexual relations. If someone studies human interaction to a significant degree, how do I know that they will only use their powers for good?
Again, I agree. I’ve been thinking about the ethics of social influence and persuasion for a while.
It doesn’t help that a lot of people in the seduction community are so crappy at PR and present their ideas in a socially unintelligent way
OK, this is, admittedly, a totally cheap shot, but… if PUA tactics are so effective, and so generally applicable to the broader world of social interactions beyond just picking up women, then how come they aren’t better at “seducing” people into buying into their way of thinking?
My hypothesis: because so much stuff in the seduction community is incorrectly sneered at even when neutrally explained, many PUAs stop bothering and revel in the political incorrectness of their private discourse. Hence you see terminology like “lair” for a seduction meetup group. Why bother with PR if you think you will be unfairly demonized either way? That’s not my perspective, but it’s a guess.
I would predict that if you took an experienced social-game theorist PUA trainer and threw him into a random physical social environment with a goal to make as many friends as possible, vs. an untrained male of similar geekiness (I’m assuming the social game theorist will be a geek, present or former) and similar unfamiliarity with the group or its rules/topics/etc., the PUA would kick the untrained person’s ass from here to Sunday.
What’s more, I would bet that you could repeat this experiment over and over, with different PUAs, and get the same results. And if the PUA in question is a good trainer, I’d bet they’d be able to take a modest-sized group of similarly geeky students and quickly train at least one student to beat an untrained person by a solid margin, and to get most of the students to improve on their previous, untrained results.
I don’t think that you should compare social-skills trainer geeks to average geeks. Of course the trainers will be much more charismatic. Otherwise they wouldn’t have elected to become trainers. But that doesn’t mean that the trainers’ specific theory has much to do with why they’re charismatic.
The relevant test would be this: Compare a successful PUA social-skills trainer to a successful non-PUA social-skills trainer. I’m sure that almost all social-skills trainers broadly agree on all sorts of principles. The question is, do PUAs in particular have access to better knowledge?
Furthermore, do the methods used by either trainer work on the typical person? Or do they work selectively on certain types of people? Of course, instrumentally, you can have good reasons for caring only about certain types of people. But, if you are making claims about the typical person, you should demonstrate that your models reflect the typical person.
ETA: There’s an analogy to dieting gurus. I’m sure that dieting gurus are better than the average person at losing weight. That is, if you forced dieting gurus to gain weight, they could probably lose the extra weight quicker than an average person of the same weight.
However, my understanding is that all the dieting theories out there perform pretty much equally well. There are probably some principles that most diets share and which are good advice. But, as I understand it, there is little evidence that any particular diet has struck upon the truth. Whatever it is that makes a given diet distinct doesn’t seem to contribute significantly to its success.
This is despite the fact that many diets have legions of followers who gather into communities to pore over their successes and failures in meticulous detail. The analogy with the PUA community seems pretty strong on that count, too.
The relevant test would be this: Compare a successful PUA social-skills trainer to a successful non-PUA social-skills trainer. I’m sure that almost all social-skills trainers broadly agree on all sorts of principles. The question is, do PUAs in particular have access to better knowledge?
I think the specific dimensions of performance on which PUA trainers would outscore general social skills trainers would be in short-term/immediate manipulation of social groups to achieve specified objective and tactical results.
General social skills trainers tend to focus on longer-term and “softer”, less-specific objectives, although this could vary quite a bit. They’re unlikely to have skills that would be useful at more Machiavellian objectives like, “get people in the group to compete with each other for your attention” or “make the group single out a person for ridicule”, or “get everyone in the room to think you’re a VIP who everyone else already knows”.
Granted, not every PUA trainer would have all those skills either, and that last one might be doable by some non-PUA trainers. But if you could come up with novel challenges within the scope of what a PUA social theory would predict to be doable, it would be a good test of that theory.
(Also, I predict that PUA theorists who agree to such a challenge as being within scope of their theory, will generally update their theory if it bombs. It’s an unusual PUA social theorist who hasn’t done a lot of updating and refinement already, so they are already selected for being open to experimentation, refinement, and objective criteria for success.)
I’m not sure about that… It’s actually a mathematical question, but the proper formalization escapes me at the moment. (Maybe someone could help?) At first glance, any value of hit rate can be equally well-explained by hidden characteristics or by simple randomness. Right now I believe you have to notice some visible characteristic that determines the success of your method before you can conclude that it’s not just randomness. But I can’t prove that with numbers yet.
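One way to make the indistinguishability concrete, as a toy sketch with entirely made-up numbers rather than a formalization: a 30% overall hit rate is equally consistent with “every woman says yes with probability 0.3” and with “30% of women always say yes, 70% never do.”

```python
# Toy sketch: single approaches to different women cannot tell a homogeneous
# model apart from a mixture model with the same overall hit rate; repeated
# approaches to the same woman (or an observable trait predicting her type) can.
import random

random.seed(0)
N = 100_000

def homogeneous_pair():
    # the same woman approached twice; each attempt succeeds with p = 0.3
    return random.random() < 0.3, random.random() < 0.3

def mixture_pair():
    # hidden type: 30% of women are receptive (always yes), the rest never are
    receptive = random.random() < 0.3
    return receptive, receptive

for name, pair in [("homogeneous", homogeneous_pair), ("mixture", mixture_pair)]:
    trials = [pair() for _ in range(N)]
    hit_rate = sum(a for a, _ in trials) / N
    both_rate = sum(a and b for a, b in trials) / N
    print(f"{name:12s} single-approach hit rate {hit_rate:.3f}, "
          f"both-approaches-succeed rate {both_rate:.3f}")
```

Both models give the same single-approach hit rate, so the aggregate rate alone can’t separate hidden characteristics from randomness; only data with more structure (repeat attempts, or a visible predictor of type) can.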
I should be a little clearer about the priors on which my claims are based.
What I am saying is that the observed level of PUA success is very likely on the hypothesis that the PUA description of the “typical woman” reflects only a small subset within a very heterogeneous population. If I furthermore take into account my prior that women are a heterogeneous population, the observed PUA success is not sufficient evidence that their description of the “typical woman” is accurate.
To be a little more precise:
Let
H = “Traits vary among women with a certain kind of distribution such that the population of women is heterogeneous. Moreover, insofar as there is a typical woman, the PUA description of her is not accurate.”
T = “The PUA description of the typical woman is accurate. That is, PUA methods can be expected to ‘work’ on the typical woman.”
S = “PUAs have the success that we have observed them to have.”
X = Prior knowledge
I grant that p(S | T & X) > p(S | H & X). That is, PUAs would be more likely to have their observed success if their model of the typical woman were accurate.
However, I think that p(S | H & X) is still fairly large. Furthermore, I think that p(H | X) is sufficiently larger than p(T | X) to imply that
p(H | S & X)
= [ p(H | X) / p(S | X) ] p(S | H & X)
> [ p(T | X) / p(S | X) ] p(S | T & X)
= p(T | S & X).
That is, the PUA model of the typical woman is probably inaccurate.
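Plugging in purely illustrative numbers (assumptions for the sake of the arithmetic, not estimates), and treating H and T as exhaustive for simplicity:

```python
# Made-up illustrative probabilities; H and T treated as exhaustive.
p_H = 0.8          # prior for H: heterogeneous population, PUA picture inaccurate
p_T = 0.2          # prior for T: PUA picture of the typical woman is accurate
p_S_given_H = 0.3  # observed PUA success is still fairly likely under H
p_S_given_T = 0.6  # ...though more likely under T

p_S = p_H * p_S_given_H + p_T * p_S_given_T   # 0.36
p_H_given_S = p_H * p_S_given_H / p_S          # ~0.67
p_T_given_S = p_T * p_S_given_T / p_S          # ~0.33

print(round(p_H_given_S, 2), ">", round(p_T_given_S, 2))
```

With these made-up numbers the posterior still favors H even though S is twice as likely under T, which is all the inequality above needs.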
No, it’s localizing the source of disagreement :P.
You brought the evidence of pickup artist success to the table. I’m telling you something about the priors that were already on the table. (Here, the table’s contents are my beliefs about the world.) In particular, I’m saying something about why your new evidence isn’t enough to change what I think is probably true.
It’s too difficult to give you exact values for all of the relevant probabilities. But this is a start. For example, now you know that I already grant that p(S | T & X) > p(S | H & X), so you could try to increase my estimation of their difference. Or you could try to show me that p(H | X) doesn’t exceed p(T | X) by as much as I thought. That is, you could try to show me that, even without the evidence of PUA success, I shouldn’t have thought that women are so likely to be heterogeneous.
I don’t expect you to consider all of this work to be worth your time. But at least maybe you have a better sense of what it would take than you had before.
Damn, so this is how Aumann agreement works in the real world. You update! No, you update!
Even without knowing S, the hypothesis T comes with a nifty biological explanation—all those alphas and betas. Does H have anything like that? Why would it be genetically useful for different women to prefer highly different traits in men?
That link argues that each individual interbreeding population does have psychological unity, but there are differences between populations. So PUA techniques should work or fail depending on ethnicity. (Yeah! I win the Non-PC Award!) Is that what you believe?
That link argues that each individual interbreeding population does have psychological unity, but there are differences between populations.
I see an argument that different populations could have different means for certain quantifiable traits. I don’t see an argument that a single population will be homogeneous.
Moreover, the link claims that populations have diverged on these metrics in fairly short amounts of time. I think that that is evidence for a fair amount of diversity within populations to serve as the raw material for that divergence.
So PUA techniques should work or fail depending on ethnicity. (Yeah! I win the Non-PC Award!) Is that what you believe?
I should clarify that I’m not convinced by the link’s claim that populations differ on those metrics for genetic reasons. But I certainly allow that it’s possible. It’s not ruled out by what we know about biology. I presented the link only as evidence that the case for psychological unity is not a slam-dunk.
That is, you could try to show me that, even without the evidence of PUA success, I shouldn’t have thought that women are so likely to be heterogeneous.
For characteristics that we share with other primates, what would be your evidence that we would not be so heterogeneous in our inner workings?
Yes, people are pretty varied in their cultural trappings and acquired values (i.e. choices of signal expression), but we have a ridiculous amount in common in the mental/emotional machinery by which we acquire that acculturation.
For characteristics that we share with other primates, what would be your evidence that we would not be so heterogeneous in our inner workings?
Did you mean, what would be my evidence that we would be so heterogeneous?
Assuming that you did, it’s not clear to me that we share the relevant characteristics with the other primates at the relevant level of abstraction. It’s not known to me that a female chimpanzee would react well to a male she’d never met before putting his arm around her waist.
My understanding is that mating practices vary pretty widely among the primates. They have greater and lesser sexual dimorphism. They are more or less inclined to have harem-type arrangements.
Did you mean, what would be my evidence that we would be so heterogeneous?
Oops, I temporarily confused homogeneous and heterogeneous, actually. ;-)
Assuming that you did, it’s not clear to me that we share the relevant characteristics with the other primates at the relevant level of abstraction.
Based on your examples, I’d say that where we disagree is on what the correct level of abstraction is. I would expect “arm around the waist” to vary in attractiveness by culture, but the attractiveness of “comfortable initiating touch” to vary a good bit less.
Based on your examples, I’d say that where we disagree is on what the correct level of abstraction is.
Yes, I think that’s right. I too would expect most women to like men who evince confidence, and who act as though they’re used to being liked rather than disliked.
But it’s less clear to me that initiating touch conveys that attitude without giving 49 out of 50 women the impression that you have other undesirable qualities.
For example, perhaps, by rushing to touch, you give the impression that you are in a hurry to be physically intimate as quickly as possible. She might infer that you lack the confidence or security to pursue courtship at a leisurely pace. Perhaps you are some zero-status interloper who’s trying to get in and out as fast as you can before the local alpha male catches you. And, given the level of inter-tribe violence in the EEA, she might be leery of interlopers. Maybe they present too high a threat of violence or rape to her personally, especially if they seem eager to get intimate quickly.
You’re not imagining the same thing as pjeby when you think of “comfortable initiating touch”. If you appear to be rushing/eager, you’re not appearing comfortable and, as you’ve predicted, will appear less attractive.
I’m considering the possibility that initiating touch a few minutes after meeting a woman for the first time, in and of itself, could convey that you are in a hurry.
If there is such a thing as a “local alpha male”, he certainly wouldn’t “pursue courtship at a leisurely pace”.
I’m not convinced of that. The local alpha male might have so many irons in the fire that no one woman should expect to see him in a particular rush to court her.
But it doesn’t really matter what the local alpha male would be expected to do. The local alpha male in the EEA ought to be well known, not a stranger. It doesn’t seem plausible to me that you could fool someone into thinking that you’re him just by initiating some touch. As I understand it, strangers in the EEA were so dangerous that a woman would be very leery about admitting a stranger into her personal space.
Here’s another point: As you know, there’s a whole line of theory in PUA circles about feigning disinterest, so that the woman thinks that you must have higher market value than her. Part of my argument is appealing to that line of thinking. Touching shortly after meeting may imply that you are too eager to be intimate with her.
Let me make a few meta remarks about what I’m arguing and how I’ve argued it.
The above account may not be what is going on with women who profess that they don’t like to be touched by strangers. What I’m trying to do is to make it plausible that the PUA-constructed “typical woman” is not typical by (1) showing that PUA success does not prove that their models of women are generally accurate, and (2) showing that even PUA theory itself has room for women who don’t like to be touched, for the above reasons. Argument (2) is just to open up a “line of retreat” by making the existence of such women seem plausible to a PUA proponent. I’m making the additional claim that such women may in fact be much more common than the PUA view, as I understand it, would allow.
The upshot is that PUAs mistakenly think that their success implies that the woman with whom they succeed are typical.
You haven’t really given me any reason to update towards your point of view.
I grant that. Aside from the Aumann-type evidence that I hold my point of view, I’ve given you little else.
However, my position is closer to the null hypothesis, the extreme version of which would posit that women correlate no more with each other than is implied by the definition of “woman”. Unless I misunderstand you, you are asserting that they tend to conform to a certain model of the typical woman espoused by PUAs. Since my view is closer to the null hypothesis, you should be the one presenting evidence for your position. My obligation is just to say what I can about what evidence would convince me.
Counterpoint: whether it’s due to hidden variables, or simple randomness, in either case, what general principle are you able to extract from the example which can be usefully applied to topics other than male/female mating interactions?
I know many NT, “ladies man” types who are perfectly moral, ethical, upstanding people in just about every other way imaginable, but who have no problem lying to women to get in their pants. I find this a bit distasteful, but I don’t object to it, I just recognize that this is how the world works.
Do you think the costs to women are negligible in a utilitarian sense, or just not of interest to you?
See the problem of Friendly AI; that is, if humans are going to make a powerful AI, we should make sure it doesn’t do something to wreck our shit, like turn the whole universe into paperclips or some other crazy thing—i.e. it should be Friendly.
RichardKennaway was putting a jokey spin on the idea by suggesting that we solve the problem of designing Friendly Human Intelligence, by analogy to the problem of designing Friendly Artificial Intelligence. (Edited last sentence for accuracy.)
Exactly. Well, not instead of FAI, but FHI is an important problem, as old as humanity: how to bring up your kids right and stop them wrecking the place.
Have you never encountered this attitude amongst religious people over atheism? The idea that atheism is an inherently dangerous idea, that merely engaging with it risks infection. That atheism might be a kind of aqua regia for morality, capable of dissolving all that is good and right in the world into some kind of nihilistic nightmare.
Rationalism, which leads to atheism, is just such an aqua regia. Contact with it can destroy any and all of one’s beliefs. The result is not necessarily an improvement:
Even (or perhaps especially) those who think atheism might be true see it as potentially dangerous, that gazing into the abyss may permanently damage the seeker’s moral core.
I agree that in principle it’s possible that someone will do worse (or become more harmful to others) by becoming more rational. But do you take it to be likely?
Perfect rationality is, by definition, perfect and can never go wrong, for if it went wrong, it would not be perfect rationality. But none of us is perfect. When an imperfect person comes into contact with this ultimate solvent of unexamined beliefs, the ways they could go wrong outnumber the ways they could go right.
“There is no such thing as morality, therefore I can lie and steal all I like and you’re a chump if you don’t!” “There is no afterlife, therefore all is meaningless and I should just cut my throat now! Or yours! It doesn’t matter!” “Everything that people say is self-serving lies and if you say you don’t agree you’re just another self-serving liar!” “At last, I see the truth, while everyone else is just another slave of the Matrix!”
That last is a hazard on any path to enlightenment, rationalistic or otherwise. Belief in one’s own enlightenment—even an accurate one—provides a fully general counterargument to anyone else: they’re not as enlightened.
ETA: Those aren’t actual quotations, but I’m not making them up out of thin air. On the first, compare pjeby’s recent description of black-hat PUAs. On the second, a while back (but I can’t find the actual messages) someone here was arguing that unless he could live forever, nothing could matter to him. On the third, black-hat PUAs and people seeing status games at the root of all interaction are going that way. On the last, as I said above, this is a well-known hazard on many paths. There’s even an xkcd on the subject.
Perfect rationality can still go wrong. Consider for example a perfectly rational player playing the Monty Hall game. The rational thing to do is to switch doors. But that can still turn out to be wrong. A perfectly rational individual can still be wrong.
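A quick simulation of that point, using the standard three-door setup (nothing here is specific to the discussion; it just checks the familiar 2/3 figure):

```python
# Monty Hall simulation: switching wins about 2/3 of the time, yet any
# single switch can still land on the wrong door.
import random

random.seed(0)
N = 100_000
switch_wins = 0

for _ in range(N):
    car = random.randrange(3)
    pick = random.randrange(3)
    # the host opens a door that is neither the player's pick nor the car
    opened = next(d for d in range(3) if d != pick and d != car)
    switched = next(d for d in range(3) if d != pick and d != opened)
    switch_wins += switched == car

print("win rate when switching:", switch_wins / N)  # ~0.667
```

Switching is the rational policy, and it still loses roughly a third of the time.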
I hope that my reply does not in any way discourage Richard Kennaway’s reply. I am curious about different responses. But mine: rationalism intends to find better ways to satisfy values, but finds in the process that values are negated, or that it would be more rational to modify values.
Some time ago, I had grand hopes that as a human being embedded in reality, I could just look around and think about things and with some steady effort I might find a world view—at least an epistemology—that would bring everything together, or that I could be involved in a process of bringing things together. Kind of the way religion would do, if it was believable and not a bunch of nonsense. However, the continued application of thought and reason to life just seems to negate the value of life.
Intellectually, I’m in a place where life presents as meaningless. While I can’t “go back” to religious thinking—in fact, I suspect I was never actually there, I’ve only ever been looking for a comprehensive paradigm—I think religions have the right idea; they are wise to the fact that intellectualism/objectivity is not the way to go when it comes to experiencing “cosmic meaning”.
Many people never think about the double think that is required in religion. But I suspect many more people have thought about things both ways … a lifetime is a long time, with space for lots of thoughts … and found that “intellectualism” requires double think as well (compartmentalization) but in a way that is immensely less satisfying. In the latter, you intellectually know that “nothing matters” but that you are powerless to experience and apply this viscerally due to biology. Viscerally, you continue to seek comfort and avoid pain, while your intellect tells you there’s no purpose to your movements.
A shorter way of saying all of this: Being rational is supposed to help humans pursue their values. But it’s pretty obvious that having faith is something that humans value.
Although this comment is already long, it seems a concrete example is needed. Culturally, it appears that singularitarians value information (curiosity) and life (immortality). Suppose immortality was granted: we upload our brains to something replicable and durable so that we can persist forever without any concerns. What in the world would we be motivated to do? What would be the value of information? So what if the digits of pi stretch endlessly ahead of me?
Some time ago, I had grand hopes that as a human being embedded in reality, I could just look around and think about things and with some steady effort I might find a world view—at least an epistemology—that would bring everything together, or that I could be involved in a process of bringing things together. Kind of the way religion would do, if it was believable and not a bunch of nonsense. However, the continued application of thought and reason to life just seems to negate the value of life.
I think the “mental muscles” model I use is helpful here. We have different ways of thinking that are useful for different things—mental muscles, if you will.
But, the muscles used in critical thinking are, well, critical. They involve finding counterexamples and things that are wrong. While this is useful in certain contexts, it has negative side effects on one’s direct quality of life, just as using one physical muscle to the exclusion of all others would create problems.
Some of the mental muscles used by religion, OTOH, are appreciation, gratitude, acceptance, awe, compassion… all of which have more positive direct effects on quality of life.
In short, even though reason has applications that indirectly lead to improved circumstances of living, its overuse is directly detrimental to the quality of experience that occurs in that life. And while exclusive use of certain mental muscles used in religion can indirectly lead to worsened circumstances of living, they nonetheless contribute directly to an improved quality of experience.
I’ve pretty much always felt that the problem with LessWrong is that it consists of an effort by people who are already overusing their critical faculties, seeking to improve their quality of experience, by employing those faculties even more.
In your case, the search for a comprehensive world view is an example of this: i.e., believing that if your critical faculty was satisfied, then you would be happy. Instead, you’ve discovered that using the critical faculty simply produces more of the same dissatisfaction that using the critical faculty always produces. In a very real sense, the emotion of dissatisfaction is the critical faculty.
In fact, I got the idea of mental muscles from Minsky’s book The Emotion Machine, wherein he proposes mental “resources” organized into larger activation patterns by emotion. That is, he proposes that emotions are actually modes of thought, that determine which resources (muscles) are activated or suppressed in relation to the topic. Or in other words, he proposes that emotions are a form of functional metacognition.
(While Minsky calls the individual units “resources”, I prefer the term “muscles”, because as with physical muscles they can be developed with training, some are more appropriate for some tasks than others, etc. So it’s more vivid and suggestive when training to either engage or “relax” specific “muscle groups”.)
Anyway… tl;dr version: emotions and thinking faculties are linked, so how you think is how you feel and vice versa, and your choice of which ones to use has non-trivial and inescapable side-effects on your quality of life. Choose wisely. ;-)
I’ve always suspected that introspection was tied to negative emotions. It’s more of a tool to help figure out solutions to problems rather than a happy state like ‘being in flow’. People can get addicted to introspection because it feels productive, but remains depressing if no positive action is taken from it.
Do you think this is related to the mental muscles model?
I agree and this is insightful: thinking in certain ways results in specific, predictable emotions. The way I feel about reality is the result of the state of my mind, which is a choice. However, exercising the other set of muscles does not seem to be epistemically neutral. They generate thoughts that my critical faculty would be … critical of.
Some of the mental muscles used by religion, OTOH, are appreciation, gratitude, acceptance, awe, compassion… all of which have more positive direct effects on quality of life.
For me, many of these muscles seem to require some extent of magical thinking. They generate a belief in a presence that is taking care of me or at least a feeling for the interconnectedness and self-organization of reality. Is this dependency unusual? Am I mistaken about the dependence?
Consider a concrete example: enjoying the sunshine. Enjoyment seems neutral. However, if I want to feel grateful, it seems I feel grateful towards something. I can personify the sun itself, or reality. It seems silly to personify the sun, but I find it quite natural to personify reality. I currently repress personifying reality with my critical muscles; after a while, I suspect it would also come to feel silly.
I’m not sure what I mean by ‘personify’, but while false (or silly) it also seems harmless. Being grateful for the sun never caused me to make—say—a biased prediction about future experience with the sun. But while I’ve argued a few times here that one should be “allowed” false beliefs if they increase quality of life without penalty, I find that I am currently in a mode of preferring “rational” emotions over allowing impressions that would feel silly.
Nope. The idea that your brain’s entire contents need to be self-consistent is just the opinion of the part of you that finds inconsistencies and insists they’re bad. Of course they are… to that part of your brain.
I teach people these questions for noticing and redirecting mental muscles:
What am I paying attention to? (e.g. inconsistencies)
Is that useful? (yes, if you’re debugging a program, doing an engineering task, etc. -- no if you’re socializing or doing something fun)
What would it be useful for me to pay attention to?
Consider a concrete example: enjoying the sunshine. Enjoyment seems neutral. However, if I want to feel grateful, it seems I feel grateful towards something. I can personify the sun itself, or reality. It seems silly to personify the sun, but I find it quite natural to personify reality.
Is that really necessary? I have not personally observed that gratitude must be towards something in particular, or that it needs to be personified. One can be grateful in the abstract—thank luck or probability or the Tegmark level IV multiverse if you must. Or “thank Bayes!”. ;-)
For me, many of these muscles seem to require some extent of magical thinking. They generate a belief in a presence that is taking care of me or at least a feeling for the interconnectedness and self-organization of reality. Is this dependency unusual? Am I mistaken about the dependence?
Sure, there’s a link. I think that Einstein’s question about whether the universe is a friendly place is related. I also think that this is the one place where an emphasis on epistemic truth and decompartmentalization is potentially a serious threat to one’s long-term quality of life.
I think that our brains and bodies more or less have an inner setting for “how friendly/hostile is my environment”—and believing that it’s friendly has enormous positive impact, which is why religious people who believe in a personally caring deity score so high on various quality of life measures, including recovery from illness.
So, this is one place where you need to choose carefully about which truths you’re going to pay attention to, and worry much more about whether you’re going to let too much critical faculty leak over into your basic satisfaction with and enjoyment of life.
Much more than you should worry about whether your uncritical enjoyment is going to leak over and ruin your critical thinking.
Trust me, if you’re worrying about that, then it’s a pretty good sign that the reverse is the problem. (i.e., your critical faculty already has too much of an upper hand!)
This is one reason I say here that I’m an instrumentalist: it’s more important for me to believe things that are useful, than things that are true. And I can (now, after quite a lot of practice) switch off my critical faculties enough to learn useful things from people who have ridiculously-untrue theories about how they work.
For example, “law of attraction” people believe all sorts of stupidly false things… that are nonetheless very useful to believe, or at least to act as if they were true. But I avoid epistemic conflict by viewing such theories as mnemonic fuel for intuition pumps, rather than as epistemically truthful things.
In fact, I pretty much assume everything is just a mnemonic/intuition pump, even the things that are currently considered epistemically “true”. If you’ll notice, over the long term such “truths” of one era get revised to be “less wrong”, even though the previous model usually worked just fine for whatever it was being used for, up to a certain point. (e.g. Newtonian physics)
(Sadly, as models become “less wrong”, they have a corresponding tendency to be less and less useful as mnemonics or intuition pumps, and require outside tools or increased conscious cognition to become useful. (e.g. Einsteinian physics and quantum mechanics.))
Without really being able to make a case that I have successfully done so, I believe it’s possible to improve my life by thinking accurately and making wise choices. It’s hard to think clearly about areas of painful failure, and it’s hard to motivate myself to search for invalidating experiences, rather than self-protectively circumscribing my efforts, but on the other hand I love the feeling of facing and knowing reality.
I think if you look at the original source for that phrase it reflects the double-edged sword concerns raised by this comment:
Fiat justitia ruat caelum is a Latin legal phrase, meaning “May justice be done though the heavens fall.” The maxim signifies the belief that justice must be realized regardless of consequences.
...
In De Ira (On Anger), Book I, Chapter XVIII, Seneca tells of Gnaeus Piso, a Roman governor and lawmaker, when he was angry, ordering the execution of a soldier who had returned from leave of absence without his comrade, on the ground that if the man did not produce his companion, he had killed him. As the condemned man was presenting his neck to the executioner’s sword, there suddenly appeared the very comrade who was supposed to have been murdered. The centurion in charge of the execution halted the proceedings and led the condemned man back to Piso, expecting a reprieve. But Piso mounted the tribunal in a rage, and ordered three soldiers to be led to execution. He ordered the death of the man who was to have been executed, because the sentence had already been passed; he also ordered the death of the centurion who was charged with the original execution, for failing to perform his duty; finally, he ordered the death of the man who had been supposed to have been murdered, because he had been the cause of death of two innocent men.
In subsequent retellings of this legend, this principle became known as “Piso’s justice”: sentences handed down or carried out with retaliatory intent that are technically correct but morally wrong, which can be read as a negative interpretation of the meaning of Fiat justitia ruat caelum.
…discuss the topic openly only with mystical reverence and unrealistic idealizations. Realistic open discussions are perceived as offensive and sacrilegious. It’s an enormous bias.
I think perhaps discussion of the topic is also seen as low status. And you giving advice to us is implying we are low status.
Because a high status confident man would just expect the world to conform to them because of their manifest qualities, rather than trying to adapt to the world.
This particular domain of human behavior is so ridiculously irrational that I don’t think it serves as a good model for ordinary, everyday human irrationality
Well, even if Geoffrey Miller’s theories are overshooting it a bit, the role of sexual selection in the evolution of the human mind should not be underestimated. Rather than being some isolated dark corner of irrationality that can be safely corralled and ignored, it seems to me that various inclinations and biases related to mating behaviors, whether directly or indirectly, are very much all-pervasive in the workings of human minds. Therefore, careful dissection of these behaviors can reveal a lot about human nature that is applicable more widely.
No one wants to take the rules or methods for playing status games or encouraging sexual attraction and generalize from them lessons for how to be rational. What people want to do is (a) apply rationality techniques to this field to better understand how it works and (b) take the techniques people used to learn about this field, specify them, and see if they are applicable more generally.
Women who shit test are typically quite secure, not insecure. You seem to be in a muddle surrounding the subject. That is not to say that I condone everything that everyone on the internet ever says about dating and psychology—but the example quoted is a clear case—passing a shit test is not doing a bad thing. If anything, the person who uses such a test is in more questionable territory, as they are probing you for insecurity.
Edit: Practical advice that is appropriate for the majority of people on this site is fine, it doesn’t create the noise of confusion and boredom. Akrasia being a good example of appropriate practical advice. As is advice about sleeping, eating, teaching, communicating ideas.
Look at it from the flip side. Should we do makeup tips for nerdy girls?
Sure, why not? If a nerdy girl feels she has learned something about rationality from exploring makeup techniques, I would absolutely be interested to hear about it on LessWrong. If other people don’t care about makeup, they don’t have to read her posts.
Truth is entangled, and who gets to mate with whom is one of the biggest truths in human social interaction—because mating behavior is very strongly selected by evolution. If you close your mind to the truths about human mating behavior, you’ll mess up your entire map of human social interaction.
If we are going to develop rationality to the point where we see an increase in uptake of rational thinking by millions of people, we can’t just ignore massively important parts of real-world human behavior.
I have a question, since you seem to know a lot about human sociality. What exactly is wrong with handling the dilemmas you describe by saying to the other humans, “I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence on regarding whether I like you right now so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable ‘liking you’ region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.”?
Saying this explicitly is extremely weak evidence of it being true. In fact, because it sounds pre-prepared, comprehensive and calculated most humans won’t believe you. Human courtship rituals are basically ways of signaling all of this but are much harder to fake.
When human females ask “Will you buy me a drink?” they’re testing to see if the male does in fact “demand appropriate consideration”.
Also, relative status and genetic fitness are extremely important in human coupling decisions and your statement does not sufficiently cover those.
That’s a good point. Let me try a different one.
Let X be ‘I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies but I am somewhat inclined towards ambivalence on regarding whether I like you right now so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable ‘liking you’ region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.′
Then, instead of saying my previous suggestion, say something like, ‘I would precommit to acting in such a way that X if and only if you would precommit to acting in such a way that you could truthfully say, “X if and only if you would precommit to acting in such a way that you could truthfully say X.”’
(Edit: Note, if you haven’t already, that the above is just a special case of the decision theory, “I would adhere to rule system R if and only if (You would adhere to R if and only if I would adhere to R).” )
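As an aside, the symmetric condition can be checked mechanically. The following sketch is not part of the original exchange and the variable names are mine; it simply enumerates every joint behavior and keeps only those consistent with both parties stating the rule above:

```python
from itertools import product

def iff(a: bool, b: bool) -> bool:
    return a == b

# My stated policy:   I adhere to R  iff  (you adhere to R  iff  I adhere to R)
# Your stated policy: you adhere to R  iff  (I adhere to R  iff  you adhere to R)
# Enumerate all joint behaviors and keep those consistent with both statements.
for me, you in product([True, False], repeat=2):
    my_claim_holds = iff(me, iff(you, me))
    your_claim_holds = iff(you, iff(me, you))
    if my_claim_holds and your_claim_holds:
        print(f"consistent outcome: I adhere={me}, you adhere={you}")

# Only (True, True) prints: the only joint behavior consistent with both
# symmetric commitments is mutual adherence to R.
```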
Wouldn’t the mere ability to recognize such a symmetric decision theory be strong evidence of X being true?
If I understood you correctly, I think that people do do this kind of thing, except it’s all nonverbal and implicit. E.g. Using hard to fake tests for the other person’s decision theory is a way to make the other person honestly reveal what’s going on inside them. Another component is use of strong emotions, which are sort of like a precommitment mechanism for people, because once activated, they are stable.
Yes, I understand the signal must be hard to fake. But if the concern is merely about optimizing signal quality, wouldn’t it be an even stronger mechanism to noticeably couple your payoff profile to a credible mechanism?
Just as a sketch, find some “punisher” that noticeably imposes disutility (like repurposing the signal faker’s means toward paperclip production, since that’s such a terrible outcome, apparently) on you whenever you deviate from your purported decision theory. It’s rather trivial to have a publicly-viewable database of who is coupled to the punisher (and by what decision theory), and to make it verifiable that any being with which you are interacting matches a specific database entry.
This has the effect of elevating your signal quality to that of the punisher’s. Then, it’s just a problem of finding a reliable punisher.
Why not just do that, for example?
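A minimal sketch of what such a publicly-viewable coupling database might look like (everything here, including the names Commitment, register, and verify, is hypothetical; it assumes a trustworthy punisher already exists, which is exactly the hard part the reply below points to):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Commitment:
    agent_id: str
    decision_theory: str   # the stated policy, e.g. the X above
    punisher_id: str       # the mechanism that imposes disutility on deviation

# Publicly-viewable database of who is coupled to which punisher,
# and under what decision theory.
REGISTRY: dict[str, Commitment] = {}

def register(agent_id: str, decision_theory: str, punisher_id: str) -> str:
    """Publish a commitment and return a digest anyone can check later."""
    REGISTRY[agent_id] = Commitment(agent_id, decision_theory, punisher_id)
    record = f"{agent_id}|{decision_theory}|{punisher_id}"
    return hashlib.sha256(record.encode()).hexdigest()

def verify(agent_id: str, claimed_theory: str) -> bool:
    """Check that the agent you are interacting with matches a registry entry."""
    entry = REGISTRY.get(agent_id)
    return entry is not None and entry.decision_theory == claimed_theory

# Usage (names invented): register("clippy", "adhere to X; honor trades", "punisher-9000"),
# then a counterparty calls verify("clippy", "adhere to X; honor trades") before
# treating the signal as credible. The signal is only as good as the punisher.
```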
We do. That’s one of the functions of reputation and gossip among humans, and also the purpose of having a legal system. But it doesn’t work perfectly: we have yet to find a reliable punisher, and if we did find one it would probably need to constantly monitor everyone and invade their privacy.
Yet another reason why people invented religion...
Well it looks like you just got yourself a job ;-0
That is good!
Attention Users: please provide me with your decision theory, and what means I should use to enforce your decision theory so that you can reliably claim to adhere to it.
For this job, I request 50,000 USD as compensation, and I ask that it be given to User:Kevin.
Why is this being downvoted? Even though Clippy’s proposed strategy doesn’t work at all, for reasons that Jack explained, he is asking an excellent question. For people (and AIs) without social experience and knowledge, it is very, very important to know why people can’t just talk all this stuff through explicitly. They should be asking exactly these sorts of questions so they can update.
Upvoted.
A guess: because everything in quotes in Clippy’s comment is a copy and paste of a generic comment it posted a week ago.
I don’t actually know myself, though—I upvoted Clippy’s comment because I thought it was funny. Copying an earlier comment and asking for feedback on it where it’s semi-relevant is exactly in keeping with what I imagine the Clippy character to be.
I have little problem with the way that Robin Hanson discusses status, signalling, and human interactions including mating. He doesn’t give advice to the people on OB on how to pick up chicks though. If you are not interested in the practicalities it is enough to know that women test for a variety of personality and material traits in potential mates (with different tests dependent upon the women’s personality). You don’t need to know what tests go with what personality. Knowing that the majority of women like dominant, smooth talking, humorous men is useful in predicting what men will cultivate in themselves. But I don’t need to know how to fake it.
Wouldn’t it just be easier for you to ignore the posts that contain info that you don’t personally need or want to know?
Unless you find practical advice offensive? [Boy is that going to be a problem if rationality is about winning...]
I think it’s the “faking it” part I and many other people find objectionable.
ETA: you edited this post after I replied, so I don’t think my original reply makes sense any more....
How is this different from “if you disagree with me, keep it to yourself”?
This is where you and several other people here make a critical mistake. You view various aspects of human mating behavior exclusively in terms of signaling objective traits, and then you add a moral dimension to it by trying to judge whether these objective traits supposedly being signaled are true or fake.
In reality, however, human social behavior—and especially mating behavior—is about much more complex higher-order signaling strategies, which are a product of a long and complicated evolutionary interplay of strategies for signaling, counter-signaling, fake signaling, and fake signaling detection—as well as the complex game-theoretic questions of what can ultimately be inferred from one’s signaled intentions. Nobody has disentangled this whole complicated mess into a complete and coherent theory yet, though some basic principles have been established pretty conclusively, both by academic evolutionary psychology and by people generalizing informally from practical experience. However, the key point is that in a species practicing higher-order signaling strategies, signaling ability itself becomes an adaptive trait. You’re not supposed to just signal objective traits directly; you also have to demonstrate your skill in navigating through the complex signaling games. It’s a self-reinforcing feedback cycle, where at the end of the day, your signaling skills matter in their own right, just like your other abilities for navigating through the world matter—and most things being signaled are in fact meta-signals about these traits.
Therefore, where you see “faking it” and “head games” and whatnot, in reality it’s just humans practicing their regular social behaviors. You’ll miss the point spectacularly if you analyze these behaviors in terms of simple announcements of objective traits and plain intentions and direct negotiations based on these announcements, where anything beyond that is deceitful faking. Learning how to play the signaling games better is no more deceitful than, say, practicing basic social norms of politeness instead of just honestly blurting out your opinions of other people to their faces.
I agree with you, and pjeby, who made similar points: the complexity of actual social games is higher than they appear on the surface, and much signaling is about signaling ability itself. But these insights also imply that the value of “running social interactions in software” is limited. Our general purpose cognitive machinery is unlikely to be able to reproduce the throughput and latency characteristics of a dedicated social coprocessor, and can really only handle relatively simple games, or situations where you have a lot of time to think. In other words, trying to play mating games with an NT “in software” is kind of like trying to play basketball “in software”.
Your argument is fallacious because it rests on overstretching the software/hardware analogy. The human brain contains highly reconfigurable hardware, and if some particular computations are practiced enough, the brain will eventually start synthesizing specialized circuits for them, thus dramatically boosting their speed and accuracy. Or to put it the traditional way, practice makes perfect.
Whether it’s throwing darts, programming computers, speaking a foreign language, or various social interactions, if you’re lacking any experience, your first attempts will be very clumsy, as your general cognitive circuits struggle ineptly to do the necessary computations. After enough practice, though, specialized hardware gradually takes over and things start going much more smoothly; you just do what it takes without much conscious thinking. You may never match someone with greater natural talent or who has much more accumulated practice initially, but the improvements can certainly be dramatic. (And even before that, you might be surprised how well some simple heuristics work.)
“Practice makes perfect” has a rather different emphasis from Roko’s suggestion of “running social interactions in software”, which is what I was addressing.
But to answer your point, I agree that improvements in social skills from practice can be dramatic, but probably not for everyone, just like not everyone can learn how to program computers. It would be interesting to see some empirical data on how much improvement can be expected, and what the distribution of outcomes is, so people can make more informed choices about how much effort to put into practicing social skills.
I’m also curious what the “simple heuristics” that you mention are.
Wei_Dai:
Fair enough, if you’re talking only about the initial stage where you’re running things purely “in software,” before any skill buildup.
From what I’ve observed in practice, people with normal (and especially above average) intelligence and without extraordinary problems (like e.g. a severe speech disorder) who start at a low social skill level can see significant improvements with fairly modest efforts. In this regard, the situation is much better than with technical or math skills, where you have to acquire a fairly high level of mastery to be able to put them to any productive use at all.
I don’t deny that some people with extremely bad social skills are sincerely content with their lives. However, my impression is that a very considerable percentage would be happy to change it but believe that it’s impossible, or at least far more difficult than it is. Many such people, especially the more intelligent ones, would greatly benefit from exposure to explicit analyses of human social behaviors (both mating and otherwise) that unfortunately fall under the hypocritical norms against honest and explicit discussion that I mentioned in my above comment. So they remain falsely convinced that there is something deeply mysterious, inconceivable, and illogical about what they’re lacking.
Well, which ones are the most effective for a particular person will depend on his concrete problems. But often bad social skills are to a significant degree—though never completely—due to behaviors that can be recognized and avoided using fairly simple rules. An example would be, say, someone who consistently overestimates how much people are interested in what he has to say and ends up being a bore. If he starts being more conservative in estimating his collocutors’ interest before starting his diatribes, it can be a tremendous first step.
This is admittedly a pretty bland and narrow example; unfortunately, pieces of advice that would be more generally applicable tend to be very un-PC to discuss due to the above mentioned hypocritical norms.
why?
why what? Why do I find “faking it” objectionable? Dude, you’re talking about playing head games to trick insecure women into sleeping with you!
But more to the point: the real world is full of instances where verbalized whiter-than-white morality is thrown out of the window, in some cases to such a large extent that the verbalized rules are not the actual rules, and people consider you a defective person if you actually follow verbalized rules rather than just paying lipservice to them.
I understand that this is often the case, and that this is how “pick ups” often work in the real world. The thing is, I just think that humans’ sexual rituals are ingrained so deeply in our little monkey brains that I don’t think generalizing from what works in that domain to the broader world of “refining the art of human rationality” is a really good idea. This particular domain of human behavior is so ridiculously irrational that I don’t think it serves as a good model for ordinary, everyday human irrationality. So if you’re reasoning by analogy to it, you’re basically patterning against a superstimulus.
No! Not at all. Quite the contrary: in the original post I was careful to show that a shit-test is actually an application of an advanced concept from game theory—using a credential to solve a cheap talk problem in a signaling game!
To put it more clearly, it’s not that this domain of human behavior is actually particularly irrational. In reality, it has its well-defined rules, and men who have the knowledge and ability to behave according to these rules are, at least in a libertine society such as ours, awarded with high status in the eyes of others—and lots of sex, of course, if they choose to employ their abilities in practice. In contrast, men who are particularly bad at it suffer an extreme low status penalty; they are a target of derision and scorn both privately and in the popular culture. However, what complicates the situation is that this is one of those areas where humans practice extreme hypocrisy, in that you’re expected not just to navigate the rules of the game cleverly, but also to pretend that they don’t exist, and to discuss the topic openly only with mystical reverence and unrealistic idealizations. Realistic open discussions are perceived as offensive and sacrilegious. It’s an enormous bias.
I don’t really agree but I think this describes the fear that underlies much of the hostility to discussing these controversial topics.
I think you’re partly correct, but some other biases are in fact more relevant here. However, going deeper into this would look too much like attacking other people’s motives, which would be perceived as both unproductive and hostile, so I’d rather not delve into that line of discussion.
I would also like to know more about biases you mentioned, can PM me this too? Or just post it here for everyone to read, because it’s a very big teaser on a topic which you seem to have a lot of interesting insights.
I’ve enjoyed all your posts on this topic and would love to know what you mean about other biases. If you don’t want to say it here, can you PM me?
I don’t think I understand the connection you’re trying to make.
Have you never encountered this attitude amongst religious people over atheism? The idea that atheism is an inherently dangerous idea, that merely engaging with it risks infection. That atheism might be a kind of aqua regia for morality, capable of dissolving all that is good and right in the world into some kind of nihilistic nightmare. Even (or perhaps especially) those who think atheism might be true see it as potentially dangerous, that gazing into the abyss may permanently damage the seeker’s moral core. This belief, whether implicit or explicit, seems quite common among the religious and I think explains some of the hostility born of fear that is sometimes observed in the reactions to atheism and atheists.
I’m suggesting something similar may underlie some of the reactions to discussions of the below-the-surface game theoretic realities of human social interaction. People fear that if they gaze into that abyss they risk losing or destroying things they value highly, like traditional concepts of love, loyalty or compassion. I think this fear is misguided, and personally prefer the truth be told, though the heavens fall regardless, but I can understand and to some extent sympathize with the sentiment that I think sometimes underlies it.
Yes, and no. My objection to the citation of PUA tactics is motivated by fear that it could lead down the dark path… but not fear that it might be true. Rather, it’s fear that something that might be true in one narrow domain might get applied as a general rule in broader domains where it is no longer applicable.
In PUA circles, “winning” is defined by getting laid. So if you go to a meat-market and try your PUA tactics all night long, you may end up getting rejected 50 times, but be successful once, and your brain records that as a “win”, cause you didn’t go home alone (just like audiences at psychic shows remember the “hits” and forget the “misses”). But does that really tell you that PUA theory correctly describes typical social interaction? No, it just tells you that there is a certain, small minority of people on whom PUA tactics work, but they are a non-representative sample of a non-representative sample.
So when you then take one of these PUA tactics, which isn’t even effective on the vast majority of people even in the meat-market pickup context, and start talking as if it was a universal truth applicable to all manner of human social interactions, it makes my head explode.
So where does my “fear” come in? Well, here’s the thing… I suspect that a large portion of the audience for PUA material is AS spectrum, or otherwise non-GPU possessing people, who have trouble finding sex/romance partners on their own, so they learn some PUA techniques. Fine. But these techniques often require the abandoning of “black and white morality”, as has been said earlier on this thread. Applied solely to the realm of picking up women, I don’t necessarily have a problem with that—“all’s fair in love and war” after all. But the thing is, most NTs are able to compartmentalize this kind of thing. I know many NT, “ladies man” types who are perfectly moral, ethical, upstanding people in just about every other way imaginable, but who have no problem lying to women to get in their pants. I find this a bit distasteful, but I don’t object to it, I just recognize that this is how the world works. But the thing is, many AS/non-GPU people have difficulty compartmentalizing things like this in the same way NTs do.
So I fear that if you teach these kind of dark arts to the non-compartmentalizing, non-NT crowd, they’re going to take away from it the message that abandoning “black and white morality” is the way to go about fitting in in the NT world, in areas beyond the meat-market. I fear that we may end up unintentionally creating the next generation of Bill Gates and Henry Kissingers.
You make a fair point that PUA probably doesn’t explain all of human interaction—it explains just the bare minimum needed to get that 1 in 50 hit rate, so the majority of girls could be PUA-invulnerable and we wouldn’t know it. But you also claim that a hit rate of 1 in 50 is bad and shouldn’t be considered a “win”, and I take objection to this. Do you also think that a good mathematician should be able to solve any problem in the world or give up their title? Or do you have an alternative theory that can beat PUA at PUA’s game? (Then you should head over to their forums and if you’re right, they will adopt your theory en masse.) If not, why should we suppress the best theory we’ve got at the moment?
If your goal is to pick up women, then yes, absolutely 1 in 50 is a “win”. But if your goal is to refine the art of human rationality, I just don’t see how it’s relevant.
The thing is, with any model (PUA or otherwise), there are many reasons you could lose out on the 49 in 50 (to go with your terminology for now):
They aren’t into your body type, facial structure, height, race, or some other superficial characteristic
They have preferences that are explained by your model, but you messed up or otherwise failed to fulfill them (Similarly: they have preferences that are explained by your model, but you didn’t go far enough in following the model.) This is exacerbated by the tendency of people to go for partners at the edge of what they can realistically expect to attract, which makes it really easy to fall just a tiny bit short of fulfilling their preferences. Even when you improve your attractiveness, you may set your sights on a higher tier of partners, and you will still be on the edge of being accepted. P(rejection | you go for a random person in the population you are into) is much less than P(rejection | you go after the most desirable person in that population who you still consider a realistic prospect).
They have preferences that are explained by your model, but someone else around fulfilled them better (or they weren’t single)
Taking these factors into account, we know from the start that the ceiling for success is well under 50 out of 50. Say at least one of these factors applies 50% of the time: then only about 25 of the 50 attempts were winnable at all, so one success is really a rate of 1 in 25 among the attempts the model could have won. It’s even plausible that only 10 of the 50 were winnable, which would make it 1 in 10 (the toy calculation below spells this out). If you only pursue people at the higher edge of your attractiveness bracket, the number of winnable attempts drops further, and one success looks more and more impressive.
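Here is a minimal sketch of that arithmetic, assuming (purely for illustration) that an out-of-model confounder dooms a given attempt with some probability regardless of skill:

```python
# Toy numbers from the thread: 1 raw success in 50 attempts.
# Assumption (illustrative only): a confounder outside the model (wrong type,
# stronger competition, already taken) dooms an attempt with probability
# p_doomed no matter what the approacher does.
attempts = 50
successes = 1

for p_doomed in (0.0, 0.5, 0.8):
    winnable = attempts * (1 - p_doomed)      # attempts the model could even win
    effective_rate = successes / winnable
    print(f"p_doomed={p_doomed:.1f}: 1 success in {winnable:.0f} winnable attempts"
          f" -> effective rate {effective_rate:.2%}")

# p_doomed=0.5 turns the raw "1 in 50" into roughly 1 in 25;
# p_doomed=0.8 turns it into 1 in 10.
```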
When you expect to meet rejection >50% of the time via your model, using rejection to test your model is difficult. It’s hard to test such theories in isolation. At what point do you abandon or modify your model, and at what point do you protect it with an ad hoc hypothesis? A protective belt of ad hoc hypotheses isn’t always bad. Sometimes you have actual evidence inducing belief in the presence or absence of the type of factors I mention, but the data for assessing those factors is also very messy.
Stated in a more general form, the problem we are trying to solve is: how do I select between models of human interactions with only my biased anecdotal experience, the biased anecdotal experience of others (who I select in a biased non-representative fashion), and perhaps theories (e.g. evolutionary psychology) with unclear applicability or research studies performed in non-naturalistic settings with unclear generalizability? Whew, what a mouthful!
This is not a trivial problem, and the answers matter. It is exactly the kind of problem where we should be refining the art of human rationality. And an increase in success on this problem (e.g. 1 in 500 to 1 in 50, to continue the trend of pulling numbers out of thin air to illustrate a point) suggests that we have learned something about rationality.
I actually agree with this completely, and I think your analysis is rather insightful. Your conclusion seems to be that PUA topics are deserving of further study and analysis, and I have no problem with that… I only have a problem with assuming PUA-isms to be true, and citing them as “everybody knows that...” examples when illustrating completely unrelated points.
This is well put. The issue you raise is why I tried to be a little more explicit about the priors that I was using here. Obviously it’s a long way from giving the explicit probabilities that would be necessary to automate the Bayesian updating, but at least we can make a start at identifying where our priors differ.
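For what it’s worth, a minimal sketch of what “automating the updating” could look like, with entirely made-up numbers: two candidate models of how often the approach should work, a prior over them, and a single noisy observation of one success in fifty approaches.

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two hypothetical models of how often the approach "should" work.
models = {
    "model A (~1 in 50)": 1 / 50,
    "model B (~1 in 10)": 1 / 10,
}
prior = {name: 0.5 for name in models}        # this is where differing priors go

k, n = 1, 50                                  # the anecdotal observation
likelihood = {m: binom_pmf(k, n, p) for m, p in models.items()}
evidence = sum(prior[m] * likelihood[m] for m in models)
posterior = {m: prior[m] * likelihood[m] / evidence for m in models}

for m, p in posterior.items():
    print(f"{m}: posterior {p:.2f}")

# The update only means something if the failures were honestly counted,
# which is the selection problem discussed above.
```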
Sure… maybe for when you’re starting out as a rank beginner, doing “cold approach” and “night game”. But my success rate at “social circle game” was an order of magnitude better than that before I knew any PUA stuff in the first place… and in retrospect I can easily see how that success was based on me accidentally doing a lot of things that are explicitly taught to PUAs for that type of game.
Hell, even during the brief period where I went to nightclubs and danced with girls, there are times that I realize in retrospect I was getting major IOIs and would’ve gotten laid if I’d simply had even a single ounce of clue or game in my entire body… and at a better success rate than 1 in 50.
So, I’m not sure where you pulled the 1 in 50 number from, but in my experience it’s not even remotely credible as a “success” for a PUA, if you mean that the PUA has to ask 50 to get 1 yes.
However, if you mean that a PUA can take 50 women who are attracted to him, and then chooses from them only the one or two that he finds most desirable, then I would agree that that’s indeed a success from his POV. ;-)
(And I would also guess that most PUAs would agree that this is much closer to their idea of “winning”, and that even a PUA of modest or average ability should be able to do much better than your original estimate, even for nightclub game.)
AAARGH! You’re still totally responding to this as if we were having this discussion on a PUA forum, rather than on LW.
The 1 in 50 number was totally pulled out of my ass, a hypothetical intended to illustrate the idea that if a given technique works only 1 in X times, but that’s enough to result in getting laid, your brain is likely to count that as a “win”, and ignore the (X − 1) times it failed, leading you to incorrectly assume that the technique illustrates some universally applicable principle of human behavior, where none in fact exists.
That seems to me to be a less appropriate way to do things on LW, personally.
Certainly, arguing that you pulled a number out of your ass in order to refute empirical information providing an inside view of a phenomenon is really inappropriate here.
IOW, your hypothesis is based on a total and utter incomprehension of what PUAs do or value, and is therefore empirically without merit. Actual PUAs are not only aware of the concept you are describing, but they most emphatically do not consider it success, and one guru even calls it “fool’s mate” in order to ridicule those who practice it. (In particular, Mystery ridicules it as relying on chance instead of skill.)
In short, you are simply wrong, and you’re probably getting downvoted (not by me, mind you) not because of disagreement, but because you’re failing to update on the evidence.
It’s very clear from the original context that “1 in 50” was not being proposed as evidence of anything, but simply as colloquial shorthand for “1 in some number X”. And I’m not sure what empirical evidence you’re referring to—the plural of anecdote yada yada yada.
My knowledge of what PUA entails is based almost entirely on various examples given by PUAs here on LW (that and a few clips from Mystery’s show being ridiculed on The Soup, which you might want to consider as a data point on what the general public thinks of PUA). Maybe if LW’s resident PUAs were to cite examples more like those you gave in your last reply to me, I might have a higher opinion of PUA wisdom.
Look, I totally understand why you and the other PUA adherents are so emotionally attached to the idea: if I were single, and somebody gave me a magic feather that enabled me to get laid a lot, I’m sure I would think it was awesome, and probably wouldn’t stop talking about it, well past the point that my friends and acquaintances were sick of hearing about it. It might be worth remembering, though, that the original topic of this article was Asperger/Autistic spectrum issues, and that one of the characteristic traits of the spectrum is what’s been referred to as “little professor syndrome,” where aspies tend to go on and on about their narrow topics of interest, unable to pick up social cues, like eye rolling, indicating lack of interest in the subject.
I don’t recall whether you responded positively to the “do you have high functioning asperger’s” question, and it’s not my intention to pejoratively imply that you, or anyone else here, does. I just think it might be worth looking at this through that lens.
If you’re implying that I’m single or attempting to get laid a lot, you’ve either missed a lot of my comments in this discussion, or you didn’t read them very carefully.
(Hint: I’m married, and have never knowingly used a pickup technique for anything but social or business purposes… and I’ve made no secret of either point in this discussion!)
In other words, the numbers aren’t the only thing you just pulled out of your ass. ;-)
I would also point out that it is not particularly rational for you to first rant that nobody is responding to your points, and then, when people reply to you in an attempt to respond, for you to criticize them for “going on and on”.
(Well, it’s not rational unless your goal is to troll me, I suppose. But in that case, congratulations… you got a response.)
Meanwhile, you’ve also just managed to demonstrate actually doing the thing you’re arguing PUAs theoretically do (but actually don’t, if they’re well-trained).
That is, you made a sweeping judgment that doesn’t really apply to the claimed target group.
And, you didn’t make any allowance for the possibility that the specific person you were interacting with might be different from your generalized model of “single with a magic feather”. (Heck, even PUA’s know they have to calibrate to the individuals they encounter—i.e. pay attention.)
So… pot, meet kettle. ;-)
Nope, I neither said, nor implied anything of the kind. I was simply speculating on why it might be that so many people on LW seem to be so attached to the PUA ideas, despite their not really seeming to have much going for them in the way of Bayesian evidence. I wasn’t referring to you (or anyone) in particular. The format of comment threads requires that comments be addressed to a specific person, and so your comment was the one I happened to click ‘reply’ on, but I was referring in general to the PUA crowd.
I complained about people’s responses not addressing the substance of my argument, not the lack of responses.
Obviously I wasn’t talking here about your responses to my comments, but about the general inclination of certain PUA-boosters to continually bring up PUA themes in the middle of discussing unrelated issues.
No, I’m just saying that a 1 in 50 hit rate is more likely to be explained by a peculiarity of the particular people involved in the interaction, rather than a universal truth of all human social interaction.
Yep, I certainly got that point. (See the edited comment.) But today the real choice is between PUA that yields little but positive results in the field, and alternative theories that yield no results.
OK, fair enough.
But I’m not arguing that PUA is bad. I’m arguing that the lessons learned from PUA aren’t generally applicable outside that arena, and are not good examples to use when illustrating a point on an unrelated human-rationality topic.
If I apply the same methods for the same amount of time to many problems, and I solve only 1 in 50 of them, then I should seriously consider the possibility that there was something special about that 1 in 50 that made them especially accessible to my methods. I should not conclude that the 1 in 50 were typical of all the problems that I considered.
I expect that a man can maximize his number of sexual partners by focusing his attentions on women who will be especially receptive to his advances. But it would be a mistake to infer that such women are typical.
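A toy simulation of that selection effect, with an entirely made-up receptiveness distribution, shows how the successes end up unrepresentative of the people approached:

```python
import random
random.seed(0)

# Receptiveness varies across the population (mostly low, with a small
# receptive tail); a fixed method succeeds with probability equal to
# the target's receptiveness.
population = [random.betavariate(1, 9) for _ in range(100_000)]
successes = [r for r in population if random.random() < r]

mean_pop = sum(population) / len(population)
mean_succ = sum(successes) / len(successes)
print(f"mean receptiveness of everyone approached: {mean_pop:.2f}")
print(f"mean receptiveness among the successes:    {mean_succ:.2f}")

# The people the method "worked on" are systematically more receptive than
# the typical person it was tried on, so inferring what typical people are
# like from the successes alone would be a mistake.
```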
That’s exactly what cousin_it has described himself doing, at least in the case of women who ask him to buy them drinks. His hug test (for lack of a better word) very quickly identifies which women are receptive to being physically companionable with him.
In PUA terminology, he’s taking her opener and screening it. Other relevant PUA terminology in this space:
AI (Approach Invitation) - reading signals that indicate a woman wants you to approach
Forced IOI (Indicator Of Interest) opener—engaging in a behavior that forces a woman’s body language to immediately reveal her interest or lack thereof, such as by gazing directly into her eyes while approaching, in order to see whether she looks down, away, or back at you, and whether she smiles.
Some men swear by these things as the essence of their game; others, however, want to be able to meet women who will neither AI nor accept a forced IOI, such as women who get approached by dozens of men a night and therefore have their “shields up” against being approached.
Anyway, your hypothesis isn’t a better PUA than PUA; but practical methods for actually applying that hypothesis are part of the overall body of knowledge that is PUA.
But my question is, does PUA theorizing help him get an accurate model of what women in general are actually like? More generally, does it give him tools to get a better understanding of what reality is like? Or is it just giving him tools that help him to focus his attentions on a certain small subset of women?
If I go into a library, I can easily tell the English books from the books in Chinese, so I can quickly narrow my attention to the books that I can get something out of. But that doesn’t mean that I know anything about what’s going on inside the Chinese books. And, if the vast majority of the books in the library are Chinese, then I actually know very little about the “typical” book in the library.
I’m having trouble parsing this sentence. What’s the “hypothesis” here?
I thought about it some more and honestly can’t tell if you’re right or not. On one hand, I never do cold approaches—there’s always some eye contact and smiling beforehand—so the women I interact with are already very self-selected. On the other hand, I know from experience that a girl who rejected me in one setting (e.g. a party) may often turn out to be receptive in another setting (e.g. a walk), so it’s not like I’m facing some immutable attribute of this girl. So every interaction with a woman has many variables beyond my control that could make it or break it, but my gut feeling is that most of those variables are environmental (current mood, presence of other people, etc.) rather than inborn.
Yes, I agree. In this particular case, though, we have no idea whether your “if” clause is satisfied, or what the proportion of English to Chinese books really is.
To make an analogy with my previous post where I explain that the ceiling on success rate is actually rather low, most of the books you read either burst into flame when you read them, or their text disappears or turns into gibberish. Sometimes, even forensic inspection can’t tell you what language the book was originally in.
All you can know is that learning English helps you read some of the books in the library. Absent the knowledge of what was in the text that was destroyed before you could read it, you have no idea of the typicality or atypicality of the English books you are capable of reading. Yet if your forensic inspection of the destroyed books reveals more English characters than Chinese characters, or you have some additional theoretical or empirical knowledge on the distribution of languages in the books, then you may have to upgrade your estimate of the proportion of English books. (This assumes that the hypotheses of books being in English or Chinese are both locatable.)
Even if your estimate is wrong, it can still be very valuable to know how to read the typical English book in the library, especially if the alternative is not being able to read any.
You still know very little, of course, about the population of books (or people) you are trying to model. Yet in the case of people, you are often faced with competing hypothesizes about how to behave, and even a small preference for one hypothesis over the other can have great practical significance. That’s why stereotypically we see women picking over their interactions with men with their female friends, and PUAs doing exactly the same thing on internet forums. They have tough decisions to make under uncertainty.
Does a preference for one theory over another, and seeming practical results mean that the preferred theory is “true?” I think we both agree: no. That’s naive realism. Yet when you are engaged in discussion on a practical subject, it’s easy to slip from language about what works to language about what is true, and adopt a pragmatic notion of truth in that context.
As I’ve mentioned before, PUAs do commit naive realism a lot. While there are ceilings to what mass-anecdotal experience of PUAs can show us about epistemic rationality, there is a lot it can show us about instrumental rationality. How to be instrumentally successful when the conclusions of epistemic rationality are up in the air is an interesting subject.
I’m not a PUArtist, I’m a PUInstrumentalist about PU models. Yet when I see a theory (or particularly hypothesis in a theory) working so spectacularly well, and that data which deviates from it generally seems to have an explanation consistent with the theory, and the theory lets me predict novel facts, and it is consistent with psychological research and theories on the topic… then it sometimes makes me wonder if my instrumentalist attitude of suspended judgment on the truth of that theory is a little airy-fairy.
I doubt that PUA models are literally highly probable in totality, yet I hold that particular hypotheses in those models are reasonable even only fueled by anecdotal evidence, and that with certain minor transformations, the models themselves could be turned into something that has a chance of being literally highly probable.
Well put. This is a good delineation of the issues.
The portion of your comment that I quoted, i.e.:
I was saying that PUAs don’t entirely agree with your hypothesis (and incidentally, don’t necessarily value the “maximize his number of sexual partners” part)… but they do have tools for taking advantage of attuning to women who will be especially receptive.
Both. As I mentioned earlier, PUA models of social behavior have been successfully applied in and out of pubs, with people who the PUA is not even trying to sleep with, both male and female. Anecdotally, PUAs who focus on learning social interaction skills find that those skills are just as useful in other contexts. (For example, Neil Strauss noted in The Game that learning PUA social skills actually helped his celebrity-interviewing technique, as it gave him tools for pepping up conversations that were starting to go stale.)
Most of the criticism here about PUA has been claiming that it has poor applicability to women, but this is the result of a severe misapprehension about both the goals and methods of PUA-developed social models. PUA social signaling models are actually applicable to humans in general, even though the means of effecting the signals will vary.
My impression is that the typical LWer has little familiarity with these models, and has only heard about a few bits of (highly context-sensitive) specific advice or techniques. Are you familiar with microloop theory? Frames? Pinging? There’s a metric ton of systematization attempts by PUA theorists, some of which are very insightful. Also, a lot of practical advice for dealing with a wide variety of social situations.
I would predict that if you took an experienced social-game theorist PUA trainer and threw him into a random physical social environment with the goal of making as many friends as possible, vs. an untrained male of similar geekiness (I’m assuming the social game theorist will be a geek, present or former) and similar unfamiliarity with the group or its rules/topics/etc., the PUA would kick the untrained person’s ass from here to Sunday.
What’s more, I would bet that you could repeat this experiment over and over, with different PUAs, and get the same results. And if the PUA in question is a good trainer, I’d bet they’d be able to take a modest-sized group of similarly-geeky students and quickly train at least one student to beat an untrained person by a solid margin, and to get most of the students to improve on their previous, untrained results.
That’s how confident I am that PUA social interaction models are sufficiently correct to be broadly applicable to “typical” human beings—not just women.
(Btw, I’m aware that I’ve left a huge number of loopholes in my stated prediction that an unscrupulous experimenter could use to skew the results against the PUA, but I don’t really want to take the time to close them all right now. Suffice to say that it would need to be a fair contest, apart from the PUA’s specialized training, and I’m only betting on PUA trainers being able to totally kick an untrained person’s ass; I would expect experienced PUAs to do say, maybe 2-3 times as well as the untrained on average. Trainers and “in-field” coaches have to have a better grasp of social dynamics than the people they’re training. Also, there’s a big gap between theory and execution—if you can’t get your body and voice to do what the theory tells you to, it doesn’t matter how good the theory is!)
Ok, I swore to myself I wasn’t going to comment on this thread anymore, but now you’ve made me think of something that hadn’t occurred to me before:
Assuming for the moment that it’s true that a skilled PUA trainer would beat an untrained person at this test, how much of that effect do you think is attributable to simply being more confident vs. actually having a more accurate model of human social behavior? I.e. you could, in principle, test for what I’m talking about by replacing the untrained geek with a geek trained with a different, completely fabricated set of PUA rules and theories, which he’d been led to believe were the real PUA methods… tell him these methods have been extensively experimentally tested, maybe even fake some tests with some actors to convince him that his bogus PUA skills actually work, just to give him the confidence of thinking he knows the secrets of the PUA masters. Then test him against someone given an equal amount of training on the “real” PUA techniques.
Oh, and for bonus points, for the fabricated set of techniques, you could use stuff taught by Scientology, just to make sure there’s consensus that it’s bogus ;)
How do you think that test would turn out? (I’m taking no position on the issue—I honestly don’t know)
It’s hard to create and maintain confidence that isn’t based on actual results. I predict that the confident geeky guy would go barreling into interactions and just as easily alienate people as engage them. Without any competence to back up the confidence, the latter wouldn’t last very long, unless the guy was totally oblivious to negative signals from others.
It is a good question, whether a PUA could be matched by a control guy of the same level of confidence. But if we are talking any real sort of confidence, the main way it develops is through success, which requires manifesting attractive behaviors in the first place.
Exactly. But in the version of the experiment I proposed, both groups are composed of (initially) inexperienced geeks, as opposed to pjeby’s protocol, which involved an untrained newbie and a PUA trainer (who, despite having trained on, IMHO, potentially invalid methods, has likely acquired a great deal of real confidence via experience).
Which is why, now that I’ve had some time to think about it, I now predict that if this experiment were performed, both trainee groups would “go barreling into interactions and just as easily alienate people as engage them”. For it to mean much, you would have to iterate the experiment over a period of weeks or months and see which group improves faster. I remain agnostic on what the outcome of that would be.
I was thinking along the same lines, where both groups involve newbies. I predict that the confidence will collapse in whichever groups lack some actual practical knowledge that can achieve success to keep the confidence boosted.
Do you have any prediction as to which group would come out ahead after a sufficient number of iterations?
And as an aside, wouldn’t it be awesome if LW had a prediction market built into it where we could resolve these things?
PUAs themselves will admit to confidence being important… in meeting people, and in its being a foundation for everything they do. But it’s not a magic bullet.
I’ve seen an excerpt of a talk that one gave who explained that when he started, he actually attained some success at opening (i.e. initiating contact) through delusional self-confidence… however, this wasn’t enough to improve his success at “closing” (i.e., getting numbers, kisses, dates, etc.), because he still made too many mistakes at understanding what he was supposed to do to “make a move”, or how he was supposed to respond to certain challenges, etc.
Remember, if the signal is too easy to fake, it’s not very useful as a signal.
I think it would be a better test to reverse the PUA recommendations, i.e., teach them things that the PUAs predict would flop. If they succeed anyway, it’s a slam dunk for the confidence hypothesis. But I doubt they would.
Actually, one thing I saw on Mystery’s show suggests to me that it might be sufficient to train someone poorly—one trainee on the show couldn’t get it through his head as to the proper use of negging, and went around insulting women with what, as far as I could tell, was total confidence. And of course, it didn’t work at all, while the other guys, who both understood the idea and applied it with careful calibration, achieved much greater success.
In other words, I think confidence alone is insufficient to replace social calibration—the PUA term for having awareness (or reasonably accurate internal predictions) of what other people are thinking or feeling about you, each other, and the overall social situation. The principal value of PUA social dynamic theories to PUA practice is to train the socially ill-calibrated to notice the cues that more socially adept people notice instinctively (or at least intuitively).
In other words, having a theory of “status” or “value” helps you know what to pay attention to, so you can tune in on the music of an encounter rather than being misled by the words being sung.
(Of course, I’m sure we all know people who come along and wreck the music by confidently singing a new and entirely inharmonious tune. This sort of behavior should not be confused with being socially successful.)
But it certainly is fun!
I don’t think that would be a fair test. Techniques that PUAs think would flop, I would probably agree with them in predicting they’d flop—it’s easier to know that something doesn’t work than that it does work. So they would actually end up at a disadvantage relative to a person with natural confidence and no PUA training.
I would want my control group to be given techniques that are entirely harmless and neutral, or as close to it as is reasonably possible.
While that would be an interesting test, being entirely harmless and neutral is how to flop, PUAs predict. People don’t want to date people they feel neutral towards; they want to date people they are excited about. Since women are more selective, this principle applies even more to women, and makes for some interesting problem-solving.
Since there are a bunch of different taxa in female preferences (yes, my model of the preferences of the female population accounts for significant differences in female preferences in certain dimensions), and these taxa have strong, differing, mutually-exclusive preferences (e.g. the preference to definitely kiss on the first date, vs. the preference to definitely not kiss on the first date), and which preference taxon a woman belongs to is not always reasonably predictable in advance, certain behaviors will have a polarizing response. There is only a certain set of behaviors that is universally attractive to women (e.g. confidence), and outside that set, behaviors that attract one woman might annoy or repulse another (cousin_it’s arm around the waist example falls into this category).
Unfortunately, you can’t always explicitly ask what preference taxon a woman is in; your ability to guess based on either strong or weak cues may be one of her filters. And asking too much about someone else’s preferences can signal that you consider her higher status, which many women may find unattractive. It might also signal that you think something in particular is going to happen, when she hasn’t decided if she wants it to happen yet. Even if a woman could have an explicit discussion of her preferences and not consider you obsequious for doing so, you can’t really know this in advance. And you can’t ask her if she is part of the taxon of women who can discuss their preferences explicitly without docking status points from men for raising the subject; nor can you ask her if she is part of the taxon of women who can be asked which taxon of women she is in: the problem is recursive. So the only rational solution is to guess, unless you are comfortable screening out women who can’t have explicit discussions of their preferences early in the interaction. (Though you can help your guessing by starting oblique discussions of preferences, such as talking about relationship history and listening carefully.)
You can’t just avoid polarizing behaviors that women will have either strong positive or negative responses to, because then you risk relegating yourself to the boring guy heap. You are stuck doing an expected value calculation on these polarizing behaviors taking into account the uncertainty of your model of her. If you decide to make a certain move, you hope your calculation was right and you don’t weird her out. And if you decide not to make that move, you hope your calculation was right and you don’t get docked points for not making the move and failing to make a strong enough impression. A lot of guessing is going on here; if your hardware doesn’t steer you down the right path, you need to get better at guessing, which is a job for rationality.
Shorter version of the above: Men need to make strong positive impressions on women to be reliably successful. Many of the behaviors that make strong positive impressions on some types of women make strong negative impressions on other types of women. The result is that men need to engage in high-risk, high-reward behaviors to make strong positive impressions on many types of women, though the risk is substantially mitigable with experience and knowledge. This leads to some interesting ethical dilemmas. It also leads to some interesting practical consequences, where sometimes it’s better to increase the variance in your attractiveness even at the cost of your average attractiveness to the female population. But now I’m just rambling…
I think you’ve highlighted an important difference between the inside view and outside view of PUA.
Outsiders think that for PUA to be valid, it has to have techniques that work on “most women”. However, for insiders, it simply has to have a set of techniques that work on women they are personally interested in.
Outsiders, though, tend to think that the set of “women PUAs are personally interested in” is much more homogeneous than it really is. The women that, say, Decker of AMP goes for are orders of magnitude more introspective than those that, say, Mystery goes for. David D seems to like ambitious professional women. Johnny Soporno seems to dig women with depth of emotion who’ll all be a big happy family in his harem. Some gurus seem to like women they can boss around. Juggler seems to value good conversation. (And notice that none of these preferences are “who I can get to sleep with me tonight”. Even Mystery’s preference for models and strippers is much more about status than it is about sex.)
Granted—these are all superficial personal impressions of mine, based on random bits of information, but it’s helpful to point out that men’s preferences vary just as much as women’s do. PUA is not a single unified field aimed at claiming a uniform set of women for a uniform set of men. It is a set of interlinked and related fields of what works for specific groups of women in specific situations…
Conditioned on the preferences of the men who are interested in them.
That is, successful PUAs intentionally choose (or invent) behaviors and sets of techniques that will screen out women that they are not interested in. And they don’t engage in a search for what technique will work on the woman they’re with—they do what the kind of woman they want would like.
Now, there are certainly schools of thought who think the goal is to figure out whatever woman is in front of them, but my observation of what the people in PUA who seem happy with their life and work say, is that they always effectively talk about being fully themselves, and how this automatically causes one group to gravitate towards them, and the rest to gravitate away.
This has also been my personal experience when I was single and doing “social game” (which as I said, I didn’t know was a thing until much later).
What I’ve also noticed is that many gurus who used to teach mechanical, manipulative game methods have later slid over to this line of thought—specifically, many have said that thinking in terms of “what do I need to do to get this woman to like me” is actually hurting your inner game, because it sets the frame that you are the pursuer and she is the selector, and that this is going to cause her to test you more than if you just were totally open about who you are and what you want in the first place, so there’s no neediness or apprehension for her to probe.
Some people talk about feigning disinterest, but I think that what really works (from my limited experience) is genuine disinterest in people who aren’t what you’re looking for. In some schools, this is talked about as a tactic (i.e. “qualifying” and “disqualifying”), but I think the more mature schools and gurus speak about it as a way of thinking, or a lifestyle.
Anyway, tl;dr version: the success of PUA as a field isn’t predicated on one set of techniques “working” on all taxa of women; it’s predicated on each individual PUA being able to select behaviors that work well with the taxa he wants them to “work” on… and the taxa for which techniques exist are considerably wider than field-outsiders are aware of… leading to difficult communication with insiders, who implicitly understand this variability and don’t get why the outsiders are being so narrow-minded.
No, you misunderstood what I was saying. I meant that for the purposes of maintaining a valid control group, they be given instructions which neither help nor harm their chances, i.e. have a completely neutral effect on their innate “game” or lack thereof.
I appreciate the idea of this test; my point is that it might be hard to set up a group with instructions that have a completely neutral effect on their results. Maybe with a pilot study?
I also choose to use your post as a jumping off point for some rambling of my own.
What are we testing for? Whether there’s a placebo effect in believing you have good instructions?
If yes, it seems obvious there is one—especially in a domain where confidence is highly correlated with positive results.
Hmmmmmm.… is anyone here on LW experienced at writing grant proposals? ;)
The problem is that then you’re not cleanly comparing methods any more. Remember: much of PUA is the result of modeling the beliefs and behaviors of “naturally confident” and socially-skillful people. The PUA claim is that these beliefs and behaviors can be taught and learned, not that they have invented something which is different from what people are already capable of doing.
So, if you take “a person with natural confidence”, how do you know they won’t be doing exactly what the PUA will?
By the way, please remember that the test I proposed was befriending and social climbing, not seducing women. The PUA trainer’s relevant experience is strategic manipulation of social groups—something that an individual PUA need not necessarily master in order to get laid. It is the field of strategic social manipulation that has the most relevance to applications outside dating and mating, anyway.
I’m not sure I understand why you think so.
They might—that’s what I want to test. I’m proposing to take two randomly selected groups, with randomly varying amounts of natural confidence and “game”, and train one group with PUA techniques, the other with equally confidence-building yet counter-theoretical non-PUA techniques (which have been validated, perhaps via a pilot study, to have no effect one way or the other), and see which group improves faster. The test could be either picking up women, or any other non-pickup social game that PUA claims to help with. If it’s true that PUA is an accurate model of how people with natural game operate, then people in each group on the high end of the natural game spectrum should be relatively unchanged, but the geekier subjects should improve more in the PUA group than the control group.
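To make the predicted signature of this protocol concrete, here is a toy simulation in Python; every number (group size, effect sizes, the assumption that training mainly helps low-baseline subjects) is invented purely for illustration, not a claim about real data:

```python
import random
import statistics

random.seed(1)

def simulate_subject(treated: bool):
    """Return (baseline_skill, improvement) for one hypothetical subject.

    Toy assumption, not an empirical claim: if the PUA model is accurate,
    training mostly helps low-baseline ("geeky") subjects, while subjects
    with lots of natural game are near ceiling either way.
    """
    baseline = random.uniform(0.0, 1.0)                   # 0 = very geeky, 1 = natural
    noise = random.gauss(0.0, 0.05)
    effect = 0.3 * (1.0 - baseline) if treated else 0.0   # effect shrinks with baseline
    return baseline, effect + noise

def group(treated: bool, n: int = 500):
    return [simulate_subject(treated) for _ in range(n)]

pua, control = group(True), group(False)

def mean_improvement(subjects, lo=0.0, hi=1.0):
    return statistics.mean(imp for base, imp in subjects if lo <= base < hi)

print("overall:        ", mean_improvement(pua), mean_improvement(control))
print("geekiest third: ", mean_improvement(pua, 0.0, 0.33), mean_improvement(control, 0.0, 0.33))
print("natural third:  ", mean_improvement(pua, 0.66, 1.0), mean_improvement(control, 0.66, 1.0))
```

Under these made-up assumptions the between-group gap is largest in the geekiest tercile and shrinks toward zero among the naturals, which is the pattern the protocol would be looking for.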
Now of course this is all just hypothetical, since we don’t have the resources to actually run such a rigorous study. So my motivation in trying to negotiate a test protocol like this is really just that here on LW, we should all be in agreement that beliefs require evidence, and we should be able to agree on what that evidence should look like. Until we reach such an agreement, we’re not really having a rational debate.
So, do you think the above protocol would generate valid, update-worthy evidence? If not, why not?
Because if the two groups are doing the same things, what is it that you’re testing?
I don’t understand this question. The two experimental groups get different training, and the ones in each group who actually follow the training are doing different things.
Actually, now that I think about it, I don’t understand why you think the two groups would be doing the same thing, even given your assumption that PUA is an accurate model. If PUA is accurate, then the people in the PUA trained group would end up behaving more like naturally socially successful people, and the control group would go on being geeky (or average, or whatever you select the groups to initially be), and hence the two groups’ results would diverge.
Maybe you need to re-read the experimental protocol I suggested.
I’m confused—I thought you wanted to match the PUAs against naturally confident people, which AFAICT wouldn’t be comparing anything.
What I was concerned about is the possibility that the group that was given neutral instruction might disregard the instruction and simply fall back to whatever they already do, which might be something successful.
(Thinking about it a bit more, I have a sneaking suspicion that giving people almost any instruction (whether good, bad, or neutral) may induce a temporary increase in self-consciousness, and a corresponding decrease in performance. But that’s another study altogether!)
No—initially I said to use geeky, socially unsuccessful subjects, but I later realized that a random sample, including all kinds of people, would work just as well.
Which wouldn’t be a problem, since they’re supposed to be the control group. Unless of course they lost their confidence boost in the process as well. But as long as they are at least initially convinced their training will be effective (see below), then it wouldn’t invalidate the experiment, since the same effect would apply to the PUA group as well, if PUA turns out to be ineffective.
Yes, that is a possibility I’d considered, which is why I said you may need to go so far as to fake some tests, undergrad psych experiment style, using actors, to actually convince everyone their newly acquired skills are working.
THAT’S what we’re testing: whether the two groups are doing the same thing! Your assumption that they are is based on the belief that PUA trains people to do the same things that socially successful people do naturally, which is based on the assumption that PUA theory is an accurate model of human social interactions.… which is the hypothesis that we’re trying to test with this experiment.
“PUA theory” is not a single thing. The PUA field contains numerous models of human social interactions, with varying scopes of applicability. For example, high-level theories would include Mystery’s M3 model of the phases of human courtship, and Mehow’s “microloop theory” of value/compliance transactions.
And then, there are straightforward minor models like, “people will be less defensive about engaging with you if they don’t think they’ll be stuck with you”—a rather uncontroversial principle that leads “indirect game” PUAs to “body rock” and give FTCs (“false time constraint”—creating the impression that you will need to leave soon) when approaching groups of people.
This particular idea is applicable to more situations than just that, of course—a couple decades ago, when I was in a software company’s booth at some trade shows, we strategically arranged both our booth furniture and our positions within the booth to convey the impression that a person walking in would have equal ease in walking back out, without being pounced on by a lurking salesperson and backed into a corner. And Joel Spolsky (of Joel On Software fame) has pointed out that people don’t like to put their data into places where they’re afraid they won’t be able to get it back out.
Anyway… “PUA Theory” is way too broad, which is why I proposed narrowing the proposed area of testing to “rapidly manipulating social groups to form alliances and accomplish objectively observable goals”. Still pretty broad, and limited to testing the social models of indirect-game schools, but easiest to accomplish in a relatively ethical manner.
OTOH, if you wanted to test certain “inner game” theories (like the “AMP holarchy”), you could probably create a much simpler experiment, having guys just go up and introduce themselves to a wide variety of women, and then have the women complete questionnaires about the men they met, rating them on various perceived qualities such as trustworthiness, masculinity, overall attractiveness, how much of a connection they felt, etc..
(The AMP model effectively claims that they can substantially improve a man’s ratings on qualities like these. And since they do this by using actual women to give the ratings, this seems at least somewhat plausible. The main question being asked by such a test would be, how universal are those ratings? Which actually would be an interesting question in its own right...)
kodos96:
In PUA circles, this question has been addressed very extensively, both theoretically and practically. There is in fact a whole subfield of study there, called “inner game,” which deals with the issues of confidence and self-image. The answer is that yes, unsurprisingly, confidence matters a great deal, but its relative importance in individual PUA’s techniques varies, and it doesn’t explain everything in their success, not even by a long shot.
Generally, regardless of your overall opinion of the people in the PUA scene, and for all their flaws, you definitely underestimate the breadth, intensity, and thoroughness of the debates that take place there. There are of course lots of snake oil salesmen around, but when it comes to the informal, non-commercial discourse in the community at all levels, these folks really are serious about weeding out bullshit and distilling stuff that works.
To be fair, I can’t blame people first encountering this subject for having an initial negative reaction. They don’t know the breadth of what goes on, and that it would take a college course’s worth of knowledge to even begin to have an idea of what it’s really about. What interests me is that they update when exposed to new evidence.
The problem is not only that the topic runs afoul of moralistic biases, but also that it triggers failure in high-quality anti-bullshit heuristics commonly used by math/tech/science-savvy people. When you first hear about it, it’s exactly the kind of thing that will set off a well-calibrated bullshit detector. It promises impossible-seeming results that sound tailored to appeal to naive wishful thinking, and stories about its success sound like they just must be explicable by selection effects, self-delusions, false boasting, etc. So I definitely don’t blame people for excessive skepticism either.
A personal anecdote: I remember when I first came across ASF long ago, when I was around 20. I quickly dismissed it as bullshit, and it didn’t catch my attention again until several years later. In retrospect, this miscalculation should probably be one of my major regrets in life, and not just for failures with women that could have been prevented; it would have likely opened my perspectives on many other issues too, as it actually happened the next time around.
Very true. To me (and my bullshit detector), it sounds strikingly similar to any number of other self-help programs offered through the ages. In fact, it sounds to me a lot like Scientology—or at least the elevator pitch version that they give to lower level people before they start introducing them to the really strange stuff. And the endorsement you give it in your second paragraph sounds a lot like the way adherents to these kinds of absolutely-for-legal-reasons-definitely-not-a-cults will breathlessly talk about them to outsiders.
Now of course I realize that superficial similarity to snake oil doesn’t actually count as valid evidence. But I do think it’s fair to put PUA into the same reference class with them, and base my priors on that. Would you not agree?
kodos96:
If you see PUA-like techniques being marketed without any additional knowledge about the matter, then yes, your snake oil/bullshit detector should hit the red end of the scale, and stay that way until some very strong evidence is presented otherwise. Thing is, when it comes to a certain subset of such techniques that pjeby, HughRistik, me, and various others have been discussing, there is actually such strong evidence. You just have to delve into the matter without any fatally blinding biases and see it.
That’s pretty much the point I’ve been hammering on. The problem is not that your prior is low, which it should be. The problem is that an accurate estimate of posteriors is obscured by very severe biases that push them downward.
What evidence? PUAs may use a lot of trial and error in developing their techniques, but do their tests count as valid experimental evidence, or just anecdotes? Where are their control groups? What is their null hypothesis? Was subject selection randomized? Were the data gathered and analyzed by independent parties?
Would you accept this kind of evidence if we were talking about physics? Would you accept this kind of evidence if we were evaluating someone who claimed to have psychic powers?
One of the reasons this topic is of interest to rationalists is that it is an example of an area where rational evidence is available but scientific evidence is in short supply. It is not in general rational to postpone judgment until scientific evidence is available. Learning how to make maximal use of rational evidence without succumbing to the pitfalls of cognitive biases is a topic of much interest to many LWers.
Yes, that’s true. I’ve been phrasing my more recent comments in terms of scientific evidence, because several people I’ve been butting heads with have made assertions about PUA that seemed to imply it had a scientific-level base of supporting evidence.
I’m still not sure though what the rational evidence is that I’m supposed to be updating on. Numerous other self improvement programs make similar claims, based on similar reasoning, and offer similar anecdotal evidence. So I consider such evidence to be equally likely to appear regardless of whether PUA’s claims are true or false, leaving me with nothing but my priors.
kodos96:
Well, as I said, if you study the discourse in the PUA community at its best in a non-biased and detached way, desensitized to the language and attitudes you might find instinctively off-putting, you’ll actually find the epistemological standards surprisingly high. But you just have to see that for yourself.
A good comparison for the PUA milieu would be a high-quality community of hobbyist amateurs who engage in some technical work with passion and enthusiasm. In their discussions, they probably won’t apply the same formal standards of discourse and evidence that are used in academic research and corporate R&D, but it’s nevertheless likely that they know what they’re talking about and their body of established knowledge is as reliable as any other—and even though there are no formal qualifications for joining, those bringing bullshit rather than insight will soon be identified and ostracized.
Now, if you don’t know at first sight whether you’re dealing with such an epistemologically healthy community, the first test would be to see how its main body of established knowledge conforms to your own experiences and observations. (In a non-biased way, of course, which is harder when it comes to the PUA stuff than some ordinary technical skill.) In my case, and not just mine, the result was a definite pass. The further test is to observe the actual manner of discourse practiced and its epistemological quality. Again, it’s harder to do when biased reactions to various signals of disrespectability are standing in the way.
Even in physics, not all evidence comes from reproducible experiments. Sometimes you just have to make the best out of observations gathered at random opportune moments, for example when it comes to unusual astronomical or geophysical events.
You’re biasing your skepticism way upward now. The correct level of initial skepticism with which to meet the PUA stuff is the skepticism you apply to people claiming to have solved difficult problems in a way consistent with the existing well-established scientific knowledge—not the much higher level appropriate for those whose claims contradict it.
That’s a good point—the priors for PUA, though low, are nowhere near as low as for psychic phenomena. But that just means that you need a smaller amount of evidence to overcome those priors—it doesn’t lower the bar for what qualifies as valid evidence.
I think part of my problem is there is no easy way to signal you are a white hat PUA rather than a black hat. If I am interested in honest and long term relationships, I don’t want to be signalling that I have the potential to be manipulative. Especially as the name PUA implies that you are interested in picking up girls in general rather than one lady in particular.
This also applies somewhat to non-sexual relations. If someone studies human interaction to a significant degree, how do I know that they will only use their powers for good? Say in an intellectual field, or a political one for that matter. I’m sure the knowledge is useful for spin doctors and people coaching political leaders in debates.
This comment, in itself, is probably signalling an overly reflective mind on the nature of signalling though.
whpearson:
That’s unfortunately a problem that women face with men in general, PUA or no PUA. Why do you think the signaling games naturally played by men are any different? The difference is ultimately like that between a musical prodigy who learned to play the piano spontaneously as a kid, and a player with a similar level of skill who was, however, tone-deaf and learned it only much later with lots of painstaking practice. But they’re still playing the same notes.
There is absolutely nothing in the whole PUA arsenal that wouldn’t ultimately represent reverse-engineering of techniques spontaneously applied by various types of natural ladies’ men. There is no extra “manipulation” of any sort added on top of that. Even the most callous, sly, and dishonest PUA techniques ever proposed are essentially the same behavior as that practiced by certain types of naturally occurring dark personality types of men that women often, much as they loathe to admit it, find themselves wildly attracted to. (Google “dark triad,” or see the paper I linked in one of my other comments.)
It’s a name that stuck from the old days, which isn’t representative of the whole area any more (and in fact never fully was). The more modern term is “game.”
In the marginal Roissysphere, maybe. I’ve seen many attempt to get away from words like “pickup” or “seduction” though I haven’t seen any consensus on an alternative. The problem is that our culture simply has no value-neutral or positive terms for, uh, how do I put it… systematically investigating how people induce each other to want sex and relationships, and how one can practically make use of that knowledge oneself.
(It took me about four tries to write the part in italics after thinking about this subject for years, and it’s still really clunky. I could have said “understand the mating process and act on that understanding,” but that’s a bit too watered-down. My other best attempt was systematically investigating the process by which people create contexts that raise the chances of other people wanting to have sex and relationships with them, and how one can practically make use of this knowledge oneself. That phrasing is clunkier, but gets rid of the word “induce,” which a bunch of feminists once told me is “mechanical” and “objectifying.”)
“Game” has its own problems, of course. What I like about the term is that it implies that social interaction should be playful and fun. “Game” also highlights certain game-theoretic and competitive aspects of human interaction, but it might risk leading people to overstate those aspects. What I don’t like is the connotation that a game isn’t “serious” (e.g. “you think this is just a game, huh?”) and that PUAs (or critics of PUAs) may believe that “game” involves not taking other people’s feelings and interests seriously.
As I’m sure you know, some gurus (e.g. TylerDurden) have advocated viewing the process of learning pickup like learning a videogame. A similar frame is the “experiment frame,” where you think of yourself as a scientist engaging in social experiments. Such frames can be extremely valuable for beginners who need to protect themselves emotionally during the early stages of the learning process, when most of what they try isn’t going to work. Yet they are a form of emotionally distancing oneself from others; in a minority of people with existing problems, they could inhibit empathy, encourage antisocial behavior, or exacerbate feelings of alienation. In general though, I view the possible harm of such attitudes as mainly affecting the PUA.
I see these frames as training wheels which should soon be discarded once the need for such an emotionally defensive stance is gone. Most socially cool people don’t see other people as part of a video game they are playing, or as subjects in a science experiment they are running (though some Dark Triad naturals do… one favorite quote of mine from an intelligent and extremely badboy natural friend of mine who had no exposure to the seduction community: “I love causation… once you understand it, you can manipulate people”). I still engage in social experiments all the time, but when I go out, I no longer think “I’m gonna run some cool experiments tonight,” I think “I’m gonna hang out with some cool people tonight.”
I have the impression that “game” is used much more widely even as the primary general term, let alone when people talk about specific skill subsets and applications (“phone game,” “day game,” etc.). But I’m sure you’ve seen a much broader sample of all sorts of PUA-related stuff, so I’ll defer to your opinion.
That said, I see game primarily as a way of overcoming the biases and false beliefs held about male-female interactions in the contemporary culture. I would say that by historical standards, our culture is exceptionally bad in this regard. While the prevailing respectable views and popular wisdom on the matters of human pairing and sexual behavior have always been affected by biases in every culture that ever existed, my impression is that ours is exceptionally out of touch with reality when it comes to these issues. This is a special case of what I see as a much broader general trend—namely, that in contrast to hard sciences and technology, which have been making continuous and uninterrupted progress for centuries, in many areas of human interest that are not amenable to a no-nonsense hard-scientific way of filtering truth from bullshit, the dominant views have actually been drifting away from reality and into increasing biases and delusions for quite a while now.
To understand this, it is necessary to be able to completely decouple normative from factual parts in one’s beliefs about human sexual and pairing behaviors—a feat of unbiased thinking that is harder in this matter than almost any other. Once this has been done, however, a curious pattern emerges: modern people perceive the normative beliefs of old times and faraway cultures about pairing and sex as alien, strange, and repulsive, and conclude that this is because their factual beliefs were (or are) deluded and biased. Yet it seems to me that whatever one thinks about the normative part, the prevailing factual beliefs have, in many ways, become more remote from reality in modern times. (The only major exceptions are those that came from pure hard-scientific insight, like e.g. the details of women’s fertility cycle.) This of course also implies that while one can defend the modern norms on deontological grounds, the commonly believed consequentialist arguments in their favor are very seriously flawed.
The PUA insights are to a large degree about overcoming these relatively novel biases, and most PUA acolytes aren’t aware that lots of their newly gained taboo-breaking insight was in fact common knowledge not that long ago. When you look at men who have applied this insight to achieve old-fashioned pleasant monogamous harmony rather than for sarging, like that guy to whose marriage story I linked earlier, it’s impossible not to notice that it’s basically the same way our ancestors used to keep peace in the house.
I don’t. I wouldn’t want to associate myself with naturally skilled playas either.
Actually, it’s fairly simple to signal whether you’re a white-hat or black-hat PUA trainer—all you need to do is write your marketing materials for the audience you want. White hats write things that will turn black hats off, and vice versa.
I.e., white hats will talk about direct game, inner game, honesty, respect, relating to women, “relationship game”, and so on. Black hats will talk about banging sluts and wrapping them around your finger with your persuasive and hypnotic powers, and how much of a chump they used to be before they wised up to the conspiracy keeping men down. (Sadly, I’m not exaggerating.)
On the bright side, though, if you’re definitely looking for one hat or the other, they’re not too hard to find.
Most PUA material is somewhere in between though… mostly white-ish hat, with a bit too much tolerance for using false stories and excuses in order to meet people (e.g. “I’m buying a gift for my sister and can I get your opinion on this blah blah”), even though they’re not endorsing continuing such pretenses past the time required to get into an actual conversation.
It certainly would be nice to be able to screen off the portion of PUA that involves even such minor dishonesty, and have a term that just applied to purely white-hat, deception-free strategies.
Yup. It doesn’t help that a lot of people in the seduction community are so crappy at PR and present their ideas in a socially unintelligent way that makes it sound much worse than it actually is.
I don’t have a solution to this problem, except to hope that people will judge me by the way that I treat them, not by the stereotypes triggered by the negative first impression of some of my knowledge sources.
Again, I agree. I’ve been thinking about the ethics of social influence and persuasion for a while.
OK, this is, admittedly, a totally cheap shot, but… if PUA tactics are so effective, and so generally applicable to the broader world of social interactions beyond just picking up women, then how come they aren’t better at “seducing” people into buying into their way of thinking?
My hypothesis: because so much stuff in the seduction community is incorrectly sneered at even when neutrally explained, many PUAs stop bothering and revel in the political incorrectness of their private discourse. Hence you see terminology like “lair” for a seduction meetup group. Why bother with PR if you think you will be unfairly demonized either way? That’s not my perspective, but it’s a guess.
I don’t think that you should compare social-skills trainer geeks to average geeks. Of course the trainers will be much more charismatic. Otherwise they wouldn’t have elected to become trainers. But that doesn’t mean that the trainers’ specific theory has much to do with why they’re charismatic.
The relevant test would be this: Compare a successful PUA social-skills trainer to a successful non-PUA social-skills trainer. I’m sure that almost all social-skills trainers broadly agree on all sorts of principles. The question is, do PUAs in particular have access to better knowledge?
Furthermore, do the methods used by either trainer work on the typical person? Or do they work selectively on certain types of people? Of course, instrumentally, you can have good reasons for caring only about certain types of people. But, if you are making claims about the typical person, you should demonstrate that your models reflect the typical person.
ETA: There’s an analogy to dieting gurus. I’m sure that dieting gurus are better than the average person at losing weight. That is, if you forced dieting gurus to gain weight, they could probably lose the extra weight quicker than an average person of the same weight.
However, my understanding is that all the dieting theories out there perform pretty much equally well. There are probably some principles that most diets share and which are good advice. But, as I understand it, there is little evidence that any particular diet has struck upon the truth. Whatever it is that makes a given diet distinct doesn’t seem to contribute significantly to its success.
This is despite the fact that many diets have legions of followers who gather into communities to pore over their successes and failures in meticulous detail. The analogy with the PUA community seems pretty strong on that count, too.
I think the specific dimensions of performance on which PUA trainers would outscore general social skills trainers would be in short-term/immediate manipulation of social groups to achieve specified objective and tactical results.
General social skills trainers tend to focus on longer-term and “softer”, less-specific objectives, although this could vary quite a bit. They’re unlikely to have skills that would be useful at more Machiavellian objectives like, “get people in the group to compete with each other for your attention” or “make the group single out a person for ridicule”, or “get everyone in the room to think you’re a VIP who everyone else already knows”.
Granted, not every PUA trainer would have all those skills either, and that last one might be doable by some non-PUA trainers. But if you could come up with novel challenges within the scope of what a PUA social theory would predict to be doable, it would be a good test of that theory.
(Also, I predict that PUA theorists who agree to such a challenge as being within scope of their theory, will generally update their theory if it bombs. It’s an unusual PUA social theorist who hasn’t done a lot of updating and refinement already, so they are already selected for being open to experimentation, refinement, and objective criteria for success.)
I’m not sure about that… It’s actually a mathematical question, but the proper formalization escapes me at the moment. (Maybe someone could help?) At first glance, any value of hit rate can be equally well-explained by hidden characteristics or by simple randomness. Right now I believe you have to notice some visible characteristic that determines the success of your method before you can conclude that it’s not just randomness. But I can’t prove that with numbers yet.
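A minimal sketch of one way to formalize it, with made-up numbers throughout: if each woman is approached only once and only a binary outcome is recorded, then a single fixed success probability and a mixture of hidden taxa with the same marginal rate are indistinguishable from the hit rate alone; they only come apart once you can condition on some visible cue that correlates with the hidden trait (or observe the same person repeatedly).

```python
import random

random.seed(0)
N = 100_000          # one approach per woman, binary outcome recorded
P_MARGINAL = 0.2     # overall hit rate under either model (made-up number)

# Model A: pure randomness -- every woman responds with the same probability.
hits_random = sum(random.random() < P_MARGINAL for _ in range(N))

# Model B: hidden taxa -- 25% of women are "receptive" (p = 0.6), the rest
# respond with p = 0.0667, chosen so the marginal rate is still ~0.2.
def hidden_taxa_trial():
    receptive = random.random() < 0.25
    return random.random() < (0.6 if receptive else 0.0667), receptive

results = [hidden_taxa_trial() for _ in range(N)]
hits_taxa = sum(hit for hit, _ in results)

# The marginal hit rates are indistinguishable...
print(hits_random / N, hits_taxa / N)                    # both ~0.2

# ...but if some *visible* cue correlates with the hidden taxon, the models separate.
receptive_results = [hit for hit, receptive in results if receptive]
print(sum(receptive_results) / len(receptive_results))   # ~0.6 under Model B
```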
I should be a little clearer about the priors on which my claims are based.
What I am saying is that the observed level of PUA success is very likely on the hypothesis that the PUA description of the “typical woman” reflects only a small subset within a very heterogeneous population. If I furthermore take into account my prior that women are a heterogeneous population, the observed PUA success is not sufficient evidence that their description is accurate of the “typical woman”.
To be a little more precise:
Let
H = “Traits vary among women with a certain kind of distribution such that the population of women is heterogeneous. Moreover, insofar as there is a typical woman, the PUA description of her is not accurate.”
T = “The PUA description of the typical woman is accurate. That is, PUA methods can be expected to ‘work’ on the typical woman.”
S = “PUAs have the success that we have observed them to have.”
X = Prior knowledge
I grant that p(S | T & X) > p(S | H & X). That is, PUAs would be more likely to have their observed success if their model of the typical woman were accurate.
However, I think that p(S | H & X) is still fairly large. Furthermore, I think that p(H | X) is sufficiently larger than p(T | X) to imply that
p(H | S & X)
= [ p(H | X) / p(S | X) ] p(S | H & X)
> [ p(T | X) / p(S | X) ] p(S | T & X)
= p(T | S & X).
That is, the PUA model of the typical woman is probably inaccurate.
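A purely numerical illustration of that chain of inequalities, with invented values, and with H and T treated as exhaustive alternatives only so the posterior normalizes:

```python
# Illustrative numbers only; nothing here is an empirical estimate.
p_H = 0.85           # prior: women are heterogeneous, PUA "typical woman" inaccurate
p_T = 0.15           # prior: the PUA model of the typical woman is accurate
p_S_given_H = 0.4    # observed PUA success is still fairly likely under H
p_S_given_T = 0.8    # ...and more likely under T

p_S = p_H * p_S_given_H + p_T * p_S_given_T   # p(S | X) by total probability

p_H_given_S = p_H * p_S_given_H / p_S          # ~0.74
p_T_given_S = p_T * p_S_given_T / p_S          # ~0.26

print(round(p_H_given_S, 2), round(p_T_given_S, 2))
```

With these numbers H keeps most of the posterior mass even though S is twice as likely under T, which is the shape of the argument being made.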
Isn’t this begging the question? You haven’t really given me any reason to update towards your point of view.
No, it’s localizing the source of disagreement :P.
You brought the evidence of pickup artist success to the table. I’m telling you something about the priors that were already on the table. (Here, the table’s contents are my beliefs about the world.) In particular, I’m saying something about why your new evidence isn’t enough to change what I think is probably true.
It’s too difficult to give you exact values for all of the relevant probabilities. But this is a start. For example, now you know that I already grant that p(S | T & X) > p(S | H & X), so you could try to increase my estimation of their difference. Or you could try to show me that p(H | X) doesn’t exceed p(T | X) by as much as I thought. That is, you could try to show me that, even without the evidence of PUA success, I shouldn’t have thought that women are so likely to be heterogeneous.
I don’t expect you to consider all of this work to be worth your time. But at least maybe you have a better sense of what it would take than you had before.
Damn, so this is how Aumann agreement works in the real world. You update! No, you update!
Even without knowing S, the hypothesis T comes with a nifty biological explanation—all those alphas and betas. Does H have anything like that? Why would it be genetically useful for different women to prefer highly different traits in men?
I don’t think that the biology predicts that much psychological unity among humans.
That link argues that each individual interbreeding population does have psychological unity, but there are differences between populations. So PUA techniques should work or fail depending on ethnicity. (Yeah! I win the Non-PC Award!) Is that what you believe?
I see an argument that different populations could have different means for certain quantifiable traits. I don’t see an argument that a single population will be homogeneous.
Moreover, the link claims that populations have diverged on these metrics in fairly short amounts of time. I think that that is evidence for a fair amount of diversity within populations to serve as the raw material for that divergence.
I should clarify that I’m not convinced by the link’s claim that populations differ on those metrics for genetic reasons. But I certainly allow that it’s possible. It’s not ruled out by what we know about biology. I presented the link only as evidence that the case for psychological unity is not a slam-dunk.
cousin_it, I hereby award you the un-PC silver medal for offending both feminists and politically correct race-difference deniers in one sentence.
Different mating practices in different cultures is a piece of data consistent with your hypothesis.
For characteristics that we share with other primates, what would be your evidence that we would not be so heterogeneous in our inner workings?
Yes, people are pretty varied in their cultural trappings and acquired values (i.e. choices of signal expression), but we’re ridiculously common in the mental/emotional machinery by which we obtain that acculturation.
Did you mean, what would be my evidence that we would be so heterogeneous?
Assuming that you did, it’s not clear to me that we share the relevant characteristics with the other primates at the relevant level of abstraction. It’s not known to me that a female chimpanzee would react well to a male she’d never met before putting his arm around her waist.
My understanding is that mating practices vary pretty widely among the primates. They have greater and lesser sexual dimorphism. They are more or less inclined to have harem-type arrangements.
Oops, I temporarily confused homogeneous and heterogeneous, actually. ;-)
Based on your examples, I’d say that where we disagree is on what the correct level of abstraction is. I would expect “arm around the waist” to vary in attractiveness by culture, but the attractiveness of “comfortable initiating touch” to vary a good bit less.
Yes, I think that’s right. I too would expect most women to like men who evince confidence, and who act as though they’re used to being liked rather than disliked.
But it’s less clear to me that initiating touch conveys that attitude without giving 49 out of 50 women the impression that you have other undesirable qualities.
For example, perhaps, by rushing to touch, you give the impression that you are in a hurry to be physically intimate as quickly as possible. She might infer that you lack the confidence or security to pursue courtship at a leisurely pace. Perhaps you are some zero-status interloper who’s trying to get in and out as fast as you can before the local alpha male catches you. And, given the level of inter-tribe violence in the EEA, she might be leery of interlopers. Maybe they present too high a threat of violence or rape to her personally, especially if they seem eager to get intimate quickly.
You’re not imagining the same thing as pjeby when you think of “comfortable initiating touch”. If you appear to be rushing/eager, you’re not appearing comfortable and, as you’ve predicted, will appear less attractive.
I’m considering the possibility that initiating touch a few minutes after meeting a woman for the first time, in and of itself, could convey that you are in a hurry.
That’s the best time to initiate touch. Any later and it will seem out of character or contrived.
I understand that that’s the theory.
What you’re saying sounds weird to me. If there is such a thing as a “local alpha male”, he certainly wouldn’t “pursue courtship at a leisurely pace”.
I’m not convinced of that. The local alpha male might have so many irons in the fire that no one woman should expect to see him in a particular rush to court her.
But it doesn’t really matter what the local alpha male would be expected to do. The local alpha male in the EEA ought to be well known, not a stranger. It doesn’t seem plausible to me that you could fool someone into thinking that you’re him just by initiating some touch. As I understand it, strangers in the EEA were so dangerous that a woman would be very leery about admitting a stranger into her personal space.
Here’s another point: As you know, there’s a whole line of theory in PUA circles about feigning disinterest, so that the woman thinks that you must have higher market value than her. Part of my argument is appealing to that line of thinking. Touching shortly after meeting may imply that you are too eager to be intimate with her.
Let me make a few meta remarks about what I’m arguing and how I’ve argued it.
The above account may not be what is going on with women who profess that they don’t like to be touched by strangers. What I’m trying to do is to make it plausible that the PUA-constructed “typical woman” is not typical, by (1) showing that PUA success does not prove that their models of women are generally accurate, and (2) showing that even PUA theory itself has room for women who don’t like to be touched, for the above reasons. Argument (2) is just to open up a “line of retreat” by making the existence of such women seem plausible to a PUA proponent. I’m making the additional claim that such women may in fact be much more common than the PUA view as I understand it would allow.
The upshot is that PUAs mistakenly think that their success implies that the women with whom they succeed are typical.
I grant that. Aside from the Aumann-type evidence that I hold my point of view, I’ve given you little else.
However, my position is closer to the null hypothesis, the extreme version of which would posit that women correlate no more with each other than is implied by the definition of “woman”. Unless I misunderstand you, you are asserting that they tend to conform to a certain model of the typical woman espoused by PUAs. Since my view is closer to the null hypothesis, you should be the one presenting evidence for your position. My obligation is just to say what I can about what evidence would convince me.
OK, now we’re getting somewhere.
Counterpoint: whether it’s due to hidden variables, or simple randomness, in either case, what general principle are you able to extract from the example which can be usefully applied to topics other than male/female mating interactions?
Do you think the costs to women are negligible in a utilitarian sense, or just not of interest to you?
I’m not sure really. I just meant that I file it under “things about the world that are beyond my power to control”
Maybe we should be working on the FHI problem.
FHI?
Friendly Human Intelligence.
Sorry, I’m not following...
See the problem of Friendly AI; that is, if humans are going to make a powerful AI, we should make sure it doesn’t do something to wreck our shit, like turn the whole universe into paperclips or some other crazy thing—i.e. it should be Friendly.
RichardKennaway was putting a jokey spin on the idea by suggesting that we solve the problem of designing Friendly Human Intelligence, by analogy to the problem of designing Friendly Artificial Intelligence. (Edited last sentence for accuracy.)
Exactly. Well, not instead of FAI, but FHI is an important problem, as old as humanity: how to bring up your kids right and stop them wrecking the place.
I’ll take this to the open thread.
You’re right, ‘instead of’ was sloppy phrasing on my side, I’ve edited my comment.
Rationalism, which leads to atheism, is just such an aqua regia. Contact with it can destroy any and all of one’s beliefs. The result is not necessarily an improvement:
http://chesterton.org/qmeister2/any-everything.htm
It is. It can.
I agree that in principle it’s possible that someone will do worse (or become more harmful to others) by becoming more rational. But do you take it to be likely?
I’ve no basis for attaching numbers. But some of the things some people have said right here on LW or on OB make me wonder.
We are dealing with fire here. Most people learn to use matches safely. That does not mean that matches are safe.
I’d love to hear an elaboration of this. How can rationality be so dangerous?
Perfect rationality is, by definition, perfect and can never go wrong, for if it went wrong, it would not be perfect rationality. But none of us is perfect. When an imperfect person comes into contact with this ultimate solvent of unexamined beliefs, the ways they could go wrong outnumber the ways they could go right.
“There is no such thing as morality, therefore I can lie and steal all I like and you’re a chump if you don’t!” “There is no afterlife, therefore all is meaningless and I should just cut my throat now! Or yours! It doesn’t matter!” “Everything that people say is self-serving lies and if you say you don’t agree you’re just another self-serving liar!” “At last, I see the truth, while everyone else is just another slave of the Matrix!”
That last is a hazard on any path to enlightenment, rationalistic or otherwise. Belief in one’s own enlightenment—even an accurate one—provides a fully general counterargument to anyone else: they’re not as enlightened.
ETA: Those aren’t actual quotations, but I’m not making them up out of thin air. On the first, compare pjeby’s recent description of black-hat PUAs. On the second, a while back (but I can’t find the actual messages) someone here was arguing that unless he could live forever, nothing could matter to him. On the third, black-hat PUAs and people seeing status games at the root of all interaction are going that way. On the last, as I said above, this is a well-known hazard on many paths. There’s even an xkcd on the subject.
Perfect rationality can still go wrong. Consider for example a perfectly rational player playing the Monty Hall game. The rational thing to do is to switch doors. But that can still turn out to be wrong. A perfectly rational individual can still be wrong.
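A quick simulation makes the point concrete (a minimal sketch; the trial count is arbitrary): switching is the rational policy and wins about two thirds of the time, yet it still loses in roughly a third of individual games.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
switch_wins = sum(monty_hall_trial(True) for _ in range(trials)) / trials
stay_wins = sum(monty_hall_trial(False) for _ in range(trials)) / trials
print(switch_wins, stay_wins)   # ~0.667 vs ~0.333: switching is right, yet loses ~1/3 of games
```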
The rational thing to do might be to look behind the doors, but in any case, perfect rationality is not perfect omniscience.
I hope that my reply does not in any way discourage Richard Kennaway’s reply. I am curious about different responses. But mine: rationalism intends to find better ways to satisfy values, but finds in the process that values are negated, or that it would be more rational to modify values.
Some time ago, I had grand hopes that as a human being embedded in reality, I could just look around and think about things and with some steady effort I might find a world view—at least an epistemology—that would bring everything together, or that I could be involved in a process of bringing things together. Kind of the way religion would do, if it was believable and not a bunch of nonsense. However, the continued application of thought and reason to life just seems to negate the value of life.
Intellectually, I’m in a place where life presents as meaningless. While I can’t “go back” to religious thinking—in fact, I suspect I was never actually there, I’ve only ever been looking for a comprehensive paradigm—I think religions have the right idea; they are wise to the fact that intellectualism/objectivity is not the way to go when it comes to experiencing “cosmic meaning”.
Many people never think about the doublethink that religion requires. But I suspect many more people have thought about things both ways … a lifetime is a long time, with space for lots of thoughts … and found that “intellectualism” requires doublethink as well (compartmentalization), but in a way that is immensely less satisfying. In the latter, you intellectually know that “nothing matters”, yet you are powerless to experience and apply this viscerally, due to biology. Viscerally, you continue to seek comfort and avoid pain, while your intellect tells you there’s no purpose to your movements.
A shorter way of saying all of this: Being rational is supposed to help humans pursue their values. But it’s pretty obvious that having faith is something that humans value.
Although this comment is already long, it seems a concrete example is needed. Culturally, it appears that singularitarians value information (curiosity) and life (immortality). Suppose immortality were granted: we upload our brains to something replicable and durable so that we can persist forever without any concerns. What in the world would we be motivated to do? What would be the value of information? So what if the digits of pi stretch endlessly ahead of me?
I think the “mental muscles” model I use is helpful here. We have different ways of thinking that are useful for different things—mental muscles, if you will.
But the muscles used in critical thinking are, well, critical. They involve finding counterexamples and things that are wrong. While this is useful in certain contexts, it has negative side effects on one’s direct quality of life, just as using one physical muscle to the exclusion of all others would create problems.
Some of the mental muscles used by religion, OTOH, are appreciation, gratitude, acceptance, awe, compassion… all of which have more positive direct effects on quality of life.
In short, even though reason has applications that indirectly lead to improved circumstances of living, its overuse is directly detrimental to the quality of experience that occurs in that life. And while exclusive use of certain mental muscles used in religion can indirectly lead to worsened circumstances of living, they nonetheless contribute directly to an improved quality of experience.
I’ve pretty much always felt that the problem with LessWrong is that it consists of people who are already overusing their critical faculties trying to improve their quality of experience by employing those faculties even more.
In your case, the search for a comprehensive world view is an example of this: i.e., believing that if your critical faculty was satisfied, then you would be happy. Instead, you’ve discovered that using the critical faculty simply produces more of the same dissatisfaction that using the critical faculty always produces. In a very real sense, the emotion of dissatisfaction is the critical faculty.
In fact, I got the idea of mental muscles from Minsky’s book The Emotion Machine, wherein he proposes mental “resources” organized into larger activation patterns by emotion. That is, he proposes that emotions are actually modes of thought, that determine which resources (muscles) are activated or suppressed in relation to the topic. Or in other words, he proposes that emotions are a form of functional metacognition.
(While Minsky calls the individual units “resources”, I prefer the term “muscles”, because, as with physical muscles, they can be developed with training, some are more appropriate for some tasks than others, and so on. So it’s more vivid and suggestive, when training, to either engage or “relax” specific “muscle groups”.)
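For what it’s worth, here is a toy sketch of that framing (in Python; the mode names and resource names are invented for illustration and are not Minsky’s): an emotion is modeled as nothing more than a pattern that switches particular mental “muscles” on or off, so “which emotion you are in” and “which resources are active” are the same fact seen from two sides.

```python
# Toy illustration only -- not Minsky's actual formalism.
# An "emotion" is modeled as an activation pattern over shared mental resources.
EMOTION_MODES = {
    "critical":     {"find_counterexamples": True,  "gratitude": False, "awe": False},
    "appreciative": {"find_counterexamples": False, "gratitude": True,  "awe": True},
}

def active_resources(emotion: str) -> set:
    """Return the set of resources (muscles) the given emotional mode switches on."""
    return {name for name, on in EMOTION_MODES[emotion].items() if on}

print(active_resources("critical"))      # e.g. {'find_counterexamples'}
print(active_resources("appreciative"))  # e.g. {'gratitude', 'awe'}
```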
Anyway… tl;dr version: emotions and thinking faculties are linked, so how you think is how you feel and vice versa, and your choice of which ones to use has non-trivial and inescapable side-effects on your quality of life. Choose wisely. ;-)
I’ve always suspected that introspection was tied to negative emotions. It’s more of a tool to help figure out solutions to problems rather than a happy state like ‘being in flow’. People can get addicted to introspection because it feels productive, but remains depressing if no positive action is taken from it.
Do you think this is related to the mental muscles model?
Yep—Minsky actually uses something like it as an example.
I agree, and this is insightful: thinking in certain ways results in specific, predictable emotions. The way I feel about reality is the result of the state of my mind, which is a choice. However, exercising the other set of muscles does not seem to be epistemically neutral. They generate thoughts that my critical faculty would be, well, critical of.
For me, many of these muscles seem to require some extent of magical thinking. They generate a belief in a presence that is taking care of me or at least a feeling for the interconnectedness and self-organization of reality. Is this dependency unusual? Am I mistaken about the dependence?
Consider a concrete example: enjoying the sunshine. Enjoyment seems neutral. However, if I want to feel grateful, it seems I must feel grateful towards something. I can personify the sun itself, or reality. It seems silly to personify the sun, but I find it quite natural to personify reality. I currently repress personifying reality with my critical muscles; I suspect that after a while it, too, would come to feel silly.
I’m not sure what I mean by ‘personify’, but while false (or silly) it also seems harmless. Being grateful for the sun never caused me to make—say—a biased prediction about future experience with the sun. But while I’ve argued a few times here that one should be “allowed” false beliefs if they increase quality of life without penalty, I find that I am currently in a mode of preferring “rational” emotions over allowing impressions that would feel silly.
Is this conflict “real”?
Nope. The idea that your brain’s entire contents need to be self-consistent is just the opinion of the part of you that finds inconsistencies and insists they’re bad. Of course they are… to that part of your brain.
I teach people these questions for noticing and redirecting mental muscles:
What am I paying attention to? (e.g. inconsistencies)
Is that useful? (yes, if you’re debugging a program, doing an engineering task, etc. -- no if you’re socializing or doing something fun)
What would it be useful for me to pay attention to?
Is that really necessary? I have not personally observed that gratitude must be towards something in particular, or that it needs to be personified. One can be grateful in the abstract—thank luck or probability or the Tegmark level IV multiverse if you must. Or “thank Bayes!”. ;-)
Sure, there’s a link. I think that Einstein’s question about whether the universe is a friendly place is related. I also think that this is the one place where an emphasis on epistemic truth and decompartmentalization is potentially a serious threat to one’s long-term quality of life.
I think that our brains and bodies more or less have an inner setting for “how friendly/hostile is my environment”—and believing that it’s friendly has enormous positive impact, which is why religious people who believe in a personally caring deity score so high on various quality of life measures, including recovery from illness.
So, this is one place where you need to choose carefully about which truths you’re going to pay attention to, and worry much more about whether you’re going to let too much critical faculty leak over into your basic satisfaction with and enjoyment of life.
Much more than you should worry about whether your uncritical enjoyment is going to leak over and ruin your critical thinking.
Trust me, if you’re worrying about that, then it’s a pretty good sign that the reverse is the problem. (i.e., your critical faculty already has too much of an upper hand!)
This is one reason I say here that I’m an instrumentalist: it’s more important for me to believe things that are useful, than things that are true. And I can (now, after quite a lot of practice) switch off my critical faculties enough to learn useful things from people who have ridiculously-untrue theories about how they work.
For example, “law of attraction” people believe all sorts of stupidly false things… that are nonetheless very useful to believe, or at least to act as if they were true. But I avoid epistemic conflict by viewing such theories as mnemonic fuel for intuition pumps, rather than as epistemically truthful things.
In fact, I pretty much assume everything is just a mnemonic/intuition pump, even the things that are currently considered epistemically “true”. If you’ll notice, over the long term such “truths” of one era get revised to be “less wrong”, even though the previous model usually worked just fine for whatever it was being used for, up to a certain point. (e.g. Newtonian physics)
(Sadly, as models become “less wrong”, they have a corresponding tendency to be less and less useful as mnemonics or intuition pumps, and require outside tools or increased conscious cognition to become useful. (e.g. Einsteinian physics and quantum mechanics.))
Without really being able to make a case that I have successfully done so, I believe it’s possible to improve my life by thinking accurately and making wise choices. It’s hard to think clearly about areas of painful failure, and it’s hard to motivate myself to search for invalidating experiences, rather than self-protectively circumscribing my efforts, but on the other hand I love the feeling of facing and knowing reality.
That reminds me—I’d been intending to add more applause lights to my comments.
I think if you look at the original source for that phrase it reflects the double-edged sword concerns raised by this comment:
Yep.
I think perhaps discussion of the topic is also seen as low status. And your giving us advice implies that we are low status.
Because a high-status, confident man would just expect the world to conform to him because of his manifest qualities, rather than trying to adapt to the world.
Well, even if Geoffrey Miller’s theories are overshooting it a bit, the role of sexual selection in the evolution of the human mind should not be underestimated. Rather than being some isolated dark corner of irrationality that can be safely corralled and ignored, it seems to me that the various inclinations and biases related to mating behavior are, directly or indirectly, all-pervasive in the workings of human minds. Therefore, careful dissection of these behaviors can reveal a lot about human nature that is applicable more widely.
No one wants to take the rules or methods for playing status games or encouraging sexual attraction and generalize from them lessons for how to be rational. What people want to do is (a) apply rationality techniques to this field to better understand how it works and (b) take the techniques people used to learn about this field, specify them, and see if they are applicable more generally.
Women who shit test are typically quite secure, not insecure. You seem to be in a muddle about the subject. That is not to say that I condone everything that everyone on the internet ever says about dating and psychology, but the example quoted is a clear case: passing a shit test is not a bad thing. If anything, the person who uses such a test is in more questionable territory, as they are probing you for insecurity.
See previous comment about signal to noise ratio.
Edit: Practical advice that is appropriate for the majority of people on this site is fine; it doesn’t create the noise of confusion and boredom. Akrasia is a good example of an appropriate topic for practical advice, as is advice about sleeping, eating, teaching, and communicating ideas.
Look at it from the flip side. Should we do makeup tips for nerdy girls?
Sure, why not? If a nerdy girl feels she has learned something about rationality from exploring makeup techniques, I would absolutely be interested to hear about it on LessWrong. If other people don’t care about makeup, they don’t have to read her posts.