The thing is, with any model (PUA or otherwise), there are many reasons you could end up among the 49 in 50 (to go with your terminology for now):
They aren’t into your body type, facial structure, height, race, or some other superficial characteristic
They have preferences that are explained by your model, but you messed up or otherwise failed to fulfill them. (Similarly: they have preferences that are explained by your model, but you didn’t go far enough in following the model.) This is exacerbated by the tendency of people to go for partners at the edge of what they can realistically expect to attract, which makes it easy to fall just a tiny bit short of fulfilling their preferences. Even when you improve your attractiveness, you may set your sights on a higher tier of partners, and you will still be on the edge of being accepted. P(rejection | you go for a random person in the population you are into) is much less than P(rejection | you go after the most desirable person in that population who you still consider a realistic prospect).
They have preferences that are explained by your model, but someone else around fulfilled them better (or they weren’t single)
Taking these factors into account, we know from the start that the ceiling for success is well under 50 out of 50. Let’s say that at least one of these factors applies 50% of the time. Then only 25 of the 50 approaches were winnable at all, so one success really amounts to a rate of 1 in 25 among viable prospects. A ceiling of only 10 viable prospects out of 50 is also plausible, which would make it 1 in 10. If you only pursue people at the higher edge of your attractiveness bracket, the number of viable prospects could go even lower, and one success looks more and more impressive.
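To make that arithmetic concrete, here is a minimal sketch (the 50% and 80% figures are just as made up as the rest of the numbers in this thread):

```python
# Illustrative only: renormalize an observed success rate by the fraction of
# approaches that were doomed for reasons outside the model (figures invented).

def effective_success_rate(successes, attempts, p_doomed):
    """Success rate counted only over approaches where success was possible."""
    viable = attempts * (1 - p_doomed)  # approaches not ruled out by external factors
    return successes / viable

print(effective_success_rate(1, 50, p_doomed=0.0))  # 0.02 -> the naive 1 in 50
print(effective_success_rate(1, 50, p_doomed=0.5))  # 0.04 -> 1 in 25 among viable prospects
print(effective_success_rate(1, 50, p_doomed=0.8))  # 0.10 -> 1 in 10 among viable prospects
```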
When you expect to meet rejection more than 50% of the time even while following your model, using rejection to test the model is difficult. It’s hard to test such theories in isolation. At what point do you abandon or modify your model, and at what point do you protect it with an ad hoc hypothesis? A protective belt of ad hoc hypotheses isn’t always bad. Sometimes you have actual evidence for or against the presence of the kinds of factors I mention, but the data for assessing those factors is also very messy.
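One way to see why rejections make such weak tests: if the model you are using and a rival model both predict rejection most of the time, a single rejection barely moves the odds between them, while a success moves them a lot. A hedged sketch, with invented rejection rates:

```python
# Illustrative only: how much one outcome shifts the odds between two models
# that both predict rejection most of the time (rates are invented).

def posterior_odds(prior_odds, p_reject_a, p_reject_b, rejected=True):
    """Odds of model A over model B after observing one approach outcome."""
    if rejected:
        likelihood_ratio = p_reject_a / p_reject_b
    else:
        likelihood_ratio = (1 - p_reject_a) / (1 - p_reject_b)
    return prior_odds * likelihood_ratio

# Model A predicts 95% rejection, model B predicts 99%; start at even odds.
print(posterior_odds(1.0, 0.95, 0.99, rejected=True))   # ~0.96: one rejection is nearly uninformative
print(posterior_odds(1.0, 0.95, 0.99, rejected=False))  # 5.0: one success is far more informative
```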
Stated in a more general form, the problem we are trying to solve is: how do I select between models of human interactions with only my biased anecdotal experience, the biased anecdotal experience of others (who I select in a biased non-representative fashion), and perhaps theories (e.g. evolutionary psychology) with unclear applicability or research studies performed in non-naturalistic settings with unclear generalizability? Whew, what a mouthful!
This is not a trivial problem, and the answers matter. It is exactly the kind of problem where we should be refining the art of human rationality. And an increase in success on this problem (e.g. 1 in 500 to 1 in 50, to continue the trend of pulling numbers out of thin air to illustrate a point) suggests that we have learned something about rationality.
This is not a trivial problem, and the answers matter. … suggests that we have learned something about rationality.
I actually agree with this completely, and I think your analysis is rather insightful. Your conclusion seems to be that PUA topics are deserving of further study and analysis, and I have no problem with that… I only have a problem with assuming PUA-isms to be true, and citing them as “everybody knows that...” examples when illustrating completely unrelated points.
how do I select between models of human interactions with only my biased anecdotal experience, the biased anecdotal experience of others (who I select in a biased non-representative fashion), and perhaps theories (e.g. evolutionary psychology) with unclear applicability or research studies performed in non-naturalistic settings with unclear generalizability?
This is well put. The issue you raise is why I tried to be a little more explicit about the priors that I was using here. Obviously it’s a long way from giving the explicit probabilities that would be necessary to automate the Bayesian updating, but at least we can make a start at identifying where our priors differ.
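As a gesture at what making priors explicit might buy us, here is a sketch (every probability in it is invented) of two people with different priors on “the model works” updating on the same string of outcomes, to show how much of any remaining disagreement traces back to the priors:

```python
# Illustrative only: two priors on "the model works", updated on the same
# invented sequence of outcomes; the gap that remains came from the priors.

def posterior_model_works(prior, outcomes, p_success_if_works=0.10, p_success_if_not=0.02):
    """Posterior P(model works) after a sequence of True/False approach outcomes."""
    p = prior
    for success in outcomes:
        like_works = p_success_if_works if success else 1 - p_success_if_works
        like_not = p_success_if_not if success else 1 - p_success_if_not
        evidence = p * like_works + (1 - p) * like_not
        p = p * like_works / evidence
    return p

outcomes = [False] * 49 + [True]  # 1 success in 50, as in the running example
print(posterior_model_works(0.5, outcomes))  # a skeptic's posterior
print(posterior_model_works(0.9, outcomes))  # a believer's posterior: still well apart
```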