I agree that if you want to know the “effect on women if one has mastered the technique”, people who do not try PUA techniques, or who try and fail to learn them, are not of interest. One probably has to study the successes of PUAs to get a complete picture of women’s psyche.
However, if you want to know the “expected payoff if one starts to study the technique”, the failure rate given a serious time investment is hugely relevant. If I had known that 90% of math students fail within the first semester (note: this is not the case), I would not have tried it at all, as I am not in the top 10% by any measure, however I would construct it.
Note: I do not know what success-rate PUAs claim and/or achieve.
Here is my best attempt to catalog the success rate of the guys with a pickup background I’ve known in real life. Of course, in some cases I have imperfect information and don’t know how they are doing, in which case I will guess, and my guess will be conservative (e.g. I will assume that they are the same way I last saw them, rather than having improved since then). This sample isn’t representative at all, so take it with a grain of salt, but it will help other people understand some of my priors about the success of pickup.
Me: Started out with social anxiety disorder. 6 months: substantial social skills improvement. 8 months: lost virginity. Next few years: Stuck on a plateau of getting numbers and kisses, but social skills slowly improving. Since then: going in and out of flings and relationships; currently in a relationship. I could give several other success metrics, but it would sound like I’m bragging.
4 other guys: Began with severe social deficits. Now they have no problem dating and go in and out of flings and relationships. One of them started out as 300 lbs and massively insecure, but lost weight, applied himself, and is now massively popular with women, to the point of sometimes refusing sex because he is looking for relationships.
1: Had one relationship before pickup and was struggling after. Hooked up with several women for a year, met one he liked, dated her for a couple years, and married her.
1: Started out with severe social problems and alienation, along with depression. Lost his virginity, but then struggled for multiple years without a single date. However, in the last year, he greatly improved his fashion sense and started going out multiple times a week. He is now quite socially popular, and several women in our social circle are really into him, though he isn’t attracted to them. Women come up to him in clubs and compliment him. He went out with this one girl who was really into him, but he wasn’t interested in a relationship, so he ended things and they are just friends. He recently had a fling with a girl who was in town.
1: I gave him a brush of pickup knowledge around the same time he was getting into the kink subculture. Butch dominant women started looking at him like a piece of tasty meat, and were lining up to beat him. He said that the pickup stuff helped him keep up conversations when women approached him, even though he still had trouble approaching. He is in a relationship now.
1: His pickup background improved his fashion sense and social skills, but he still has difficulties interacting with women. He is mega-cool around guys, but still feels very awkward talking to women he is interested in. He says that pickup is part of what caused the awkwardness (the inverse of the previous guy). He isn’t really applying himself to pickup nowadays and is working on his career instead.
1: Similar story, except he managed to end up in a long-term relationship, which is now over.
1: Similar story, except he isn’t awkward around women, and gets phone numbers. He is very socially popular, but still has difficulties expressing sexuality with women.
4: Guys with some exposure to pickup, mostly through me. They are still struggling and having minimal success, as far as I know. Their difficulties are easily explained within the pickup paradigm, such as fashion issues, posture (the classic computer slouch), and women reading them as extremely “nerdy” and/or emotionally inexpressive. One of them may have Asperger’s syndrome. Some of these guys have gone on some online dates. These guys all have < 1 year experience with pickup.
Here are some interesting results, out of these 15 guys (the percentages are tallied in the sketch after the list):
5 (33%): Massive sexual success
7 (47%): At least one relationship
5 (33%): Still significant lack of success at sexual contact or dates
6 (40%): Still lack of consistent success at sexual contact or dates currently, but has had some success in those areas after studying pickup
2 (13%): Lack of consistent success, even though they have at least average fashion sense and social skills
15 (100%): Minor social skills improvement
11 (73%): Major social skills improvement
1 (7%): Married
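For anyone checking the arithmetic, here is a minimal sketch that reproduces the percentages above from the raw counts out of 15. The category labels are my shorthand, and the categories overlap, just as they do in the list.

```python
# Tally of the outcomes listed above; counts are the ones reported in the list.
total = 15
counts = {
    "massive sexual success": 5,
    "at least one relationship": 7,
    "still significant lack of success at sexual contact or dates": 5,
    "some success after studying pickup, but no consistent success yet": 6,
    "lack of consistent success despite average fashion sense and social skills": 2,
    "minor social skills improvement": 15,
    "major social skills improvement": 11,
    "married": 1,
}

for label, n in counts.items():
    # Percentages are rounded to the nearest whole percent, matching the list.
    print(f"{n:2d} ({n / total:.0%}): {label}")
```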
The main variables that appear to correlate with success (order of causation unclear):
Fashion sense, particularly non-nerdy presentation. I doubt this variable fully explains success, but it may gate improvement in other areas.
Social skills and self-confidence
Years of experience (all of the highly successful guys have multiple years of experience, and some had plateaus where they struggled)
For a sample consisting almost entirely of nerdy guys with social deficits, this distribution of outcomes is probably pretty impressive relative to the alternative (it’s quite possible that by now, I would have been on a couple of dates with a few women and still be a virgin). Only one guy reports pickup exacerbating his struggles.
My limited empirical evidence does suggest that success with women as a function of attractiveness is a step function. There can be periods of rapid improvement, and plateaus of little progress. There is very much a feeling of “leveling up” as things come together.
For instance, whenever I’ve seen a guy hit both above-average fashion sense and above-average social skills, the attention he gets from women suddenly jumps. It’s as if female attention is a multiplicative function of the different components of attraction.
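To illustrate the intuition, here is a toy model with numbers I am making up for the example; it is not anything from the pickup literature, just a sketch of how a multiplicative model produces the jumps described above:

```python
# Toy multiplicative model with invented numbers: overall attention is roughly
# the product of component scores, so fixing the last weak component produces
# a sudden jump rather than a smooth, gradual increase.

def attention(fashion: float, social: float, confidence: float) -> float:
    """Hypothetical component scores on an arbitrary scale where 1.0 is average."""
    return fashion * social * confidence

# Below-average fashion keeps the product low despite decent social skills.
print(attention(fashion=0.5, social=1.2, confidence=1.2))  # 0.72
# Raising only the fashion component above average more than doubles the product.
print(attention(fashion=1.3, social=1.2, confidence=1.2))  # 1.872
```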
The plateaus can be tough, especially if you start out on one. However, improvements in social skills during those times can keep you motivated.
Of course, one of the issues with estimating the effects of pickup knowledge is that none of this is placebo tested. Since PU itself teaches that self-confidence is crucial, having a method for meeting women that you believe works should by itself produce positive results, especially for people who were previously too anxious to walk up to a stranger and say hello.
Also, the correlates you’re reporting are pretty general and 101-level. I’d be a little more suspicious of the efficacy of the more ‘advanced’ routines and techniques in the PUA literature.
(Though as usual I pretty much agree with you)
You know, I have this great cure for scurvy. But I cannot tell you about it, since it has not been properly double-blind tested yet.
I have a great cure for the flu. Take some Muscovy Duck offal and dilute it to 1 part in 100^200 with water.
Did it work on the one guy you tried it on?
Yup! Only took like 3 days with bed rest!
Placebo testing would be hilarious. Isn’t that a standard comedy plot? A shy man asks for pickup and courting tips, gets terrible ones, and implements them with disastrous results?
Not safe for work.
Medicine holds itself to the standard “do better than placebo”. I am not sure if it is fair to hold PUA to the same standard.
You don’t think every professional pick-up artist/dating coach would say their material works better than placebo?
I realize of course we’re not talking about formal science, but we still need to be aware of the limitations of personal anecdotes versus controlled studies. Who cares if it is fair?
You might be interested to know that Style says roughly one out of twenty people who start to learn PUA reach a high level of skill.
I personally agree with Martin, however, especially in relation to diets. Diets DO work; they are just difficult to implement, as changing your lifestyle often is. That applies to exercise, studying a new language, or anything that requires a large time investment before you see payoffs. The math comparison is especially appropriate. In this way PUA is no different from any other self-improvement course that you might decide to undertake.
That depends on what you mean by “work”. If your intent is to improve your life through achieving some goal, but the side effects of a strategy cause a net cost in quality of life even if the intermediate goal is achieved, then I’d say that the method doesn’t work.
That’s two ways beside the point. They work by the most reasonable and common definition, and often do so without causing a net cost in quality of life. Even if diets don’t suit a majority of people, they work, unlike reciting the alphabet backwards before you go to sleep.
Furthermore, if a procedure, perfectly applied, yields no significant results for 99% of the population but is clearly effective for the remaining 1%, it’s not a sham. It might not be the most efficient procedure if you can’t distinguish beforehand who it will work for, but it still works. Even if diets aren’t in such a category, the point that something can be should be accepted, and your argument should be focused on the strongest possible case.
Bear in mind that a diet only ‘works’ if you actually successfully keep to it in the long term. Only a minority of people stick to diets in the long term. That’s where choosing diets based on convenience and ease of compliance becomes more important than raw effectiveness.
Bear in mind that science only ‘works’ if you actually successfully use it over the long term. Only a minority of people stick to science in the long term. That’s where choosing knowledge-gathering techniques based on convenience and ease of compliance becomes more important than raw effectiveness.
I see your point (which is valid), but mine is that the cost of using a method does not reduce the effectiveness of that method, just the number of people who apply it. One might say that it is too costly to uniformly apply science, diets or PUA, but that’s a different statement than saying that they don’t work.
I thought your reference to the conventional ‘works’ was a valid reply to Nancy’s “but maybe it will also make you sad or otherwise have external costs” point. Convention would call that ‘works’ even though naturally it is a cost to consider. (For yet another ‘although’: I expect people who find a diet that works in the mid to long term to also enjoy improved experience in other aspects of life, so I don’t think there is much of a balance to be had there at all.)
What I would not agree with is the complete exclusion of psychological considerations from deciding whether a diet ‘works’ or not. For example, for a bare-minimum ‘diet’ I would argue that “have a breakfast including at least 30 g of protein within 30 minutes of waking up” works far more effectively than “eat 10% less”. This is just based on how humans work at a psychological and physiological level.
I actually consider science to be a good analogy to go by, and not at all the reductio ad absurdum it was presented as. ‘Science’ is the application of various traditional rules and limits upon rational thinking that discard all sorts of reasoning that is valid, for the purpose of avoiding some of the more drastic human failure modes and biases. By limiting evidence and officially sanctioned persuasion to some formulaic rules, it makes it somewhat harder for money, politics, and ego to sabotage epistemic progress. (Unfortunately it is still not hard enough.)
What if your intent is to lose weight? You’re pre-defining “work” for the benefit of your argument.
A good place to deconstruct my own argument.
The math comparison misses one important piece: math is clearly defined and has standard textbooks. If you ask around for recommendations on how to learn math, you get similar responses and will end up learning similar things, up to a certain degree. There is only one type of math! In general, people agree on what math is and what it is not.
PU, as well as PD, is a very broad, not clearly defined subject that contains a mash-up of many other topics. It is contradictory, and it is produced by amateurs who generally do not care about scientific results. You get advice that goes against what the mainstream transmits (which we on LW are somewhat used to in other contexts). But you also find the claim that the subjects of your interest will generally give you bad advice and do not even know what works for them. So will your peers, your family, potential natural friends, the media, and anyone else you could possibly ask. That makes for a very bad heuristic in regard to its truthfulness.
And then there is the annoying property of PD advice: it is not only difficult to actually get, it also hurts. Sometimes we carry gaping holes that really hurt our social life, and no one has the guts to tell us, since they are afraid of a bad reaction.
One easy to understand example is trying to tell a colleague or friend that he needs to do something about his smell.
I am not aware of a safe way to navigate this. It would be interesting to see real scientists, or science-minded people, undertake this exploration. But there are way too many ways for it to go wrong.
PU does contain basilisks. So handle with care. And do not believe any one particular source completely.
The second sentence here is true, but the first one is false. There is mainstream math, and then there are alternatives. Of course, there are insane crackpot ideas, but there are also alternative forms of mathematics that are studied by serious researchers who earn tenure for it and prove valid theorems. Buzzwords to search for include “intuitionism”, “constructive mathematics”, “predicativism”, “finitism”, and “nonclassical mathematics” generally.
This mostly only affects things from after the 19th century, however, so nothing significant about the mathematics that most people learn in school. Even going on to more advanced material, there is a very definite mainstream to follow, so this doesn’t really affect your point; this is just a hobby horse of mine.
And thou shalt get a geek point for it. I was kind of waiting for someone to point this out.
And I should have mentioned “experimental mathematics”, which is really different! This term can be interpreted in weak and strong ways; the former, in which experiments are a preliminary to proof, is normal, but the latter, in which massive computer-generated experimental results are accepted as a substitute for proof when proof seems unlikely, is different. The key point is that most true theorems that we can understand have no proofs that we can understand, a fact that can itself be proved (at least if you use the length of the text as a proxy for whether we can understand it).
Oooh, PU basilisks. Where? Show me!
Are you sure you want me to do that on a public forum? I do not want to have my account deleted for posting dangerous stuff.
Then message me. The concept of a PU basilisk seems unlikely to me but still somewhat intriguing. The closest things I can imagine are in the form of disillusionment with ideals.
It would be interesting to have a collection of basilisks somewhere, like an ammo dump for a memetic war.
Absolutely! Maybe not on lesswrong, that might make some people cry. But I’d love to have a list somewhere else. And this can be considered an open request to send any basilisk spotted in the wild to me personally for examination.
I’ve yet to see a basilisk that was remotely intimidating to me. And I would like to be able to further improve my resistance to exposure to new ‘basilisks’ while they are framed as basilisks, so I am even less likely to be vulnerable to them in the wild.
A superintelligence would almost certainly be able to construct sentences that could hack my brain and damage it. Some humans could if they were able to put me in a suitable social or physical environment and ensure ongoing exposure (and environment and exposure are far more important than the abstract concepts conveyed). But things like “Roko’s Basilisk” are just cute. You can tame them and keep them as pets. :)
My perspective on this is very similar to yours. If you were sent any interesting PU basilisks, would you please forward them to me?
Being careful not to criticise what was sent to me (and by so doing discourage others) I don’t think what I was sent fits the term ‘basilisk’. Instead I got some well thought out considerations and potential pitfalls that people may fall into along a PUA journey. In fact I think people would appreciate them being spoken publicly. Rather than “things that will kill you just by looking at them” they are things that you are better off looking at so you can avoid falling into them. Obviously the exceptional case is the pessimistic person who is looking for excuses not to try—which is not an uncommon mindset.
I would not repost them here (that would be discourteous), but I suggest that if the author did post his thoughts publicly they would be well received (i.e. they would get an 8+ karma rating if the comment was not buried too deeply to gain exposure).
I violently agree with all of this. Have you seen any basilisk-like ideas besides roko’s? Roko’s at least looks like a real basilisk until you think about it. Everything else I’ve seen doesn’t come close to living up to the name.
‘Violent agreement’ seems to have been adopted for use in situations in which the participants have been arguing aggressively only to discover that they agree on the substantive issues. For a term that hasn’t been hijacked as jargon I go with “vehemently”. It has a more visceral feel to it too. :)
Roko’s is the most interesting I’ve seen too. Although for some people a combination of Pascal’s Wager and certain religious doctrines about children not being held accountable for their beliefs until a certain age would do it. Once again, the problem would come from the combination of being able to apply abstract reasoning with the naivety and weakness that keep one from following that reasoning all the way to a sane conclusion.
Anyone have a really solid working definition for ‘basilisk’ as we use it here?
Am I supposed to be able to see it from just this? Assuming it’s not the kind of thing that would hurt LW posters can you explain? Otherwise, pm it?
One interesting idea is that it seems plausible to create basilisks that only affect your memetically/cognitively different enemies; perhaps the only way to avoid the harm of the basilisk is to deconvert from your religion/ideology. A basilisk that only worked on, say, religious fundamentalists would be a really powerful weapon (I’m not suggesting that the basilisk be capable of killing anyone, necessarily).
Pascal’s Wager → Accept popular religion (Disclaimer: I did label these people as naive and unable to take reasoning all the way to a sane conclusion. Nevertheless, it works on some intelligent people better than on some unintelligent people.)
There exist popular religious doctrines that God will send young children to heaven regardless because they are too young to have been able to do the conversion thing. I think the “Age Of Accountability” concept may be related.
If a child is not likely to convert to the ‘True’ religion in adulthood then they are (believed to be) likely to go to Hell instead of Heaven if they grow up.
Such a child would go to Heaven if murdered while young but Hell if they grow up.
Such a child would be better off if they are murdered.
Therefore...
A quick google on “do young children go to hell?” led me to this, excerpt:

Therefore, we have been given a specific example in the Old Testament of an infant who died and would live forever in heaven. And Jesus Christ Himself, in the New Testament, stated that little children retain the qualities that make a person eligible to inherit the kingdom of God. We see, then, that infants and small children that die are in a safe state, and will live eternally in heaven.

With such clear statements from the Bible about the eternal destiny of dead infants and small children, why have religious people mistakenly taught that babies go to hell when they die?
(The scripture quotes in question were totally reaching by the way. But that’s the whole point of theology.)
Knowledge or concepts, the comprehension of which will cause one significant disutility.
That is over-broad unfortunately. The concept needs to be distinguished from “ugly truths” where the disutility comes from an emotional reaction to how far from ideal the world is.
Knowledge or concepts, the comprehension of which may cause substantial damage to one’s instrumental rationality.
Do we include concepts selected to elicit a maximal emotional response? Such as particularly Funny Jokes or things so sad that they would drive most people to suicide? They do seem to be a different concept but still deserve a similar title. (If not basilisk then at least cockatrice or one of the Gorgons).
I think I know what wedrifid is getting at, but I don’t think Pascal’s Wager would do it. Pascal’s Wager argues that one should act as if one believes in God because the costs are low and the potential benefits (Heaven) are high.
But in order to get to the particular failure state at which I think wedrifid is hinting, you can’t just be betting on God—you have to be absolutely certain that Heaven exists and that its joys outweigh on every axis everything that Earth has to offer. Most people, no matter what they say, are not that certain, which is why we don’t routinely slaughter infants in order to ensure their blameless souls entry into Heaven. (Similar logic has been invoked to rationalize murders—such as innocent deaths at witch trials—but in these cases, as a justification pasted on after the fact rather than an honest motive towards murder.)
No, absolute certainty is definitely not required. The cost is increased, so a proportionate increase in probability*payoff is required. But this is still all dwarfed by the arbitrarily large payoffs inherent in religious questions. The whole point of ‘afterlife’-focused doctrine is to encourage the flock to discount all ‘earthly’ matters as trivial compared to eternal questions.

Most people, no matter what they say, are not that certain, which is why we don’t routinely slaughter infants in order to ensure their blameless souls entry into Heaven.

No, that would not be a rational reason to refrain from the slaughter. The difference between 90% sure and absolutely certain isn’t really much of a big deal when you have the chance of flipping the sign bit of an arbitrarily large disutility payoff (Hell). A 0.05% hunch would be more than enough.
Rational agents that really have arbitrarily large utility payoffs floating around in their utility function will inevitably do things that look insane to us.
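A sketch of the expected-value arithmetic behind this, with numbers invented purely for illustration (the payoff, cost, and probability are stand-ins, not anything claimed in the thread):

```python
# Toy expected-utility arithmetic with invented numbers: an enormous afterlife
# payoff dominates ordinary earthly costs even at a very small probability.
heaven_payoff = 1e12      # stand-in for an "arbitrarily large" utility
earthly_cost = 1e3        # ordinary cost of acting on the belief
p_doctrine_true = 0.0005  # the "0.05% hunch" mentioned above

expected_gain = p_doctrine_true * heaven_payoff - earthly_cost
print(expected_gain)  # 499999000.0: the tiny probability still dominates the cost
```

The point is only that once a utility function contains unbounded payoffs, the probability term stops doing the work we intuitively expect it to do.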
Right, but now you’ve left the standard formulation of Pascal’s Wager. The original Pascal’s Wager includes the stipulation that one loses nothing by behaving as if God were real. To get to a point where you’re willing to kill kids, obviously you have to go a lot further—you must be ready to incur substantial costs as a consequence of belief.
I am sure that this is possible, but I wonder why it has not been done yet, or at least has not appeared on my radar. It might be one of the darker arts, and a very interesting one!
Creating basilisks is hard, as evidenced by the fact that we have no recorded instance of one ever existing.
For similar reasons, good bodyguards—the kind that would take a bullet for you—are hard to find. (apologies to Magic: The Gathering)
Or: the only basilisks that are easy to create are those that directly disable the host’s ability to spread them.
Is it time to link to Monty Python yet?
No. It would be better to first develop defenses against them. Basilisks seem to only affect people of a certain mental capacity, able to understand and process them. If you look up the Christopher Langan interview, or his writings, or this Ted Kaczynski (Unabomber) guy, you see how really bright people can go wrong.
I would hate LW to contribute to that.
I want LWers and myself to not only have a realistic view of reality, but also to be able to live in it and be happy and productive.
I’m not sure how you develop defenses to basilisks without knowing what they are. Unless we get lucky and there is a fully general countermeasure.
I was just talking about collecting them though- it’s another question entirely whether or not the list should be public. One doesn’t usually leave ammo dumps unlocked.
And, probably more importantly, without certain other mental capacities that allow them to handle information appropriately.
If you tell wedrifid privately, then you have to promise to tell me.
I have a few minor basilisks (not from PU alone, but from combining PU with psychometrics or feminism). Nothing so bad that I think it would make people want to ban me, but it might be disconcerting and depressing for many people, and some of it I’m still thinking through.
Did the two annihilate each other, destroying swathes of your cerebral cortex?
Or I’ll tell you. :)