I am super against having this kind of post on LessWrong. I think the association of LW with dating advice is harmful, and we should try to avoid it. I also suspect that the terminology used in this post makes it worse than it would ordinarily be. I recoil from the phrase ‘mating plan’, and while a negative emotional reaction isn’t usually an argument, it might be relevant in this case since my point is about outside perception.
I haven’t read the OP, am not that interested in it, though Geoffrey Miller is quite thoughtful.
I think that the main things building up what LW is about right now are the core tags, the tagging page, and the upcoming LW books based on the LW review vote. If you look at the core tags, there’s nothing about dating there (“AI” and “World Modeling” etc). If you look at the vote, it’s about epistemology and coordination and AI, not dating. The OP also hasn’t got much karma, so I’m a bit confused that you’re arguing this shouldn’t be discussed on LW, and weak-downvoted this comment. (If you want to argue that a dating post has too much attention, maybe pick something that was better received like Jacobian’s recent piece, which I think embodies a lot of the LW spirit and is quite healthy.)
I’m not much worried about dating posts like this being what we’re known for. Given that it’s a very small part of the site, if it still became one of the ‘attack vectors’, I’m pretty pro just fighting those fights, rather than giving in and letting people on the internet who use the representativeness heuristic to attack people decide what we get to talk about. (Once you open yourself to giving in on those fights, they just start popping up everywhere, and then like 50% of your cognition is controlled by whether or not you’re stepping over those lines.)
I think that the main things building up what LW is about right now are the core tags, the tagging page, and the upcoming LW books based on the LW review vote. If you look at the core tags, there’s nothing about dating there (“AI” and “World Modeling” etc). If you look at the vote, it’s about epistemology and coordination and AI, not dating.
There was also nothing about dating on LW back when I had the discussion I’ve referred to with the person who thought (and probably still thinks) that a big driver behind the appeal of LW is sexism. Someone who tries to destroy your reputation doesn’t pick a representative sample of your output; they pick the parts that make you look the worst. (And I suspect that “someone trying to destroy EY’s reputation” was part of the causal chain that led to the person believing this.)
This post and Jacobian’s are not the same. Before the edit, I think this post had the property that if the wrong people read it, their opinion of LW would become irreversibly and extremely negative. I don’t think I’m exaggerating here. (And of course, the edit only happened because I made the comment.) As for the part about it having low karma: it probably has low karma because of people who share my concerns. It has 12 votes; if you remove all downvotes, it doesn’t have low karma anymore. And I didn’t know how much karma it was going to have when I commented.
I’m not much worried about dating posts like this being what we’re known for. Given that it’s a very small part of the site, if it still became one of the ‘attack vectors’, I’m pretty pro just fighting those fights, rather than giving in and letting people on the internet who use the representativeness heuristic to attack people decide what we get to talk about.
I’m pretty frustrated with this paragraph because it seems so clearly to be defending the position that feels good. I would much rather be pro fighting than pro censoring. But if your intuition is that the result is net positive, I ask: do you have good reasons to trust that intuition?
As I’ve said in another comment, the person I’ve mentioned is highly intelligent, a data scientist, effective altruist, signed the Giving-what-we-can pledge, and now runs their own business. I’m not claiming they’re a representative case, but the damage that has been done in this single instance due to an association of LW with sexism strikes me as so great that I just don’t buy that having posts like this is worth it, and I don’t think you’ve given me a good reason for why it is.
I wasn’t aware it used to talk about a ‘mating plan’ everywhere, which I think is amusing and I agree sounds kind of socially oblivious.
I really think that we shouldn’t optimise for people not joining us because of weak, negative low-level associations. I think the way that you attract good people is through strong wins, not through their never hearing any bad associations. Nassim Taleb is an example I go to here, where the majority of times I hear about him I think he’s being obnoxious or aggressive, and often just disagree with what he says, but I don’t care too much about reading that because occasionally he’s saying something important that few others are.
Elon Musk is another example, where the majority of coverage I see of him is negative, and sometimes he writes kinda dumb tweets, but he gives me hope for humanity and I don’t care about the rest of the stuff. Had I seen the news coverage first, I’d still have been mindblown by seeing the rockets land and changed my attitude towards him. I could keep going on with examples… new friends occasionally come to me saying they read a review of HPMOR saying Harry’s rude and obnoxious, and I respond “you need to learn that’s not the most important aspect of a person’s character”. Harry is determined and takes responsibility and is curious and is one of the few people who has everyone’s back in that book, so I think you should definitely read and learn from him, and then the friend is like “Huh, wow, okay, I think I’ll read it then. That was very high and specific praise.”
A lot of this comes down to the graphs in Lukeprog’s post on romance (another dating post, I’m so sorry).
I think that LessWrong is home to some of the most honest and truth-seeking convo on the internet. We have amazing thinkers who come here like Zvi and Paul and Anna and Scott and more and the people who care about the conversations they can have will come here even if we have weird associations and some people hate us and call us names.
(Sarah also wrote the forces of blandness post that I think is great and I think about a lot in this context.)
I guess I didn’t address the specific example of your friend. (Btw I am also a person who was heavily involved with EA at Oxford; I ran the 80k student group while I was there, and an EAGx!) I’m sorry your friend decided to write off LessWrong because they heard it was sexist. I know you think that’s a massive cost that we’re paying in terms of thousands of good people avoiding us for that reason too.
I think that negative low-level associations really matter if you’re trying to be a mass movement and scale, like a political movement. Republicans/Democrats kind of primarily work to manage whether the default association is positive or negative, which is why they spend so much time on image-management. I don’t think LW should grow 100x users in the next 4 years. That would be terrible for our mission of refining the art of human rationality and our culture. I think that the strong positive hits are the most important, as I said already.
Suppose you personally get really valuable insights from LW, and that people’s writing here helps you understand yourself as a person and become more virtuous in your action. If you tell your EA friend that LessWrong was a key causal factor in you levelling up as a person, and they reply “well that’s net bad because I once heard they’re sexist” I’m not that impressed by them. And I hope that a self-identified EA would see the epistemic and personal value there as primary rather than the image-management thing as primary. And I think that if we all think everybody knows everyone else thinks the image-management is primary… then I think it’s healthy to take the step of saying out loud “No, actually, the actual intellectual progress on rationality is more important” and following through.
I feel a lot of uncertainty after reading your and Zack’s responses and I think I want to read some of the links (I’m particularly interested in what Wei Dai has to say) and think about this more before saying anything else about it – except for trying to explain what my model going into this conversation actually was. Based on your reply, I don’t think I’ve managed to do that in previous comments.
I agree with basically everything about how LW generates value. My model isn’t as sophisticated, but it’s not substantially different.
The two things that concern me are
People disliking LW right now (like my EA friend)
The AI debate potentially becoming political.
On #1, you said “I know you think that’s a massive cost that we’re paying in terms of thousands of good people avoiding us for that reason too.” I don’t think it’s very common. Certainly this particular combination of technical intelligence with an extreme worry about gender issues is very rare. It’s more like: if the utility of this one case is −1, then I might guess the total direct utility of allowing posts of this kind in the next couple of years is probably somewhere in [-10, 40] or something. (But this might be wrong, since there seem to be more good posts about dating than I was aware of.) And I don’t think you can reasonably argue that there won’t be fifty comparable cases’ worth of damage.
I currently don’t buy the arguments that make sweeping generalizations about all kinds of censorship (though I could be wrong here, too), which would substantially change the interval.
On #2, it strikes me as obvious that if AI gets political, we have a massive problem, and if it becomes woke not to take AI risk seriously, we have an even larger problem, and it doesn’t seem impossible that tolerating posts like this is a factor. (Think of someone writing a NYT article about AI risk originating from a site that talks about mating plans.) On the above scale, the utility of AI risk becoming anti-woke might be something like −100,000. But I’m mostly thinking about this for the first time, so this is very much subject to change.
I could keep going on with examples… new friends occasionally come to me saying they read a review of HPMOR saying Harry’s rude and obnoxious, and I’m like you need to learn that’s not the most important aspect of a person’s character. Harry is determined and takes responsibility and is curious and is one of the few people who has everyone’s back in that book, so I think you should definitely read and learn from him, and then the friend is like “Huh, wow, okay, I think I’ll read it then. That was shockingly high and specific praise.”
I’ve failed this part of the conversation. I couldn’t get them to read any of it, nor trust that I have any idea what I’m talking about when I said that HPMoR doesn’t seem very sexist.
I think that negative low-level associations really matter if you’re trying to be a mass movement and scale, like a political movement.
Many of the world’s smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don’t understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.
Rather than debating the case for or against caution, I think the most interesting question is how to arrange a peaceful schism. Team Shared Maps That Reflect The Territory and Team Seek Power For The Greater Good obviously do not belong in the same “movement” or “community.” It’s understandable that Team Power doesn’t want to be associated with Team Shared Maps because they’re afraid we’ll say things that will get them in trouble. (We totally will.) But for their part of the bargain, Team Power needs to not fraudulently market their beacon as “the rationality community” and thereby confuse innocents who came looking for shared maps.
I think of my team as being “Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned”. I don’t think calling it “Team Seek Power For The Greater Good” is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.
That’s fair. Maybe our crux is about to what extent “don’t draw fake stuff on the map” is actually a serious constraint? When standing trial for a crime you didn’t commit, it’s not exactly comforting to be told that the prosecutor never lies, but “merely” reveals Shared Maps That Reflect The Territory But With a Few Blank Spots Where Depicting the Territory Would Have Caused the Defendant to Be Acquitted. It’s good that the prosecutor never lies! But it’s important that the prosecutor is known as the prosecutor, rather than claiming to be the judge. Same thing with a so-called “rationalist” community.
I don’t think anyone understands the phrase “rationalist community” as implying a claim that its members don’t sometimes allow practical considerations to affect which topics they remain silent on. I don’t advocate that people leave out good points merely for being inconvenient to the case they’re making, optimizing for the audience to believe some claim regardless of the truth of that claim, as suggested by the prosecutor analogy. I advocate that people leave out good points for being relatively unimportant and predictably causing (part of) the audience to be harmfully irrational. I.e., if you saw someone other than the defendant commit the murder, then say that, but don’t start talking about how ugly the judge’s children are even if you think the ugliness of the judge’s children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which discussions are more like talking about how ugly the judge’s children are.
I guess I feel like we’re at an event for the physics institute and someone’s being nerdy/awkward in the corner, and there’s a question of whether or not we should let that person be or whether we should publicly tell them off / kick them out. I feel like the best people there are a bit nerdy and overly analytical, and that’s fine, and deciding to publicly tell them off is over the top and will make all the physicists more uptight and self-aware.
To pick a very concrete problem we’ve worked on: the AI alignment problem is totally taken seriously by very important people who are also aware that LW is weird, but Eliezer goes on the Sam Harris podcast, and Bostrom is invited by the UK government to advise, and so on, and Karnofsky’s got a billion dollars and is focusing to a large extent on the AI problem. We’re not being defined by this odd stuff, and I think we don’t need to feel like we are. I expect as we find similar concrete problems or proposals, we’ll continue to be taken very seriously and have major success.
As I see it, we’ve had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.
There was also nothing about dating on LW back when I had the discussion I’ve referred to with the person who thought (and probably still thinks) that a big driver behind the appeal of LW is sexism.
I’m having trouble understanding what this would mean. Why would a big driver behind LW’s appeal be sexism?
As I’ve said in another comment, the person I’ve mentioned is highly intelligent, a data scientist, effective altruist, signed the Giving-what-we-can pledge, and now runs their own business.
If someone can look at LW, with its thousands of posts discussing futurism, philosophy, rationality, etc, and come away concluding that the appeal of the site is sexism (as opposed to an interest in those topics), I feel tempted to just write off their views.
Sure, you can find some sexist posts or commenters here or there (I seem to remember a particular troll whom we eventually vanquished with the switchover from LW 1.0 to LW 2.0). But to think that they’re the norm, or that it’s a big part of the general appeal of the site?
To conclude that, it seems like you’d either have to have gotten an extremely biased sample of LW (and not been thoughtful enough to realize this possibility on your own), or you’d have to have some major blindspots in your thinking about these things, causing you to jump to bizarre conclusions.
In either case, it seems like the issue is more with them than with LW, and all else equal, I wouldn’t feel much drive to cater to their opinion. (Even if they’re otherwise an intelligent and productive individual.) People can just have blindspots, and I don’t think you should cater to the people with the most off-base view of you.
Am I missing something? Do you think their view was more justified than this? Or do you just think it’s worth paying more costs to cater to such people, even if you agree that they’re being unreasonable?
Do you think their view was more justified than this?
A clear no. I think their position was utterly ridiculous. I just think that blind spots on this particular topic are so common that it’s not a smart strategy to ignore them.
Why would a big driver behind LW’s appeal be sexism?
I don’t think this currently is true for LW myself, but if a space casually has, say, sexist or racist stuff in it, people looking can be like “oh thank god, a place I can say what I really think [that is sexist or racist] without political correctness stopping me” and then that becomes a selling point for people who want to talk about sexist or racist stuff. Suspect the commenter means something like this.
I have an extremely negative emotional reaction to this.
More seriously. While LW can be construed as “trying to promote something” (i.e. rational thinking), in my opinion it is mostly a place to have rational discussions, using much stronger discursive standards than elsewhere on the internet.
If people decide to judge us on cherry-picked examples, that is sad, but it is much better than having them control what topics are or are not allowed. I am with Ben on this one.
About your friend in particular: if they have to be turned off of the community because of some posts, and because we engage with ideas at the object level instead of yucking out socially awkward ideas, then they might not yet be ready to receive rationality in their heart.
This post triggers a big “NON-QUANTITATIVE ARGUMENT” alarm in my head.
I’m not super confident in my ability to assess what the quantities are, but I’m extremely confident that they matter. It seems to me like your post could be written in exactly the same way if the “wokeness” phenomenon was “half as large” (fewer people care about, or they don’t care as strongly). Or, if it was twice as large. But this can’t be good – any sensible opinion on this issue has to depend on the scope of the problem, unless you think it’s in principle inconceivable for the wokeness phenomenon to be prevalent enough to matter.
I’ve explained the two categories I’m worried about here, and while there have been some updates since (biggest one: it may be good to talk about politics now if we assume AI safety is going to be politicized anyway), I still think about it in roughly those terms. Is this a framing that makes sense to you?
It very much is a non-quantitative argument—since it’s a matter of principle. The principle being not to let outside perceptions dictate the topic of conversations.
I can think of situations where the principle could be broken, or would be unproductive. If upholding it would make it impossible to have these discussions in the first place (because engaging would mean you get stoned, or something) and hiding is not an option (or still too risky), then it would make sense to move conversations towards the Overton window.
Said otherwise, the quantity I care about is “ability to have quote rational unquote conversations”, and no amount of outside woke prevalence can change that *as long as it doesn’t drive enough community members away*. It will be a sad day for freedom and for all of us if that ends up one day being the case.
It has 12 votes; if you remove all downvotes, it doesn’t have low karma anymore.
As a note, I wouldn’t have upvoted this post normally, but I didn’t think it deserved to be negative so I gave it one. I’m pretty sure there’s a bunch of people who vote partly based on the current score, so if you remove all the downvotes, you probably remove a bunch of the upvotes too.
Before the edit, I think this post had the property that if the wrong people read it, their opinion of LW is irreversibly extremely negative.
Was the edit just to add the big disclaimer about motivation at the top? If nothing else was changed, then I struggle to see what would have been so objectionable about the pre-edit version. I might be missing something, but I don’t for example see it advocating any views or practices that I’d consider harmful (in contrast to some PUA stuff).
Seems like the worst thing you could reasonably say about it is that it’s a bit heteronormative and male-centric. I don’t think there’s anything wrong with having a dating advice post written from that perspective, but I do think it would have been good to add a sentence clarifying that at the top, just so that non-heterosexual male readers don’t feel like they’re assumed not to be part of the audience.
But other than that, is there anything else about it that would need to change?
Was the edit just to add the big disclaimer about motivation at the top?
No; it was more than that (although that helps, too). I didn’t make a snapshot of the previous version, so I can’t tell you exactly what changed. But the post is much less concerning now than it used to be.
I post on LessWrong because I want people to evaluate my arguments on whether they will make the world better or not. I agree that there are many parts of the internet where I can post and people will play the “does this word give me the bad feels” game. I post on LessWrong to get away from that nonsense.
Actually improving your own life and the lives of others requires discussing what is true. Virtue signalling in my description of dating would just leave both me and my potential partners lonely more often. It’s not worth it.
I think there are a few reasons this post got a comment like Rafael’s but your others didn’t.
Any community that is about dating seems to attract the kind of people needed to turn it into /r/theredpill. So I see the need to post in places like this, although such posts need to be infrequent enough not to turn this place sour in the same manner. This is perhaps the inflection point where it has hit too many posts in too short a time.
There does seem to be more risk of violating “do no harm” here than in your other posts. You mention trying to seek out a position teaching GRE materials where you could flirt with someone you have some level of authority over, and seeking out black women who often have insecurities over how they are viewed with regard to what is conventionally considered attractive in America. Seeking out a power imbalance can put you in positions where you hurt someone.
Signaling does have a use. Leaving the article the same, but adding 1-4 additional sentences signaling that you know pick-up-artist-type stuff can hurt people and that this isn’t your intention, would change the entire tone. People here don’t know you in person and we can’t pick up on body language, so sometimes you really do just need to type out the virtues you had hoped people would assume you had. You did have more signaling of that in your first post. The way you talked painted a picture of an honest, confused guy, whereas these more fleshed-out plans paint a serial pick-up artist, even though I know that isn’t your intention.
Personally, I did just want to call out that you should question the potential power balance of the GRE teaching situation and other than that don’t have many comments. There are some things I think aren’t optimally effective, but I generally think this stuff needs to be discovered through experimentation and too much borrowed power is bad for both people in the relationship. I left my advice on your first post vague when I could have given you a flowchart for a reason. I stick to calling out potential areas you may regret rather than optimizing the actual plan.
I post on LessWrong because I want people to evaluate my arguments on whether they will make the world better or not. I agree that there are many parts of the internet where I can post and people will play the “does this word give me the bad feels” game. I post on LessWrong to get away from that nonsense.
I recognize that my comment was not kind toward you, and I’m sorry for that. But I posted it anyway because I’m more concerned with people seeing this post and coming away with a strongly negative view of LW. I’ve already had discussions with someone who formed these associations based on much weaker reasons, and I believe they still hold a negative view of LW to this day, even though 99+% of the content has virtually no relation to gender issues.
My claim is that whatever benefit comes from discussing this topic is not large enough to justify the cost, not that the benefit doesn’t exist. I don’t expect the dating world to get any better, but I don’t think LW should get involved in that fight. There are many topics we would be more effective at solving and that don’t have negative side effects.
(And I’ve listened to every Rationally Speaking episode since Julia became the solo host.)
Thank you for the apology. I understand your motivations better now.
I disagree that the dating world cannot get any better. I think this is an incredibly neglected and moderately tractable area.
Here’s why I still think there is positive utility to discussing this -
1. The association exists because of Scott Alexander’s post. That post gets tons of views and is frankly a terrible introduction to rationalist thinking. A new reader can easily see it as an identity politics post and dismiss rationalism.
2. We best sell rationalism by showing how we think, not by showing that we bite bullets; lots of other communities bite bullets. I try to highlight aspects of how rationalists think about problems in each post, so that visitors get a better image of us (experimentation, random trials, scholarship, etc.). Lukeprog’s romance posts are a great example.
If the public associated rationalist stuff with Lukeprog’s work, there would be a better argument, but the Scott Alexander post is the real face.
Finally, Google searches for rationality/less wrong/slate star codex are in a gradual decline, so the value of self-censorship to achieve mainstream adoption is lower. The barriers to mainstream adoption probably are not the dating stuff.
I did replace mating with dating throughout the post for availability heuristic reasons.
My gut feel is similar to yours—dating is similar to politics in that it’s excellent to apply rationality to it, but many people go funny in the head, and it’s a difficult topic to use in talking about rationality. Also, it can attract unwanted attention from outsiders with non-aligned motivations for the conversation.
My hesitation is that I don’t know exactly how to draw the line between dating and other forms of personal advice around a pragmatic approach to real-life behaviors. Unlike politics, this is individual (or duo or small-group) rationality. I think it’s valuable to have concrete explorations of how to apply the very general bounded-rationality technique “Examine your goals, understand and improve your capabilities, strategize your behavior over longer terms”. I think it’s very valuable to have discussions around emotions and interactions so complex that one can’t partition them into Bayesian-suitable propositions.
It’s probably worth asking a top-level question to see if there’s any general consensus.
The wrong kind of debate (about dating, politics, etc.) is when people already come with their ideologies fully formed, and try to get the majority of the audience on their side. The kind of debate which in real life would quickly evolve into a shouting contest.
Examining your goals, that would be a valuable thing to do. For example, in the context of dating, are you even aware of what exactly you are trying to achieve? Is it the physical aspects of sex? The emotional connection? Inspiring conversations? Shared values? The ability to plan your future together with someone? How important is it go get all of these from the same person? Is that even realistic? If you had to make a compromise, what is the relative importance of these things? What would be an absolute deal-breaker for you? When you observe people around you, in whose place would you like to be? Why?
But sometimes you don’t even know, unless you already tried. Sometimes you want something because other people are saying it’s good, and only when you try it, you realize it doesn’t make you happy.
So, talking about experience is better than talking about beliefs. But with dating, often the less experience people have, the stronger beliefs they express. Again, that’s like politics: those who have strongest opinions on how to run the country, usually never tried to organize even something small.
Understanding and improving your capabilities—this is often better discussed without discussing dating. I mean, as an example, let’s take the simplistic belief “women prefer rich men”. Assuming that you believe that, and therefore you want to become rich; how exactly would you do that? And suddenly the debate turns to compound interest, passively managed index funds, frugality, etc., and we are not debating dating anymore. Similarly, if you believe the success in dating is mostly about your conversation skills, then we can discuss conversation skills, without ever mentioning dating. Generally, if you believe that X helps at dating, focus on X, and stop talking about dating. Either X will solve your problems, or you were wrong.
I’ll comment on this post from Geoffrey Miller’s perspective (which I still believe is the closest map to the territory for heterosexual men)
1. Examining your goals is really valuable. I agree you should start by exploring your goals and your ethics.
take the simplistic belief “women prefer rich men”. Assuming that you believe that, and therefore you want to become rich;
This is good advice. To clarify, neither I nor Miller believes that women prefer rich men. Financial success is probably correlated with extraversion, intelligence, conscientiousness, social skills, the ability to provide, and an effective degree of assertiveness, which are all attributes women have evolved to be attracted to.
But A/B testing the preferred attributes yourself would take a lifetime. The evopsych approach is to get a prior for which traits are attractive from evolutionary thought experiments, then test the beliefs with psych methods. I decided to get my priors from Miller because his epistemology seems sound in interviews and writing. Then I A/B tested his theories by posing hypotheticals to female friends and trying to guess which behavior they would label more attractive. I found Miller’s theories generalize pretty well, much better than my own mind projection. So I went with it. So beliefs about what women prefer are empirical; use your scholarship and low-cost tests.
Generally, if you believe that X helps at dating, focus on X, and stop talking about dating.
I agree with this too. My strategy is hyperfocusing on dating theory for a month, then writing up what I learned for comprehension. Now I can stop talking about dating moving forward, which is awesome!
The wrong kind of debate (about dating, politics, etc.) is when people already come with their ideologies fully formed, and try to get the majority of the audience on their side.
This is usually true. For my part, my original ideology a month ago said that women do not prefer high-status men. I realized I was in conflict with the data and my incorrect belief was hurting me. So I changed it. Unfortunately, new readers may assume my original ideology was “women are gold diggers”. C’est la vie!
I mentioned “gold digging” as an ideological label, not to imply that being attracted to high-status suitors is the same as gold-digging. Personally, I think what turns you on cannot be unethical. I wouldn’t judge a woman who has more crushes on captains than skippers or a man who has more crushes on large-breasted women. So if “gold-digging” implies marrying someone for money, in the absence of attraction, that is a different issue. No comment on whether gold-digging is ethical, but it’s a separate question.
This distinction between preferences and behaviors helps escape the ideological traps of discussing romance.
Posts on Less Wrong should focus on getting the goddamned right answer for the right reasons. If the “Less Wrong” and “rationalist” brand names mean anything, they mean that. If something about Snog’s post is wrong—if it proposes beliefs that are false or plans that won’t work, then it should be vigorously critiqued and downvoted.
If the terminology used in the post makes someone, somewhere have negative feelings about the “Less Wrong” brand name? Don’t care; don’t fucking care; can’t afford to care. What does that have to do with maximizing the probability assigned to my observations?
If the terminology used in the post makes someone, somewhere have negative feelings about the “Less Wrong” brand name? Don’t care; don’t fucking care; can’t afford to care.
The person I was referring to is a data scientist and effective altruist with a degree from Oxford who now runs their own business. I’m not claiming that they would be an AI safety researcher if not for associations of LW with sexism – but it’s not even that much of a stretch.
I can respect if you make a utility calculation here that reaches a different result, but the idea that there is no tradeoff or that it’s so obviously one-sided that we shouldn’t be discussing it seems plainly false.
Happy to discuss it. (I feel a little guilty for cussing in a Less Wrong comment, but I am at war with the forces of blandness and it felt appropriate to be forceful.)
Lately, however, I seem to see a lot of people eager to embrace censorship for P.R. reasons, seemingly without noticing or caring that this is a distortionary force on shared maps, as if the Vision was to run whatever marketing algorithm can win the most grant money and lure warm bodies for our robot cult—which I could get behind if I thought money and warm bodies were really the limiting resource for saving the world. But the problem with “systematically correct reasoning except leaving out all the parts of the discussion that might offend someone with a degree from Oxford or Berkeley” as opposed to “systematically correct reasoning” is that the former doesn’t let you get anything right that Oxford or Berkeley gets wrong.
From my perspective, the reason against having articles like this here is a combination of several factors, each of them individually not a big problem, but together it’s a bit too much.
The topic is sensitive to some people.
The quality is quite low.
The author already wrote related articles within one month, which suggests more may be coming.
Individually, each of these things is not a big problem for me. Sensitive topics can be approached carefully. Low-quality articles can be ignored. An author writing multiple articles about their pet topic makes the website more interesting, and can be ignored if you don’t care about the topic.
But put these three things together, and you get something that is irritating (if you are not irritated by the articles, you probably will be by reactions of people who are irritated by them), cannot be easily ignored, and doesn’t bring any benefit that could justify the costs.
(On the good side, the author noticed that not appearing needy is attractive. Good. The idea of hiring an acting coach to improve social skills: interesting. But that’s two good ideas per four articles. Bad ratio.)
Another thing is that the author seems to only be interested in this one topic. But this is not a dating-advice website; this is a rationality website. Yes, we had some dating advice in the past, but it was usually written by people who already got some rationalist creds by writing highly upvoted articles on other topics.
The buzzwords in the first article were too much; luckily the following articles contained less of that:
This post is about applying rationality to my dating life… it was a great triumph over motivated reasoning… I notice my confusion… This explanation agreed with the results of my favorite epistemology… I reinvestigated with Bayesian reasoning… Such a complicated theory should have a low prior.
Imagine instead:
I was bad at dating. First I believed it was because of some problems in my life; I assumed the women I wanted to date perceived them somehow. But sometimes I was actually more successful when I was distracted by my problems, and less successful when I had my life under control.
Then I got advice to act less needy. I tried it, and it worked.
I can provide an evolutionary explanation for why it works (scarcity implies higher value), but considering that I was also able to provide an evolutionary explanation for my previous hypothesis, I probably shouldn’t trust these ad-hoc explanations too much.
I agree with Dagon’s analogy that dating is similar to politics in its ability to lower the quality of discussion.
There’s a bunch here to respond to, I’ll take them in order of how relevant they are to my empirical questions, and put the infohazard stuff at the bottom.
I disagree, the Yudkowsky quote is too vague and you misinterpret it. If you talk about being “rational” you will not achieve the way. But if you talk about specific individual epistemic tools with a defined empirical goal and a desire to know and grow stronger, you will better map the territory. My use of those cached thoughts from Yudkowsky made my reasoning way better. Plz comment specific misinterpretations on the original.
Since you don’t specifically call out any misused epistemic tools, I will justify my appeals to simplicity (it’s the same as Robin Hanson’s signalling argument).
The simplest explanation for when relationships occur is randomness. I approach more in good times but have less success, which is unlikely if each approach is equal. So there is a sorting mechanism I misunderstood. Next I listened to people’s explanations but after many long conversations I noticed an explanation from one instance did not predict behavior in another instance. So I read Cialdini and thought about the next simplest explanation, and arrived at the status signalling explanation. This explanation does a great job of explaining the data.
The signalling explanation outperforms the neediness explanation because neediness suggests that a confident “I like you” on the first date would work (it doesn’t). When I was having a crisis I was desperately needy, in the sense that I craved a friend and companion to help me through the traumatic experience. But I put less effort into signalling interest in relationships. That increased my approach success rate. If I had never looked for a simple hypothesis and rejected rationalizations, I would not have noticed the signalling definition of needy. So the appeals to simplicity and heuristics are powerful. My reasoning would have been worse without those “buzzwords”.
Imagine instead:
Your prescribed method would have failed because in aggregate I outperformed my peers even in needy periods. If I’d just compared myself to peers I would not have seen this pattern.
This is really important because in dating you do not make one decision to be “needy” or “not needy”. You make tons of small, contextual decisions about when to text, when to say “I love you”, when to have “the talk”, and who, how and why to approach. I don’t need to know whether “needy” is bad; I need to be able to predict optimal signals in diverse social/relationship contexts. I can’t A/B test the whole relationship, so simple theories with good predictive power (like signalling) are incredibly useful.
TLDR: The language you use to describe your reasoning affects your reasoning
Another thing is that the author seems to only be interested in this one topic. But this is not a dating-advice website; this is a rationality website.
I cannot worry about dating theory my whole life. I crammed the whole process into one month. It worked really well. A lot of posts in rapid succession is a great way to build comprehension; I would recommend it.
The dating websites are full of ideology and unethical people (with a few notable exceptions). If I had posted there, I would have gotten very different comments, which I did not want.
Yes, we had some dating advice in the past, but it was usually written by people who already got some rationalist creds by writing highly upvoted articles on other topics.
Life optimization is a well-accepted theme on LW. Had I written four posts about task prioritization, then a summary “task prioritization plan”, no one would have complained.
Another thing is that the author seems to only be interested in this one topic.
I prefer not to write anonymously, so I write anonymously only on this topic. Again, I’m sorry to be anonymous but the topic is too sensitive.
Finally—yes, I am not yet a great writer. I came here to grow stronger. I don’t apologize for trying; you can’t improve if you never get feedback.
I had a negative reaction too. OP, would you really be comfortable with future friends or mates seeing this? Especially since you’ve included a lot of personal details which could identify you?
I came up with a similar plan years ago tbh, and it worked, but I would not have shared it with anyone. Not sure I can justify why, that’s just my reaction.
I’m sad that so many are alone and don’t know why. I was lonely for much of my life and lacked tools to understand or change my romantic life. Talking about these issues with my friends and siblings taught me that our society fails to equip lonely people with useful tools to become more attractive, particularly for men. I mean attractive behaviorally and physically.
The conventional advice is terrible; “be yourself” and “be honest, tell her how you feel” are so easily misinterpreted that they make things worse. Meanwhile, pick-up artist forums have an uneven epistemology and a weak evidence base, are poorly explained, and are often unethical. A third way is possible. I wrote this to show people it exists, that they don’t have to be lonely and confused forever.
A note on my mating ethics
1. Preferences cannot be immoral. You cannot judge a woman for preferring physically attractive, high-status men. You cannot judge me for preferring physically attractive, ambitious women. The conscious part of your brain does not get to override the part that chooses when to be horny (imo, not a psychologist).
2. Honesty is important. I make sure people know my intentions as early as possible (expressing them in a non-awkward way). That is why I start the mating plan with my intentions.
Honesty does not require saying everything you think. I feel like you haven’t really addressed the concern here. And I didn’t say anything about judging you for preferring attractive women.
I am super against having this kind of post on LessWrong. I think association of LW with dating advice is harmful and we should try to avoid it. I also suspect that terminology used in this post makes it worse than it would ordinarily be. I recoil from the phrase ‘mating plan’, and while a negative emotional reaction isn’t usually an argument, it might be relevant in this case since my point is about outside perception.
I haven’t read the OP, am not that interested in it, though Geoffrey Miller is quite thoughtful.
I think that the main things building up what LW is about right now are the core tags, the tagging page, and the upcoming LW books based on the LW review vote. If you look at the core tags, there’s nothing about dating there (“AI” and “World Modeling” etc). If you look at the vote, it’s about epistemology and coordination and AI, not dating. The OP also hasn’t got much karma, so I’m a bit confused that you’re arguing this shouldn’t be discussed on LW, and weak-downvoted this comment. (If you want to argue that a dating post has too much attention, maybe pick something that was better received like Jacobian’s recent piece, which I think embodies a lot of the LW spirit and is quite healthy.)
I’m not much worried about dating posts like this being what we’re known for. Given that it’s a very small part of the site, if it still became one of the ‘attack vectors’, I’m pretty pro just fighting those fights, rather than giving in and letting people on the internet who use the representativeness heuristic to attack people decide what we get to talk about. (Once you open yourself to giving in on those fights, they just start popping up everywhere, and then like 50% of your cognition is controlled by whether or not you’re stepping over those lines.)
There was also nothing about dating on LW back when I had the discussion I’ve referred to with the person who thought (and probably still thinks) that a big driver behind the appeal of LW is sexism. Someone who tries to destroy your reputation doesn’t pick a representative sample of your output, they pick the parts that make you look the worst. (And I suspect that “someone trying to destroy EY’s reputation” was part of the causal chain that lead to the person believing this.)
This post and Jacobian’s are not the same. Before the edit, I think this post had the property that if the wrong people read it, their opinion of LW is irreversibly extremely negative. I don’t think I’m exaggerating here. (And of course, the edit only happened because I made the comment.) And the part about it having low karma, I mean, it probably has low karma because of people who share my concerns. It has 12 votes; if you remove all downvotes, it doesn’t have low karma anymore. And I didn’t know how much karma it was going to have when I commented.
I’m pretty frustrated with this paragraph because it seems so clearly be defending the position that feels good. I would much rather be pro fighting than pro censoring. But if your intuition is that the result is net positive, I ask: you have good reasons to trust that intuition?
As I’ve said in another comment, the person I’ve mentioned is highly intelligent, a data scientist, effective altruist, signed the Giving-what-we-can pledge, and now runs their own business. I’m not claiming they’re a representative case, but the damage that has been done in this single instance due to an association of LW with sexism strikes me as so great that I just don’t buy that having posts like this is worth it, and I don’t think you’ve given me a good reason for why it is.
I wasn’t aware it used to talk about a ‘mating plan’ everywhere, which I think is amusing and I agree sounds kind of socially oblivious.
I really think that we shouldn’t optimise for people not joining us because of weak, negative low-level associations. I think the way that you attract good people is by strong wins, not because of not hearing any bad-associations. Nassim Taleb is an example I go to here, where the majority of times I hear about him I think he’s being obnoxious or aggressive, and often just disagree with what he says, but I don’t care too much about reading that because occasionally he’s saying something important that few others are.
Elon Musk is another example, where the majority of coverage I see of him his negative, and sometimes he writes kinda dumb tweets, but he gives me hope for humanity and I don’t care about the rest of the stuff. Had I seen the news coverage first, I’d still have been mindblown by seeing the rockets land and changed my attitude towards him. I could keep going on with examples… new friends occasionally come to me saying they read a review of HPMOR saying Harry’s rude and obnoxious, and I respond “you need to learn that’s not the most important aspect of a person’s character”. Harry is determined and takes responsibility and is curious and is one of the few people who has everyone’s back in that book, so I think you should definitely read and learn from him, and then the friend is like “Huh, wow, okay, I think I’ll read it then. That was very high and specific praise.”
A lot of this comes down to the graphs in Lukeprog’s post on romance (another dating post, I’m so sorry).
I think that LessWrong is home to some of the most honest and truth-seeking convo on the internet. We have amazing thinkers who come here like Zvi and Paul and Anna and Scott and more and the people who care about the conversations they can have will come here even if we have weird associations and some people hate us and call us names.
(Sarah also wrote the forces of blandness post that I think is great and I think about a lot in this context.)
I guess I didn’t address the specific example of your friend. (Btw I am also a person who was heavily involved with EA at Oxford, I ran the 80k student group while I was there and an EAGx!) I’m sorry your friend decided to write-off LessWrong because they heard it was sexist. I know you think that’s a massive cost that we’re paying in terms of thousands of good people avoiding us for that reason too.
I think that negative low-level associations really matter if you’re trying to be a mass movement and scale, like a political movement. Republicans/Democrats kind of primarily work to manage whether the default association is positive or negative, which is why they spend so much time on image-management. I don’t think LW should grow 100x users in the next 4 years. That would be terrible for our mission of refining the art of human rationality and our culture. I think that the strong positive hits are the most important, as I said already.
Suppose you personally get really valuable insights from LW, and that people’s writing here helps you understand yourself as a person and become more virtuous in your action. If you tell your EA friend that LessWrong was a key causal factor in you levelling up as a person, and they reply “well that’s net bad because I once heard they’re sexist” I’m not that impressed by them. And I hope that a self-identified EA would see the epistemic and personal value there as primary rather than the image-management thing as primary. And I think that if we all think everybody knows everyone else thinks the image-management is primary… then I think it’s healthy to take the step of saying out loud “No, actually, the actual intellectual progress on rationality is more important” and following through.
I feel a lot of uncertainty after reading your and Zack’s responses and I think I want to read some of the links (I’m particularly interested in what Wei Dai has to say) and think about this more before saying anything else about it – except for trying to explain what my model going into this conversation actually was. Based on your reply, I don’t think I’ve managed to do that in previous comments.
I agree with basically everything about how LW generates value. My model isn’t as sophisticated, but it’s not substantially different.
The two things that concern me are
People disliking LW right now (like my EA friend)
The AI debate potentially becoming political.
On #1, you said “I know you think that’s a massive cost that we’re paying in terms of thousands of good people avoiding us for that reason too.” I don’t think it’s very common. Certainly this particular combination of technical intelligence with an extreme worry about gender issues is very rare. It’s more like, if the utility of this one case is −1, then I might guess the total direct utility of allowing posts of this kind in the next couple of years is probably somewhere in [-10, 40] or something. (But this might be wrong since there seem to be more good posts about dating than I was aware of.) And I don’t think you can reasonably argue that there won’t be fifty worth of comparable cases.
I currently don’t buy the arguments that make sweeping generalizations about all kinds of censorship (though I could be wrong here, too), which would substantially change the interval.
On #2, it strikes me as obvious that if AI gets political, we have a massive problem, and if it becomes woke not to take AI risk seriously, we have an even larger problem, and it doesn’t seem impossible that tolerating posts like this is a factor. (Think of someone writing a NYT article about AI risk originating from a site that talks about mating plans.) On the above scale, the utility of AI risk becoming anti-woke might be something like −100.000. But I’m mostly thinking about this for the first time, so this is very much subject to change.
I’ve failed this part of the conversation. I couldn’t get them to read any of it, nor trust that I have any idea what I’m talking about when I said that HPMoR doesn’t seem very sexist.
Many of the world’s smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don’t understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.
Rather than debating the case for or against caution, I think the most interesting question is how to arrange a peaceful schism. Team Shared Maps That Reflect The Territory and Team Seek Power For The Greater Good obviously do not belong in the same “movement” or “community.” It’s understandable that Team Power doesn’t want to be associated with Team Shared Maps because they’re afraid we’ll say things that will get them in trouble. (We totally will.) But for their part of the bargain, Team Power needs to not fraudulently market their beacon as “the rationality community” and thereby confuse innocents who came looking for shared maps.
I think of my team as being “Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned”. I don’t think calling it “Team Seek Power For The Greater Good” is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.
That’s fair. Maybe our crux is about to what extent “don’t draw fake stuff on the map” is a actually a serious constraint? When standing trial for a crime you didn’t commit, it’s not exactly comforting to be told that the prosecutor never lies, but “merely” reveals Shared Maps That Reflect The Territory But With a Few Blank Spots Where Depicting the Territory Would Have Caused the Defendant to Be Acquitted. It’s good that the prosecutor never lies! But it’s important that the prosecutor is known as the prosecutor, rather than claiming to be the judge. Same thing with a so-called “rationalist” community.
I don’t think anyone understands the phrase “rationalist community” as implying a claim that its members don’t sometimes allow practical considerations to affect which topics they remain silent on. I don’t advocate that people leave out good points merely for being inconvenient to the case they’re making, optimizing for the audience to believe some claim regardless of the truth of that claim, as suggested by the prosecutor analogy. I advocate that people leave out good points for being relatively unimportant and predictably causing (part of) the audience to be harmfully irrational. I.e., if you saw someone else than the defendant commit the murder, then say that, but don’t start talking about how ugly the judge’s children are even if you think the ugliness of the judge’s children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which discussions are more like talking about how ugly the judge’s children are.
I guess I feel like we’re at an event for the physics institute and someone’s being nerdy/awkward in the corner, and there’s a question of whether or not we should let that person be or whether we should publicly tell them off / kick them out. I feel like the best people there are a bit nerdy and overly analytical, and that’s fine, and deciding to publicly tell them off is over the top and will make all the physicists more uptight and self-aware.
To pick a very concrete problem we’ve worked on: the AI alignment problem is taken totally seriously by very important people who are also aware that LW is weird, yet Eliezer goes on the Sam Harris podcast, Bostrom is invited by the UK government to advise, and so on, and Karnofsky’s got a billion dollars and is focusing to a large extent on the AI problem. We’re not being defined by this odd stuff, and I think we don’t need to feel like we are. I expect that as we find similar concrete problems or proposals, we’ll continue to be taken very seriously and have major success.
As I see it, we’ve had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.
Lol, “mating” was not my best choice of words. But hey, I’m here to improve my writing.
I’m having trouble understanding what this would mean. Why would a big driver behind LW’s appeal be sexism?
If someone can look at LW, with its thousands of posts discussing futurism, philosophy, rationality, etc, and come away concluding that the appeal of the site is sexism (as opposed to an interest in those topics), I feel tempted to just write off their views.
Sure, you can find some sexist posts or commenters here or there (I seem to remember a particular troll whom we eventually vanquished with the switchover from LW 1.0 to LW 2.0). But to think that they’re the norm, or that it’s a big part of the general appeal of the site?
To conclude that, it seems like you’d either have to have gotten an extremely biased sample of LW (and not been thoughtful enough to realize this possibility on your own), or you’d have to have some major blindspots in your thinking about these things, causing you to jump to bizarre conclusions.
In either case, it seems like the issue is more with them than with LW, and all else equal, I wouldn’t feel much drive to cater to their opinion. (Even if they’re otherwise an intelligent and productive individual.) People can just have blindspots, and I don’t think you should cater to the people with the most off-base view of you.
Am I missing something? Do you think their view was more justified than this? Or do you just think it’s worth paying more costs to cater to such people, even if you agree that they’re being unreasonable?
A clear no. I think their position was utterly ridiculous. I just think that blind spots on this particular topic are so common that it’s not a smart strategy to ignore them.
I don’t think this is currently true for LW myself, but if a space casually has, say, sexist or racist stuff in it, people looking can be like “oh thank god, a place I can say what I really think [that is sexist or racist] without political correctness stopping me”, and then that becomes a selling point for people who want to talk about sexist or racist stuff. I suspect the commenter means something like this.
Thanks. That does seem like the most likely interpretation.
I have an extremely negative emotional reaction to this.
More seriously. While LW can be construed as “trying to promote something” (i.e. rational thinking), in my opinion it is mostly a place to have rational discussions, using much stronger discursive standards than elsewhere on the internet.
If people decide to judge us on cherry pickings, that is sad, but it is much better than having them control what topics are or are not allowed. I am with Ben on this one.
About your friend in particular: if they have to be turned off from the community because of some posts and the fact that we engage with ideas at the object level instead of yucking out socially awkward ideas, then they might not yet be ready to receive rationality in their heart.
This post triggers a big “NON-QUANTITATIVE ARGUMENT” alarm in my head.
I’m not super confident in my ability to assess what the quantities are, but I’m extremely confident that they matter. It seems to me like your post could be written in exactly the same way if the “wokeness” phenomenon was “half as large” (fewer people care about, or they don’t care as strongly). Or, if it was twice as large. But this can’t be good – any sensible opinion on this issue has to depend on the scope of the problem, unless you think it’s in principle inconceivable for the wokeness phenomenon to be prevalent enough to matter.
I’ve explained the two categories I’m worried about here, and while there have been some updates since (biggest one: it may be good to talk about politics now if we assume AI safety is going to be politicized anyway), I still think about it in roughly those terms. Is this a framing that makes sense to you?
It very much is a non-quantitative argument—since it’s a matter of principle. The principle being not to let outside perceptions dictate the topic of conversations.
I can think of situations where the principle could be broken, or where upholding it would be unproductive. If upholding it would make it impossible to have these discussions in the first place (because engaging would mean you get stoned, or something) and hiding is not an option (or still too risky), then it would make sense to move conversations towards the Overton window.
Put differently, the quantity I care about is “ability to have quote rational unquote conversations”, and no amount of outside woke prevalence can change that *as long as it doesn’t drive enough community members away*. It will be a sad day for freedom and for all of us if that ends up one day being the case.
As a note, I wouldn’t have upvoted this post normally, but I didn’t think it deserved to be negative so I gave it one. I’m pretty sure there’s a bunch of people who vote partly based on the current score, so if you remove all the downvotes, you probably remove a bunch of the upvotes too.
Was the edit just to add the big disclaimer about motivation at the top? If nothing else was changed, then I struggle to see what would have been so objectionable about the pre-edit version. I might be missing something, but I don’t for example see it advocating any views or practices that I’d consider harmful (in contrast to some PUA stuff).
Seems like the worst thing you could reasonably say about it is that it’s a bit heteronormative and male-centric. I don’t think there’s anything wrong with having a dating advice post written from that perspective, but I do think it would have been good to add a sentence clarifying that at the top, just so that non-heterosexual male readers don’t feel like they’re assumed not to be part of the audience.
But other than that, is there anything else about it that would need to change?
OP here to clarify.
Edits—Added disclaimer at the top; changed every instance of “mating” to “dating”; replaced personal details with <anonymized>
I honestly don’t see what is so objectionable about the original version either. I like your last sentence; I will add that as well.
No; it was more than that (although that helps, too). I didn’t make a snapshot of the previous version, so I can’t tell you exactly what changed. But the post is much less concerning now than it used to be.
Ah, I see. Thanks!
I disagree. The dating world doesn’t get better if we never think about it. I recommend listening to Dr. Diana Fleischman’s talk on Rationally Speaking for a transhumanist perspective.
I post on LessWrong because I want people to evaluate my arguments on whether they will make the world better or not. I agree that there are many parts of the internet where I can post and people will play the “does this word give me the bad feels” game. I post on LessWrong to get away from that nonsense.
Actually improving your life and the lives of others requires discussing what is true. Virtue signalling in my description of dating would just leave both me and my potential partners lonely more often. It’s not worth it.
I think there are a few reasons this post got a comment like Rafael’s but your others didn’t.
Any community that is about dating seems to attract the kind of people needed to turn it into /r/theredpill. So I see the need to post in places like this, although such posts need to be more infrequent so as not to turn this place sour in the same manner. This is perhaps the inflection point where it has hit too many posts in too short a time.
There does seem to be more risk of violating “do no harm” here than in your other posts. You mention trying to seek out a position teaching GRE materials where you could flirt with someone you have some level of authority over, and seeking out black women who often have insecurities over how they are viewed in regard to what is conventionally considered attractive in America. Seeking out a power imbalance can put you in positions where you hurt someone.
Signaling does have a use. Leaving the article the same, but adding 1-4 additional sentences signaling that you know pick-up-artist-type stuff can hurt people and that hurting people isn’t your intention, would change the entire tone. People here don’t know you in person and we can’t pick up on body language, so sometimes you really do just need to type out the virtues you had hoped people would assume you had. You did have more signaling of that in your first post. The way you talked there painted a picture of an honest, confused guy, whereas these more fleshed-out plans paint a serial pick-up artist, even though I know that isn’t your intention.
Personally, I did just want to call out that you should question the potential power balance of the GRE teaching situation and other than that don’t have many comments. There are some things I think aren’t optimally effective, but I generally think this stuff needs to be discovered through experimentation and too much borrowed power is bad for both people in the relationship. I left my advice on your first post vague when I could have given you a flowchart for a reason. I stick to calling out potential areas you may regret rather than optimizing the actual plan.
I recognize that my comment was not kind toward you, and I’m sorry for that. But I posted it anyway because I’m more concerned with people seeing this post coming away with a strongly negative view of LW. I’ve already had discussions with someone who has these associations based on much weaker reasons before, and I believe they still hold a negative view of LW to this day, even though 99+% of the content has virtually no relation to gender issues.
My claim is that whatever benefit comes from discussing this topic is not large enough to justify the cost, not that the benefit doesn’t exist. I don’t expect the dating world to get any better, but I don’t think LW should get involved in that fight. There are many problems we would be more effective at solving, and that don’t have these negative side effects.
(And I’ve listened to every Rationally Speaking episode since Julia became the solo host.)
Thank you for the apology. I understand your motivations better now.
I disagree that the dating world cannot get any better. I think this is an incredibly neglected and moderately tractable area.
Here’s why I still think there is positive utility to discussing this -
1. The association exists because of Scott Alexander’s post. That post gets tons of views and is frankly a terrible introduction to rationalist thinking. A new reader can easily see it as an identity politics post and dismiss rationalism.
2. We best sell rationalism by showing how we think, not that we bite bullets; lots of other communities bite bullets. I try to highlight aspects of how rationalists think about problems in each post, so that visitors get a better image of us (experimentation, random trials, scholarship, etc.). Luke Prog’s romance posts are a great example.
If the public associated rationalist stuff w/ Luke Prog’s work, there would be a better argument, but the Scott Alexander post is the real face.
Finally, Google searches for rationality/less wrong/slate star codex are in a gradual decline, so the value of self-censorship to achieve mainstream adoption is lower. The barriers to mainstream adoption probably are not the dating stuff.
I did replace mating with dating throughout the post for availability heuristic reasons.
My gut feel is similar to yours—dating is similar to politics in that it’s excellent to apply rationality to it, but many people go funny in the head, and it’s a difficult topic to use in talking about rationality. Also, it can attract unwanted attention from outsiders with non-aligned motivations for the conversation.
My hesitation is that I don’t know exactly how to draw the line between dating and other forms of personal advice around a pragmatic approach to real-life behaviors. Unlike politics, this is individual (or duo or small-group) rationality. I think it’s valuable to have concrete explorations of how to apply the very general bounded-rationality technique “Examine your goals, understand and improve your capabilities, strategize your behavior over longer terms”. I think it’s very valuable to have discussions around emotions and interactions so complex that one can’t partition them into Bayesian-suitable propositions.
It’s probably worth asking a top-level question to see if there’s any general consensus.
The wrong kind of debate (about dating, politics, etc.) is when people already come with their ideologies fully formed, and try to get the majority of the audience on their side. The kind of debate which in real life would quickly evolve into a shouting contest.
Examining your goals, that would be a valuable thing to do. For example, in the context of dating, are you even aware of what exactly you are trying to achieve? Is it the physical aspects of sex? The emotional connection? Inspiring conversations? Shared values? The ability to plan your future together with someone? How important is it to get all of these from the same person? Is that even realistic? If you had to make a compromise, what is the relative importance of these things? What would be an absolute deal-breaker for you? When you observe people around you, in whose place would you like to be? Why?
But sometimes you don’t even know unless you have already tried. Sometimes you want something because other people say it’s good, and only when you try it do you realize it doesn’t make you happy.
So, talking about experience is better than talking about beliefs. But with dating, often the less experience people have, the stronger beliefs they express. Again, that’s like politics: those who have strongest opinions on how to run the country, usually never tried to organize even something small.
Understanding and improving your capabilities—this is often better discussed without discussing dating. I mean, as an example, let’s take the simplistic belief “women prefer rich men”. Assuming that you believe that, and therefore you want to become rich; how exactly would you do that? And suddenly the debate turns to compound interest, passively managed index funds, frugality, etc., and we are not debating dating anymore. Similarly, if you believe the success in dating is mostly about your conversation skills, then we can discuss conversation skills, without ever mentioning dating. Generally, if you believe that X helps at dating, focus on X, and stop talking about dating. Either X will solve your problems, or you were wrong.
I’ll comment on this post from Geoffrey Miller’s perspective (which I still believe is the closest map to the territory for heterosexual men).
1. Examining your goals is really valuable. I agree you should start by exploring your goals and your ethics.
This is good advice. To clarify, neither I nor Miller believes that women prefer rich men. Financial success is probably correlated with extraversion, intelligence, conscientiousness, social skills, the ability to provide, and an effective degree of assertiveness, which are all attributes women have evolved to be attracted to.
But a/b testing the preferred attributes yourself would take a lifetime. The evopsych approach is to get a prior for which traits are attractive from evolutionary thought experiments, then test the beliefs with psych methods. I decided to get my priors from Miller because his epistemology seems sound in interviews and writing. Then I a/b tested his theories by posing hypotheticals to female friends and trying to guess which behavior they would label more attractive. I found Miller’s theories generalize pretty well, much better than my own mind projection. So I went with it. Beliefs about what women prefer are empirical; use your scholarship and low-cost tests.
I agree with this too. My strategy was hyperfocusing on dating theory for a month, then writing up what I learned for comprehension. Now I can stop talking about dating moving forward, which is awesome!
This is usually true. For my part, my original ideology a month ago said that women do not prefer high-status men. I realized I was in conflict with the data and my incorrect belief was hurting me. So I changed it. Unfortunately, new readers may assume my original ideology was “women are gold diggers”. C’est la vie!
I mentioned “gold digging” as an ideological label, not to imply that being attracted to high-status suitors is the same as gold-digging. Personally, I think what turns you on cannot be unethical. I wouldn’t judge a woman who has more crushes on captains than skippers, or a man who has more crushes on large-breasted women. So if “gold-digging” implies marrying someone for money, in the absence of attraction, that is a different issue. No comment on whether gold-digging is ethical, but it’s a separate question.
This distinction between preferences and behaviors helps escape the ideological traps of discussing romance.
We already have the Frontpage/Personal distinction to reduce visibility of posts that might scare off cognitive children!
Posts on Less Wrong should focus on getting the goddamned right answer for the right reasons. If the “Less Wrong” and “rationalist” brand names mean anything, they mean that. If something about Snog’s post is wrong—if it proposes beliefs that are false or plans that won’t work—then it should be vigorously critiqued and downvoted.
If the terminology used in the post makes someone, somewhere have negative feelings about the “Less Wrong” brand name? Don’t care; don’t fucking care; can’t afford to care. What does that have to do with maximizing the probability assigned to my observations?
The person I was referring to is a data scientist and effective altruist with a degree from Oxford who now runs their own business. I’m not claiming that they would be an AI safety researcher if not for associations of LW with sexism – but it’s not even that much of a stretch.
I can respect if you make a utility calculation here that reaches a different result, but the idea that there is no tradeoff or that it’s so obviously one-sided that we shouldn’t be discussing it seems plainly false.
Happy to discuss it. (I feel a little guilty for cussing in a Less Wrong comment, but I am at war with the forces of blandness and it felt appropriate to be forceful.)
My understanding of the Vision was that we were going to develop methods of systematically correct reasoning the likes of which the world had never seen, which, among other things, would be useful for preventing unaligned superintelligence from destroying all value in the universe.
Lately, however, I seem to see a lot of people eager to embrace censorship for P.R. reasons, seemingly without noticing or caring that this is a distortionary force on shared maps, as if the Vision was to run whatever marketing algorithm can win the most grant money and lure warm bodies for our robot cult—which I could get behind if I thought money and warm bodies were really the limiting resource for saving the world. But the problem with “systematically correct reasoning except leaving out all the parts of the discussion that might offend someone with a degree from Oxford or Berkeley” as opposed to “systematically correct reasoning” is that the former doesn’t let you get anything right that Oxford or Berkeley gets wrong.
Optimized dating advice isn’t important in itself, but the discourse algorithm that’s too cowardly to even think about dating advice is thereby too constrained to do serious thinking about the things that are important.
I’m too confused/unsure right now to respond to this, but I want to assure you that it’s not because I’m ignoring your comment.
From my perspective, the reason against having articles like this here is a combination of several factors, each of them individually not a big problem, but together it’s a bit too much.
The topic is sensitive to some people.
The quality is quite low.
The author already wrote related articles within one month, which suggests more may be coming.
Individually, each of these things is not a big problem for me. Sensitive topics can be approached carefully. Low-quality articles can be ignored. An author writing multiple articles about their pet topic makes the website more interesting, and can be ignored if you don’t care about the topic.
But put these three things together, and you get something that is irritating (if you are not irritated by the articles, you probably will be by reactions of people who are irritated by them), cannot be easily ignored, and doesn’t bring any benefit that could justify the costs.
(On the good side, the author noticed that not appearing needy is attractive. Good. The idea of hiring an acting coach to improve social skills: interesting. But that’s two good ideas per four articles. Bad ratio.)
Another thing is that the author seems to only be interested in this one topic. But this is not a dating-advice website; this is a rationality website. Yes, we had some dating advice in the past, but it was usually written by people who already got some rationalist creds by writing highly upvoted articles on other topics.
The buzzwords in the first article were too much; luckily the following articles contained less of that:
If you speak overmuch of the Way you will not attain it. Imagine instead:
I agree with Dagon’s analogy that dating is similar to politics in its ability to lower the quality of discussion.
There’s a bunch here to respond to, I’ll take them in order of how relevant they are to my empirical questions, and put the infohazard stuff at the bottom.
1. Buzzwords -
I disagree; the Yudkowsky quote is too vague and you misinterpret it. If you talk about being “rational” you will not achieve the Way. But if you talk about specific individual epistemic tools with a defined empirical goal and a desire to know and grow stronger, you will better map the territory. My use of those cached thoughts from Yudkowsky made my reasoning way better. Please point out specific misinterpretations in comments on the original post.
Since you don’t specifically call out any misused epistemic tools, I will justify my appeals to simplicity (it’s the same as Robin Hanson’s signalling argument).
The simplest explanation for when relationships occur is randomness. But I approach more in good times and have less success then, which is unlikely if each approach is equal. So there is a sorting mechanism I misunderstood. Next I listened to people’s explanations, but after many long conversations I noticed that an explanation from one instance did not predict behavior in another. So I read Cialdini, thought about the next simplest explanation, and arrived at the status-signalling explanation. This explanation does a great job of explaining the data.
The signalling explanation outperforms the neediness explanation because neediness suggests that a confident “I like you” on the first date would work (it doesn’t). When I was having a crisis I was desperately needy, in the sense that I craved a friend and companion to help me through the traumatic experience. But I put less effort into signalling interest in relationships, and that increased my approach success rate. If I had never looked for a simple hypothesis and rejected rationalizations, I would not have noticed the signalling definition of needy. So the appeals to simplicity and heuristics are powerful. My reasoning would have been worse without those “buzzwords”.
Your prescribed method would have failed because in aggregate I outperformed my peers even in needy periods. If I’d just compared myself to peers I would not have seen this pattern.
This is really important because in dating you do not make one decision to be “needy” or “not needy”. You make tons of small, contextual decisions about when to text, when to say “I love you”, when to have “the talk”, and who, how, and why to approach. I don’t need to know “if needy bad”; I need to be able to predict optimal signals in diverse social and relationship contexts. I can’t a/b test the whole relationship, so simple theories with good predictive power (like signalling) are incredibly useful.
TLDR: The language you use to describe your reasoning affects your reasoning.
I cannot worry about dating theory my whole life. I crammed the whole process into one month. It worked really well. A lot of posts in rapid succession is a great way to build comprehension; I would recommend it.
The dating websites are full of ideology and unethical people (with a few notable exceptions). If I posted there I would have gotten very different comments that I did not want.
Life optimization is a well-accepted theme on LW. Had I written 4 posts about task prioritization, then a summary “task prioritization plan”, no one would have complained.
I prefer not to write anonymously, so I write anonymously only on this topic. Again, I’m sorry to be anonymous but the topic is too sensitive.
Finally—yes, I am not yet a great writer. I came here to grow stronger. I don’t apologize for trying; you can’t improve if you never get feedback.
I don’t think that having dating advice here is necessarily a bad thing, but I also had a negative reaction to the title.
I had a negative reaction too. OP, would you really be comfortable with future friends or mates seeing this? Especially since you’ve included a lot of personal details which could identify you?
I came up with a similar plan years ago tbh, and it worked, but I would not have shared it with anyone. Not sure I can justify why, that’s just my reaction.
I’m sad that so many are alone and don’t know why. I was lonely for much of my life and lacked tools to understand or change my romantic life. Talking about these issues with my friends and siblings taught me that our society fails to equip lonely people with useful tools to become more attractive, particularly for men. I mean attractive behaviorally and physically.
The conventional advice is terrible; “be yourself” and “be honest, tell her how you feel” are so easily misinterpreted that they make things worse. Meanwhile, pick-up artist forums have an uneven epistemology and a weak evidence base, are poorly explained, and are often unethical. A third way is possible. I wrote this to show people it exists, and that they don’t have to be lonely and confused forever.
A note on my mating ethics
1. Preferences cannot be immoral. You cannot judge a woman for preferring physically attractive, high-status men. You cannot judge me for preferring physically attractive, ambitious women. The conscious part of your brain does not get to override the part that chooses when to be horny (imo, not a psychologist).
2. Honesty is important. I make sure people know my intentions as early as possible (expressing them in a non-awkward way). That is why I start the mating plan with my intentions.
Honesty does not require saying everything you think. I feel like you haven’t really addressed the concern here. And I didn’t say anything about judging you for preferring attractive women.
Oh no totally, you didn’t say either of those things. I think addressing ethics up front will just help people not judge by availability bias.
And I mean honesty about your relationship goals. Radical honesty would definitely destroy your romantic life. Clarifying that now.