I haven’t read the OP, am not that interested in it, though Geoffrey Miller is quite thoughtful.
I think that the main things building up what LW is about right now are the core tags, the tagging page, and the upcoming LW books based on the LW review vote. If you look at the core tags, there’s nothing about dating there (“AI” and “World Modeling” etc). If you look at the vote, it’s about epistemology and coordination and AI, not dating. The OP also hasn’t got much karma, so I’m a bit confused that you’re arguing this shouldn’t be discussed on LW, and I weak-downvoted this comment. (If you want to argue that a dating post has too much attention, maybe pick something that was better received like Jacobian’s recent piece, which I think embodies a lot of the LW spirit and is quite healthy.)
I’m not much worried about dating posts like this being what we’re known for. Given that it’s a very small part of the site, if it still became one of the ‘attack vectors’, I’m pretty pro just fighting those fights, rather than giving in and letting people on the internet who use the representativeness heuristic to attack people decide what we get to talk about. (Once you open yourself to giving in on those fights, they just start popping up everywhere, and then like 50% of your cognition is controlled by whether or not you’re stepping over those lines.)
There was also nothing about dating on LW back when I had the discussion I’ve referred to with the person who thought (and probably still thinks) that a big driver behind the appeal of LW is sexism. Someone who tries to destroy your reputation doesn’t pick a representative sample of your output; they pick the parts that make you look the worst. (And I suspect that “someone trying to destroy EY’s reputation” was part of the causal chain that led to the person believing this.)
This post and Jacobian’s are not the same. Before the edit, I think this post had the property that if the wrong people read it, their opinion of LW would be irreversibly, extremely negative. I don’t think I’m exaggerating here. (And of course, the edit only happened because I made the comment.) As for it having low karma: it probably has low karma because of people who share my concerns. It has 12 votes; if you remove all downvotes, it doesn’t have low karma anymore. And I didn’t know how much karma it was going to have when I commented.
I’m pretty frustrated with this paragraph because it seems so clearly to be defending the position that feels good. I would much rather be pro fighting than pro censoring. But if your intuition is that the result is net positive, I ask: do you have good reasons to trust that intuition?
As I’ve said in another comment, the person I’ve mentioned is highly intelligent, a data scientist, and an effective altruist who signed the Giving What We Can pledge and now runs their own business. I’m not claiming they’re a representative case, but the damage that has been done in this single instance due to an association of LW with sexism strikes me as so great that I just don’t buy that having posts like this is worth it, and I don’t think you’ve given me a good reason for why it is.
I wasn’t aware it used to talk about a ‘mating plan’ everywhere, which I think is amusing and I agree sounds kind of socially oblivious.
I really think that we shouldn’t optimise against losing people over weak, negative low-level associations. I think the way that you attract good people is through strong wins, not by making sure they never hear any bad associations. Nassim Taleb is an example I go to here, where the majority of times I hear about him I think he’s being obnoxious or aggressive, and often just disagree with what he says, but I don’t care too much about reading that because occasionally he’s saying something important that few others are.
Elon Musk is another example, where the majority of coverage I see of him is negative, and sometimes he writes kinda dumb tweets, but he gives me hope for humanity and I don’t care about the rest of the stuff. Had I seen the news coverage first, I’d still have been mindblown by seeing the rockets land and changed my attitude towards him. I could keep going on with examples… new friends occasionally come to me saying they read a review of HPMOR saying Harry’s rude and obnoxious, and I respond “you need to learn that’s not the most important aspect of a person’s character”. Harry is determined and takes responsibility and is curious and is one of the few people who has everyone’s back in that book, so I think you should definitely read and learn from him, and then the friend is like “Huh, wow, okay, I think I’ll read it then. That was very high and specific praise.”
A lot of this comes down to the graphs in Lukeprog’s post on romance (another dating post, I’m so sorry).
I think that LessWrong is home to some of the most honest and truth-seeking conversation on the internet. We have amazing thinkers who come here, like Zvi and Paul and Anna and Scott and more, and the people who care about the conversations they can have here will come even if we have weird associations and some people hate us and call us names.
(Sarah also wrote the forces of blandness post that I think is great and I think about a lot in this context.)
I guess I didn’t address the specific example of your friend. (Btw, I was also heavily involved with EA at Oxford; I ran the 80k student group while I was there, and an EAGx!) I’m sorry your friend decided to write off LessWrong because they heard it was sexist. I know you think that’s a massive cost we’re paying, in terms of thousands of good people avoiding us for that reason too.
I think that negative low-level associations really matter if you’re trying to be a mass movement and scale, like a political movement. Republicans/Democrats kind of primarily work to manage whether the default association is positive or negative, which is why they spend so much time on image-management. I don’t think LW should grow 100x users in the next 4 years. That would be terrible for our mission of refining the art of human rationality and our culture. I think that the strong positive hits are the most important, as I said already.
Suppose you personally get really valuable insights from LW, and that people’s writing here helps you understand yourself as a person and become more virtuous in your actions. If you tell your EA friend that LessWrong was a key causal factor in you levelling up as a person, and they reply “well that’s net bad because I once heard they’re sexist”, I’m not that impressed by them. And I hope that a self-identified EA would see the epistemic and personal value there as primary rather than the image-management thing as primary. And I think that if we all think everybody knows everyone else thinks the image-management is primary… then I think it’s healthy to take the step of saying out loud “No, actually, the actual intellectual progress on rationality is more important” and following through.
I feel a lot of uncertainty after reading your and Zack’s responses and I think I want to read some of the links (I’m particularly interested in what Wei Dai has to say) and think about this more before saying anything else about it – except for trying to explain what my model going into this conversation actually was. Based on your reply, I don’t think I’ve managed to do that in previous comments.
I agree with basically everything you said about how LW generates value. My model isn’t as sophisticated, but it’s not substantially different.
The two things that concern me are:
1. People disliking LW right now (like my EA friend)
2. The AI debate potentially becoming political.
On #1, you said “I know you think that’s a massive cost that we’re paying in terms of thousands of good people avoiding us for that reason too.” I don’t think cases like this are very common. Certainly this particular combination of technical intelligence with an extreme worry about gender issues is very rare. It’s more like, if the utility of this one case is −1, then I might guess the total direct utility of allowing posts of this kind in the next couple of years is probably somewhere in [−10, 40] or something. (But this might be wrong since there seem to be more good posts about dating than I was aware of.) And I don’t think you can reasonably argue that there won’t be fifty comparable cases’ worth of damage.
I currently don’t buy the arguments that make sweeping generalizations about all kinds of censorship (though I could be wrong here, too), which would substantially change the interval.
On #2, it strikes me as obvious that if AI gets political, we have a massive problem, and if it becomes woke not to take AI risk seriously, we have an even larger problem, and it doesn’t seem impossible that tolerating posts like this is a factor. (Think of someone writing a NYT article about AI risk originating from a site that talks about mating plans.) On the above scale, the utility of AI risk becoming anti-woke might be something like −100,000. But I’m mostly thinking about this for the first time, so this is very much subject to change.
On the HPMOR recommendation: I’ve failed that part of the conversation. I couldn’t get them to read any of it, or to trust that I have any idea what I’m talking about when I said that HPMoR doesn’t seem very sexist.
Many of the world’s smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don’t understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.
Rather than debating the case for or against caution, I think the most interesting question is how to arrange a peaceful schism. Team Shared Maps That Reflect The Territory and Team Seek Power For The Greater Good obviously do not belong in the same “movement” or “community.” It’s understandable that Team Power doesn’t want to be associated with Team Shared Maps because they’re afraid we’ll say things that will get them in trouble. (We totally will.) But for their part of the bargain, Team Power needs to not fraudulently market their beacon as “the rationality community” and thereby confuse innocents who came looking for shared maps.
I think of my team as being “Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned”. I don’t think calling it “Team Seek Power For The Greater Good” is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.
That’s fair. Maybe our crux is about to what extent “don’t draw fake stuff on the map” is actually a serious constraint? When standing trial for a crime you didn’t commit, it’s not exactly comforting to be told that the prosecutor never lies, but “merely” reveals Shared Maps That Reflect The Territory But With a Few Blank Spots Where Depicting the Territory Would Have Caused the Defendant to Be Acquitted. It’s good that the prosecutor never lies! But it’s important that the prosecutor is known as the prosecutor, rather than claiming to be the judge. Same thing with a so-called “rationalist” community.
I don’t think anyone understands the phrase “rationalist community” as implying a claim that its members don’t sometimes allow practical considerations to affect which topics they remain silent on. I don’t advocate that people leave out good points merely for being inconvenient to the case they’re making, optimizing for the audience to believe some claim regardless of the truth of that claim, as suggested by the prosecutor analogy. I advocate that people leave out good points for being relatively unimportant and predictably causing (part of) the audience to be harmfully irrational. I.e., if you saw someone other than the defendant commit the murder, then say that, but don’t start talking about how ugly the judge’s children are even if you think the ugliness of the judge’s children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which discussions are more like talking about how ugly the judge’s children are.
I guess I feel like we’re at an event for the physics institute and someone’s being nerdy/awkward in the corner, and there’s a question of whether or not we should let that person be or whether we should publicly tell them off / kick them out. I feel like the best people there are a bit nerdy and overly analytical, and that’s fine, and deciding to publicly tell them off is over the top and will make all the physicists more uptight and self-aware.
To pick a very concrete problem we’ve worked on: the AI alignment problem is totally taken seriously by very important people who are also aware that LW is weird; Eliezer goes on the Sam Harris podcast, Bostrom is invited by the UK government to advise, and so on, and Karnofsky’s got a billion dollars and is focusing to a large extent on the AI problem. We’re not being defined by this odd stuff, and I think we don’t need to feel like we are. I expect as we find similar concrete problems or proposals, we’ll continue to be taken very seriously and have major success.
As I see it, we’ve had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.
Lol mating was not my best choice of word. But hey I’m here to improve my writing.
I’m having trouble understanding what this would mean. Why would a big driver behind LW’s appeal be sexism?
If someone can look at LW, with its thousands of posts discussing futurism, philosophy, rationality, etc, and come away concluding that the appeal of the site is sexism (as opposed to an interest in those topics), I feel tempted to just write off their views.
Sure, you can find some sexist posts or commenters here or there (I seem to remember a particular troll whom we eventually vanquished with the switchover from LW 1.0 to LW 2.0). But to think that they’re the norm, or that it’s a big part of the general appeal of the site?
To conclude that, it seems like you’d either have to have gotten an extremely biased sample of LW (and not been thoughtful enough to realize this possibility on your own), or you’d have to have some major blindspots in your thinking about these things, causing you to jump to bizarre conclusions.
In either case, it seems like the issue is more with them than with LW, and all else equal, I wouldn’t feel much drive to cater to their opinion. (Even if they’re otherwise an intelligent and productive individual.) People can just have blindspots, and I don’t think you should cater to the people with the most off-base view of you.
Am I missing something? Do you think their view was more justified than this? Or do you just think it’s worth paying more costs to cater to such people, even if you agree that they’re being unreasonable?
To your first question: a clear no. I think their position was utterly ridiculous. I just think that blind spots on this particular topic are so common that it’s not a smart strategy to ignore them.
I don’t think this is currently true of LW myself, but if a space casually has, say, sexist or racist stuff in it, people looking can be like “oh thank god, a place I can say what I really think [that is sexist or racist] without political correctness stopping me”, and then that becomes a selling point for people who want to talk about sexist or racist stuff. I suspect the commenter means something like this.
Thanks. That does seem like the most likely interpretation.
I have an extremely negative emotional reaction to this.
More seriously. While LW can be construed as “trying to promote something” (i.e. rational thinking), in my opinion it is mostly a place to have rational discussions, using much stronger discursive standards than elsewhere on the internet.
If people decide to judge us on cherry-picked examples, that is sad, but it is much better than having them control what topics are or are not allowed. I am with Ben on this one.
About your friend in particular: if they have to be turned off the community because of some posts, and because we engage with ideas at the object level instead of yucking out socially awkward ideas, then they might not yet be ready to receive rationality in their heart.
This post triggers a big “NON-QUANTITATIVE ARGUMENT” alarm in my head.
I’m not super confident in my ability to assess what the quantities are, but I’m extremely confident that they matter. It seems to me like your post could be written in exactly the same way if the “wokeness” phenomenon were “half as large” (fewer people care about it, or they don’t care as strongly). Or if it were twice as large. But this can’t be good – any sensible opinion on this issue has to depend on the scope of the problem, unless you think it’s in principle inconceivable for the wokeness phenomenon to be prevalent enough to matter.
I’ve explained the two categories I’m worried about here, and while there have been some updates since (biggest one: it may be good to talk about politics now if we assume AI safety is going to be politicized anyway), I still think about it in roughly those terms. Is this a framing that makes sense to you?
It very much is a non-quantitative argument, since it’s a matter of principle. The principle being not to let outside perceptions dictate the topics of conversation.
I can think of situations where the principle could be broken, or would be unproductive. If upholding it would make it impossible to have these discussions in the first place (because engaging would mean you get stoned, or something) and hiding is not an option (or still too risky), then it would make sense to move conversations towards the Overton window.
Put differently, the quantity I care about is “ability to have quote rational unquote conversations”, and no amount of outside woke prevalence can change that *as long as it doesn’t drive enough community members away*. It will be a sad day for freedom and for all of us if that ends up one day being the case.
As a note on the karma: I wouldn’t have upvoted this post normally, but I didn’t think it deserved to be negative, so I gave it an upvote. I’m pretty sure there are a bunch of people who vote partly based on the current score, so if you remove all the downvotes, you probably remove a bunch of the upvotes too.
Was the edit just to add the big disclaimer about motivation at the top? If nothing else was changed, then I struggle to see what would have been so objectionable about the pre-edit version. I might be missing something, but I don’t for example see it advocating any views or practices that I’d consider harmful (in contrast to some PUA stuff).
Seems like the worst thing you could reasonably say about it is that it’s a bit heteronormative and male-centric. I don’t think there’s anything wrong with having a dating advice post written from that perspective, but I do think it would have been good to add a sentence clarifying that at the top, just so that readers who aren’t heterosexual men don’t feel like they’re assumed not to be part of the audience.
But other than that, is there anything else about it that would need to change?
OP here to clarify.
Edits—Added disclaimer at the top; changed every instance of “mating” to “dating”; replaced personal details with <anonymized>
I honestly don’t see what is so objectionable about the original version either. I like your last sentence, will add that as well.
No, the edit was more than just adding the disclaimer (although that helps, too). I didn’t make a snapshot of the previous version, so I can’t tell you exactly what changed. But the post is much less concerning now than it used to be.
Ah, I see. Thanks!