I wasn’t aware it used to talk about a ‘mating plan’ everywhere, which I think is amusing and I agree sounds kind of socially oblivious.
I really think that we shouldn’t optimise for people not joining us because of weak, negative low-level associations. I think the way that you attract good people is by strong wins, not by making sure they never hear any bad associations. Nassim Taleb is an example I go to here, where the majority of times I hear about him I think he’s being obnoxious or aggressive, and often just disagree with what he says, but I don’t care too much about reading that because occasionally he’s saying something important that few others are.
Elon Musk is another example, where the majority of coverage I see of him is negative, and sometimes he writes kinda dumb tweets, but he gives me hope for humanity and I don’t care about the rest of the stuff. Had I seen the news coverage first, I’d still have been mindblown by seeing the rockets land and changed my attitude towards him. I could keep going on with examples… new friends occasionally come to me saying they read a review of HPMOR saying Harry’s rude and obnoxious, and I respond “you need to learn that’s not the most important aspect of a person’s character”. Harry is determined and takes responsibility and is curious and is one of the few people who has everyone’s back in that book, so I think you should definitely read and learn from him, and then the friend is like “Huh, wow, okay, I think I’ll read it then. That was very high and specific praise.”
A lot of this comes down to the graphs in Lukeprog’s post on romance (another dating post, I’m so sorry).
I think that LessWrong is home to some of the most honest and truth-seeking convo on the internet. We have amazing thinkers who come here like Zvi and Paul and Anna and Scott and more, and the people who care about the conversations they can have will come here even if we have weird associations and some people hate us and call us names.
(Sarah also wrote the forces of blandness post that I think is great and I think about a lot in this context.)
I guess I didn’t address the specific example of your friend. (Btw I am also a person who was heavily involved with EA at Oxford; I ran the 80k student group while I was there, and an EAGx!) I’m sorry your friend decided to write off LessWrong because they heard it was sexist. I know you think that’s a massive cost that we’re paying in terms of thousands of good people avoiding us for that reason too.
I think that negative low-level associations really matter if you’re trying to be a mass movement and scale, like a political movement. Republicans/Democrats kind of primarily work to manage whether the default association is positive or negative, which is why they spend so much time on image-management. I don’t think LW should grow 100x users in the next 4 years. That would be terrible for our mission of refining the art of human rationality and our culture. I think that the strong positive hits are the most important, as I said already.
Suppose you personally get really valuable insights from LW, and that people’s writing here helps you understand yourself as a person and become more virtuous in your actions. If you tell your EA friend that LessWrong was a key causal factor in you levelling up as a person, and they reply “well that’s net bad because I once heard they’re sexist” I’m not that impressed by them. And I hope that a self-identified EA would see the epistemic and personal value there as primary rather than the image-management thing as primary. And I think that if we all think everybody knows everyone else thinks the image-management is primary… then I think it’s healthy to take the step of saying out loud “No, actually, the actual intellectual progress on rationality is more important” and following through.
I feel a lot of uncertainty after reading your and Zack’s responses and I think I want to read some of the links (I’m particularly interested in what Wei Dai has to say) and think about this more before saying anything else about it – except for trying to explain what my model going into this conversation actually was. Based on your reply, I don’t think I’ve managed to do that in previous comments.
I agree with basically everything about how LW generates value. My model isn’t as sophisticated, but it’s not substantially different.
The two things that concern me are:

1. People disliking LW right now (like my EA friend)
2. The AI debate potentially becoming political.
On #1, you said “I know you think that’s a massive cost that we’re paying in terms of thousands of good people avoiding us for that reason too.” I don’t think it’s very common. Certainly this particular combination of technical intelligence with an extreme worry about gender issues is very rare. It’s more like, if the utility of this one case is −1, then I might guess the total direct utility of allowing posts of this kind in the next couple of years is probably somewhere in [−10, 40] or something. (But this might be wrong since there seem to be more good posts about dating than I was aware of.) And I don’t think you can reasonably argue that there won’t be the equivalent of fifty comparable cases.
I currently don’t buy the arguments that make sweeping generalizations about all kinds of censorship (though I could be wrong here, too), which would substantially change the interval.
On #2, it strikes me as obvious that if AI gets political, we have a massive problem, and if it becomes woke not to take AI risk seriously, we have an even larger problem, and it doesn’t seem impossible that tolerating posts like this is a contributing factor. (Think of someone writing a NYT article about AI risk originating from a site that talks about mating plans.) On the above scale, the utility of AI risk becoming anti-woke might be something like −100,000. But I’m mostly thinking about this for the first time, so this is very much subject to change.
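To lay out the back-of-the-envelope comparison above explicitly, here is a minimal sketch. The 0.1% probability of the politicization scenario is a made-up placeholder for illustration, not an estimate I’m defending; the other numbers are just the ones from my two points above.

```python
# Rough sketch of the utility comparison in the two points above.
# The probability assigned to AI risk discourse becoming politicized is an
# assumed placeholder (0.1%), purely for illustration.

cost_per_alienated_reader = -1                       # my EA friend, by definition of the scale
comparable_cases = 50                                # a count I don't think can be ruled out
direct_benefit_low, direct_benefit_high = -10, 40    # my guessed interval for allowing such posts

direct_cost = cost_per_alienated_reader * comparable_cases                  # -50

politicization_utility = -100_000                    # AI risk becoming anti-woke
p_politicization = 0.001                             # assumed, not estimated
expected_politicization_cost = politicization_utility * p_politicization   # -100.0

print(f"direct cost ~ {direct_cost}, "
      f"direct benefit in [{direct_benefit_low}, {direct_benefit_high}], "
      f"expected politicization cost ~ {expected_politicization_cost}")
```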
I could keep going on with examples… new friends occasionally come to me saying they read a review of HPMOR saying Harry’s rude and obnoxious, and I’m like you need to learn that’s not the most important aspect of a person’s character. Harry is determined and takes responsibility and is curious and is one of the few people who has everyone’s back in that book, so I think you should definitely read and learn from him, and then the friend is like “Huh, wow, okay, I think I’ll read it then. That was shockingly high and specific praise.”
I’ve failed this part of the conversation. I couldn’t get them to read any of it, nor could I get them to trust that I have any idea what I’m talking about when I said that HPMoR doesn’t seem very sexist.
I think that negative low-level associations really matter if you’re trying to be a mass movement and scale, like a political movement.
Many of the world’s smartest, most competent, and most influential people are ideologues. This probably includes whoever ends up developing and controlling advanced technologies. It would be nice to be able to avoid such people dismissing our ideas out of hand. You may not find them impressive or expect them to make intellectual progress on rationality, but for such progress to matter, the ideas have to be taken seriously outside LW at some point. I guess I don’t understand the case against caution in this area, so long as the cost is only having to avoid some peripheral topics instead of adopting or promoting false beliefs.
Rather than debating the case for or against caution, I think the most interesting question is how to arrange a peaceful schism. Team Shared Maps That Reflect The Territory and Team Seek Power For The Greater Good obviously do not belong in the same “movement” or “community.” It’s understandable that Team Power doesn’t want to be associated with Team Shared Maps because they’re afraid we’ll say things that will get them in trouble. (We totally will.) But for their part of the bargain, Team Power needs to not fraudulently market their beacon as “the rationality community” and thereby confuse innocents who came looking for shared maps.
I think of my team as being “Team Shared Maps That Reflect The Territory But With a Few Blank Spots, Subject to Cautious Private Discussion, Where Depicting the Territory Would Have Caused the Maps to be Burned”. I don’t think calling it “Team Seek Power For The Greater Good” is a fair characterization both because the Team is scrupulous not to draw fake stuff on the map and because the Team does not seek power for itself but rather seeks for it to be possible for true ideas to have influence regardless of what persons are associated with the true ideas.
That’s fair. Maybe our crux is about to what extent “don’t draw fake stuff on the map” is actually a serious constraint? When standing trial for a crime you didn’t commit, it’s not exactly comforting to be told that the prosecutor never lies, but “merely” reveals Shared Maps That Reflect The Territory But With a Few Blank Spots Where Depicting the Territory Would Have Caused the Defendant to Be Acquitted. It’s good that the prosecutor never lies! But it’s important that the prosecutor is known as the prosecutor, rather than claiming to be the judge. Same thing with a so-called “rationalist” community.
I don’t think anyone understands the phrase “rationalist community” as implying a claim that its members don’t sometimes allow practical considerations to affect which topics they remain silent on. I don’t advocate that people leave out good points merely for being inconvenient to the case they’re making, optimizing for the audience to believe some claim regardless of the truth of that claim, as suggested by the prosecutor analogy. I advocate that people leave out good points for being relatively unimportant and predictably causing (part of) the audience to be harmfully irrational. I.e., if you saw someone other than the defendant commit the murder, then say that, but don’t start talking about how ugly the judge’s children are even if you think the ugliness of the judge’s children slightly helped inspire the real murderer. We can disagree about which discussions are more like talking about whether you saw someone else commit the murder and which discussions are more like talking about how ugly the judge’s children are.
I guess I feel like we’re at an event for the physics institute and someone’s being nerdy/awkward in the corner, and there’s a question of whether we should let that person be or whether we should publicly tell them off / kick them out. I feel like the best people there are a bit nerdy and overly analytical, and that’s fine, and deciding to publicly tell them off is over the top and will make all the physicists more uptight and self-conscious.
To pick a very concrete problem we’ve worked on: the AI alignment problem is taken totally seriously by very important people who are also aware that LW is weird. Eliezer goes on the Sam Harris podcast, Bostrom is invited by the UK government to advise, Karnofsky’s got a billion dollars and is focusing to a large extent on the AI problem, and so on. We’re not being defined by this odd stuff, and I think we don’t need to feel like we are. I expect as we find similar concrete problems or proposals, we’ll continue to be taken very seriously and have major success.
As I see it, we’ve had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.
Lol, ‘mating’ was not my best choice of word. But hey, I’m here to improve my writing.