I feel a lot of uncertainty after reading your and Zack's responses. I want to read some of the links (I'm particularly interested in what Wei Dai has to say) and think about this more before saying anything else about it – except for trying to explain what my model going into this conversation actually was, since, based on your reply, I don't think I've managed to do that in previous comments.
I agree with basically everything you said about how LW generates value. My model isn't as sophisticated, but it's not substantially different.
The two things that concern me are:

1. People disliking LW right now (like my EA friend).
2. The AI debate potentially becoming political.
On #1, you said, "I know you think that's a massive cost that we're paying in terms of thousands of good people avoiding us for that reason too." I don't think this reaction is very common; this particular combination of technical intelligence with an extreme worry about gender issues seems quite rare. It's more like: if the utility of this one case is −1, then I might guess the total direct utility of allowing posts of this kind over the next couple of years is somewhere in [−10, 40]. (But this might be wrong, since there seem to be more good posts about dating than I was aware of.) And I don't think one can reasonably argue that there will be even fifty cases of comparable weight, let alone thousands.
I currently don’t buy the arguments that make sweeping generalizations about all kinds of censorship (though I could be wrong here, too), which would substantially change the interval.
On #2, it strikes me as obvious that if AI becomes political, we have a massive problem; if it becomes woke not to take AI risk seriously, we have an even larger problem; and it doesn't seem impossible that tolerating posts like this is a contributing factor. (Think of someone writing a NYT article about AI risk originating from a site that talks about mating plans.) On the above scale, the utility of AI risk becoming anti-woke might be something like −100,000. But I'm mostly thinking about this for the first time, so this is very much subject to change.
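To make that comparison concrete, here is a minimal back-of-the-envelope sketch (all numbers are my illustrative guesses from above, nothing established): even a tiny increase in the probability of the −100,000 outcome swamps the direct-utility interval from #1.

```python
# Back-of-the-envelope expected-utility comparison, on the scale where
# the one case of my EA friend counts as -1. All numbers are
# illustrative guesses, not measurements.

best_case_direct_utility = 40    # upper end of my [-10, 40] interval
catastrophe_utility = -100_000   # AI risk discourse becoming anti-woke

# Increase in catastrophe probability at which the expected political
# cost exactly cancels the best-case direct benefit of allowing posts:
break_even_p = best_case_direct_utility / -catastrophe_utility
print(f"break-even probability increase: {break_even_p:.4%}")
# -> 0.0400%: a marginal contribution of ~1 in 2,500 to the catastrophe
#    already outweighs the best case of allowing such posts.
```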
On HPMOR, you said: "I could keep going on with examples… new friends occasionally come to me saying they read a review of HPMOR saying Harry's rude and obnoxious, and I'm like, you need to learn that's not the most important aspect of a person's character. Harry is determined and takes responsibility and is curious and is one of the few people who has everyone's back in that book, so I think you should definitely read and learn from him, and then the friend is like, 'Huh, wow, okay, I think I'll read it then. That was shockingly high and specific praise.'"
I've failed at this part of the conversation: I couldn't get my friend to read any of it, nor to trust that I have any idea what I'm talking about when I said that HPMOR doesn't seem very sexist.