We risk being an echo-chamber of people who aren’t hurt by the problems we discuss.
I don’t see this as a problem, really. The entire point is to have high-value discussions. Being inclusive isn’t the point. It’d be nice, sure, and we shouldn’t drive away minority groups for no reason.
I mean, I don’t see us trying to spread internet access and English language instruction in Africa so that the inhabitants can help discuss how to solve their malaria problems. As long as we can get enough input about what the problem is actually like, we don’t need to be inclusive in order to solve problems. And in the African malaria case, being inclusive would obviously hurt our problem-solving capability.
Eh, yes and no. This attitude (“we know what’s best; your input is not required”) has historically almost always been wrong and frequently dangerous, and it deserves close attention; I think it mostly fails here. In very, very specific instances (GiveWell-esque philanthropy, e.g.), maybe not, but in terms of, say, feminism? If anyone on LW is interested in tackling feminist issues, having very few women would be a major issue. Even when not addressing specific issues, if you’re trying to develop models of how human beings think, and everyone in the conversation is a very specific sort of person, you’re going to have a much harder time getting it right.
This attitude (“we know what’s best; your input is not required”) has historically almost always been wrong
Has it really? The cases where it went wrong jump to mind more easily than those where it went right, but I don’t know which way the balance tips overall (and I suspect neither do you nor most readers—it’s a difficult question!).
For example, in recent centuries Europe has seen a great rise in literacy, and a drop in all kinds of mortality, through the adoption of widespread education, modern medical practices, etc. A lot of this seems to have been driven in a top-down way by bureaucratic governments who considered themselves to be working for The Greater Good Of The Nation, and didn’t care that much about the opinions of a bunch of unwashed superstitious hicks.
(Some books on the topic: Seeing Like a State; The Discovery of France … I haven’t read either unfortunately)
I don’t see this as a problem, really. The entire point is to have high-value discussions.
High-value discussions here, so far as is apparent to me, seem to be better described as “high-value for modestly wealthy white and ethnic Jewish city-dwelling men, many of them programmers”. If it turns out said men get enough out of this to noticeably improve the lives of the huge populations not represented here (some of which might even contain intelligent, rational individuals or subgroups), that’s all well and good. But so far, it mostly just sounds like rich programmers signalling at each other.
Which makes me wonder what the hell I’m still doing here; in spite of not feeling particularly welcome, or getting much out of discussions, I haven’t felt that giving up on reading and occasionally commenting would be a good response. Yet, since I’m almost certainly not going to be able to contribute to a world-changing AI, directly or otherwise, and don’t have money to spare for EA or x-risk reduction, I don’t see why LW should care. (OK, so I made a thinly veiled argument for why LW should care, but I also acknowledged it was rather weak.)
Even with malaria nets (which seem like a very simple case), having information from the people who are using them could be important. Is using malaria nets harder than it sounds? Are there other diseases which deserve more attention?
One of the topics here is that sometimes experts get things wrong. Of course, so do non-experts, but one of the checks on experts is people who have local experience.
Even with malaria nets (which seem like a very simple case), having information from the people who are using them could be important.
Even then, is trying to encourage sub-Saharan African participation in the Effective Altruism movement really the best way to gather data about their needs and values? Wouldn’t it be more cost-effective to hire an information-gathering specialist of some sort to conduct investigations?
The entire point is to have high-value discussions.
Feminism and possible racial differences seem like pretty low-value discussion topics to me… interesting way out of proportion to their usefulness, kind of like politics.
Feminism and possible racial differences seem like pretty low-value discussion topics to me...
That’s an incredibly short-sighted attitude. Feminism and race realism are just the focus of the current controversy. I’m pretty confident that you could pick just about any topic in social science (and some topics in the natural sciences as well—evolution, anyone?) and some people will want to prevent or bias discussions of it for political reasons. It’s not clear why we should be putting up with this nonsense at all.
My argument is: (1) Feminism and race realism are interesting for the same reasons politics are interesting and (2) they aren’t especially high value. If this argument is valid, then for the same reasons LW has an informal ban on politics discussion, it might make sense to have an informal ban on feminism and race realism discussion.
You don’t address either of my points. Instead you make a slippery slope argument, saying that if there’s an informal ban on feminism/race realism then maybe we will start making informal bans on all of social science. I don’t find this slippery slope argument especially persuasive (such arguments are widely considered fallacious). I trust the Less Wrong community to evaluate the heat-to-light ratio of different topics and determine which should have informal bans and which shouldn’t.
“some people will want to prevent or bias discussions of it for political reasons”—to clarify, I’m in favor of informal bans against making arguments for any side on highly interesting but fairly useless topics. Also, it seems like for some of these topics, “people getting their feelings hurt” is also a consideration and this seems like a legitimate cost to be weighed when determining whether discussing a given topic is worthwhile.
Which makes me wonder what the hell I’m still doing here...
My LW reading comes out of my Internet-as-television time, and so does Hacker News. The two appear very similar in target audience.
Out of curiosity, what sites come out of your Internet-as-non-television time?
I live in my GMail. Wikipedia editing, well, really it’s a form of television that I pretend isn’t one. The rest is looking for something in particular.
So what do you consider a high-value use of your free time?
And in the African malaria case, being inclusive would obviously hurt our problem-solving capability.
Maybe I’m being dense, but I don’t see why this is obviously true.
There’s obviously a level of exclusivity that hurts our problem-solving as well. At some point a programmer in the Bay Area with $20k/yr of disposable income and 20 hours a week to spare is going to do more than a sub-Saharan African farmer with $200/yr of disposable income, 6 hours a week of free time, and no internet access.
I don’t see how it would actually hurt our problem-solving, though, if we were to try to solicit input from people who don’t have the leisure time or education to provide it. It would be a phenomenal waste of resources, to be sure, but aside from that I don’t see how it would harm the community.