It bothers me how many of these comments pick nits (“plowing isn’t especially feminine”, “you can’t unilaterally declare Crocker’s Rules”) instead of actually engaging with what has been said.
(And those are just women’s issues; women are not the only group that sometimes has problems in geek culture, or specifically on Less Wrong.)
It bothers me how many of these comments pick nits (“plowing isn’t especially feminine”, “you can’t unilaterally declare Crocker’s Rules”) instead of actually engaging with what has been said.
What would differentiate picking nits and engaging with what was said?
Like SaidAchmiz points out, there’s not all that much to say when someone shares information. I’m certainly not going to share the off-site experiences of female friends that were told to me in confidence, and my experiences are not particularly relevant, and so I don’t have much to add.
One of the issues that has poisoned conversations about feminism I have been in previously, and which I sincerely hope does not happen here, is that the feminists in the conversation did not have a strong ability to discern between useful and useless criticisms. I understand that many people don’t listen to women, especially about their experience as women; I understand that many people dismiss good feminist arguments, or challenge them with bad arguments.
But when people do listen, and respond with good arguments, and then their good arguments are trivialized or dismissed, then we’re not having a conversation, but a lecture. The people putting forth good arguments realize they’re not welcome and leave, and only the trolls are left.
Especially in the context of minimizing inferential distance, it’s important to have experience exchange both ways. For example, DMs shutting down a player’s attempt to deviate from the script is a common enough experience that I expect more than half of D&D players can relate, and letting the person who shared the anecdote know that “yep, this is a common problem” is valuable information that can help them feel less singled out. Of course, this can be interpreted as a status-reduction move; they’re trivializing the concerns and making the speaker less special! This is the uncharitable interpretation and so in general I recommend against it.
(And those are just women’s issues; women are not the only group that sometimes has problems in geek culture, or specifically on Less Wrong.)
It really bothers me that you’re not taking seriously either the (hopefully unintentional) misuse of Crocker’s Rules or the unintentional violation of IRC norms. Those rules apply to everyone and are in place for good reason, and pointing out rule violations should not be seen as picking nits if you want those rules to stick around.
Especially in the context of minimizing inferential distance, it’s important to have experience exchange both ways. For example, DMs shutting down a player’s attempt to deviate from the script is a common enough experience that I expect more than half of D&D players can relate, and letting the person who shared the anecdote know that “yep, this is a common problem” is valuable information that can help them feel less singled out. Of course, this can be interpreted as a status-reduction move; they’re trivializing the concerns and making the speaker less special! This is the uncharitable interpretation and so in general I recommend against it.
I think this is an excellent point, and in the interests both of minimizing inferential distance and perhaps making some other points relevant to smart/geeky women’s issues, I offer a personal anecdote:
My early experiences as a D&D player included some memorable instances when I tried to “deviate from script”, though at the time I didn’t entirely understand that there was a script and that I was deviating from it; I was doing what seemed to make sense in my character’s situation. My DMs would sometimes be unprepared, would respond either by explicitly stating that I had gone off script or by more subtly trying to corral me back onto the rails, and some frustration would ensue; I would be frustrated because I felt like my freedom of character action, my ability to flex my imagination, was being curtailed.
My DMs were frustrated too, though the nature of the DM’s frustration was not something I understood until later, when I started to DM my own games, and learned firsthand about the way combinatorial explosion rears its head in adventure and world design, about the difficulty of anticipating the imaginations of several intelligent, creative, self-selected-for-out-of-the-box-thinking people, and many other issues. As a DM, these problems are solvable with effort and practice, and I’ve gotten better over the almost 10 years that I’ve been a DM; I try rather hard to set up my world and adventures to allow for maximum freedom of choice and action (or at least the convincing illusion of such; much DMing comes down to sleight-of-hand).
Most of my DMing experience has been for an all-male group of experienced tabletop gamers, but recently I had the opportunity to run a semi-regular game for a group that was (shock and gasp!) majority-female. About half of the players, including two of the girls*, were entirely new to D&D and tabletop roleplaying in general; this was their very first game.
The games and my DMing met with satisfaction; all involved, as far as I can tell, enjoyed themselves, to the extent that after the game ended and we had to go our separate ways (the setting for this was a summer-long internship), a couple of the first-timers immediately went on to seek out regular D&D groups, which means that the D&D game I ran was what got them into this particular part of geekdom (that is, tabletop roleplaying gaming). All the players who expressed their satisfaction — including, notably, the first-timers — said that prominent among the things that contributed to their enjoyment of the game was the feeling of freedom, of options; the sense that their imagination and creativity in deciding what their characters could do, was not artificially constrained.
I took pride in this, because I’ve worked hard to develop the DMing skills that allow for such flexibility; my own early experiences are what prompted me to keep firmly in mind this particular failure mode of DMing (the inflexible script). I took pride also in being the vehicle through which intelligent women are introduced to geekdom (or, for those who were already geeks but in different ways, have their horizons expanded).
Of course, a certain awareness of women’s experiences, such as those mentioned in this post, and of certain of the sorts of gender-related failures that plague geekdom, did also (I hope!) help in creating the sort of atmosphere in which female geeks/gamers could feel comfortable.
* “girls”: college-age women, several years younger than me. No belittlement intended.
What would differentiate picking nits and engaging with what was said?
See “Better Disagreement”. Nitpicking occupies level DH3-4: mere contradiction and responding to minor points, but not addressing the central point of the post.
(If you disagree with the rubric presented in “Better Disagreement”, respond there.)
I think Better Disagreement uses a confrontational lens that isn’t particularly suited to these situations. If the central point of the post is “these are real female experiences that you should be aware of,” DH7 seems like a cruel joke at best: “This is what a real real female would experience, and even then we shouldn’t be aware of it!”
It seems to me that helpful complaint comments will often come in two forms: error correction and alternative perspectives. If, say, an anecdote about EY in one of these posts spelled his name “Elezer,” pointing out that they missed an “i” could be labeled as nitpicking, but it doesn’t seem like a helpful label: fix it, say thanks, and be happy that the post is better! If most of the comments are minor corrections, but the post is highly upvoted, remember that each of those upvotes is a short comment saying “I want to see more posts like this post.” (If most of the comments are corrections and the post has low karma, the post has deeper problems that should get fixed.)
Alternative perspectives are trickier territory. Suppose that Anonymous Alice writes a story about how she was hurt that she said “good morning” to Name-changed Norman and Norman didn’t respond; it made her feel unimportant and unappreciated. Bob comments that, if he were Norman and he didn’t respond, it would have been because he was totally focused on what he was doing and didn’t notice the greeting, not because it was a deliberate snub.
Both people like Bob and people like Alice have information they can acquire from this exchange: Bobs can learn that greetings are more important than they originally thought they were, and Alices can learn that greetings are less important than they originally thought they were. The next time someone doesn’t greet Alice, she can tell herself “they look busy” instead of “I’m not important enough to warrant a greeting;” the next time Bob sees someone that he doesn’t remember greeting that morning, he can greet them to make sure they don’t feel unappreciated.
But the way that Alice and Bob write their comments, and read the other’s comment, will have a big impact on how productive their perspective exchange is. It helps to acknowledge the other person’s perspective, and cast yours as adding to theirs rather than contradicting theirs as much as possible. This is particularly tough when it comes to interpretations: if Alice says Norman was rude and Bob doesn’t think that’s the case, they can get bogged down by confusing the word “rude” for an empirical fact about reality that they can go out there and measure. Standard advice is to word things in terms of feelings: instead of “Norman snubbed me” which asserts intention, something like “I feel less important when Norman doesn’t greet me” is much less contentious, and a discussion about how much Alice’s importance is related to Norman’s greetings is likely to be more productive by virtue of being more precise.
“This is what a real real female would experience, and even then we shouldn’t be aware of it!”
I’m pretty sure there is an awesome steel man some of the epic level contrarian rationalists here could make for this. I would totally pay money to read it for the entertainment value.
I’m pretty sure there is an awesome steel man some of the epic level contrarian rationalists here could make for this.
Of course it’s always possible to argue both sides of a debate. So let’s try it for the sake of argument:
Every human is unique. Effective social interactions means that you listen to the other person. It’s about being in the moment and perceiving the other person without preconceived notions. Being empathic is not about having an intellectual concept of what the other person is going through. It’s about actually feeling the emotion that the other person is feeling with them.
If you want men and women to interact better with each other, you should encourage them to treat each individual uniquely. If a man learns an intellectual concept according to which he should do X whenever a woman does Y, the man isn’t authentically interacting with the woman.
If the man uses an intellectual rule for the interaction he will pay less attention to his own emotions.
How does a man get better at being in the moment? How does he get more in touch with his own emotions, to get a better feeling for the interaction?
Meditation is one approach for which we have good research showing that it improves people’s ability to be in the moment by dealing more effectively with their emotions.
In Zen Buddhism there is the concept of the “beginner’s mind”. The practitioner tries to let go of any preconceived notions in order to be more in touch with the moment. He doesn’t add additional mental rules.
In my own experience my interactions with women are much better for both parties when I’m in the moment and in touch with my emotions than when I’m in my head and think “I don’t want to do anything to upset the woman I’m interacting with”.
How do I know that the interaction is better for the woman and not only myself? When I’m dancing the woman likes to dance closer when I’m in touch with myself instead of being in my head. She also smiles more.
There are a lot of people with Asperger’s who know a lot about what a “real female would experience” on an intellectual level. When it comes to real interaction, however, they are in their heads all the time. They are not in touch with their emotions, and therefore they mess up the social interaction.
If you now start giving a guy all sorts of additional intellectual concepts of how to treat women, you risk that the guy spends more time in his own head. He will be less in touch with his own emotions. Less emotional intelligence means that the social interaction is less pleasant for all participants involved.
While I see the theoretical argument that more knowledge should help, I don’t know of any empirical evidence that it does. I don’t think that men primarily treat women poorly because they have the wrong intellectual concepts; the prime reason is rather low emotional intelligence.
Meditating and letting go of all preconceived notions of what it’s like to be the other person allows us to treat the person with more empathy. Giving someone more stuff to think about while in an interaction would be the opposite of meditation.
If a post has 39 “short comments saying ‘I want to see more posts like this post’” and 153 nitpicks, that says something about the community reaction. This is especially relevant since “but this detail is wrong” seems to be a common reaction to these kinds of issues on geek fora.
(Yes, not nearly all comments are nitpicks, and my meta-complaining doesn’t contribute all that much signal either.)
This is especially relevant since “but this detail is wrong” seems to be a common reaction to these kinds of issues on geek fora.
It feels to me like we both have an empirical disagreement about whether or not this behavior is amplified when discussing “these kind of issues” and a normative disagreement about whether this behavior is constructive or destructive.
For any post, one should expect the number of corrections to be related to the number of things that need to be corrected, modulated by how interesting the post is. A post which three people read is likely to not get any corrections; a post which hundreds of people read is likely to get almost all of its errors noticed and flagged. Discussions about privilege tend to have wide interest, but as a category I haven’t noticed them being significantly better than other posts, and so I would expect them to receive more corrections than posts of similar quality, because they’re wider interest. It could be the case that the posts make people more defensive and thus more critical, but it’s not clear to me that hypothesis is necessary.
In general, corrections seem constructive to me; it both improves the quality of the post and helps bring the author and audience closer together. It can come across as hostile, and it’s often worth putting extra effort into critical comments to make them friendlier and more precise, but I’m curious to hear if you feel differently and if so, why you have that impression.
All of what you say is true; it is also true that I’m somewhat thin-skinned on this point due to negative experiences on non-LW fora; but I also think that there is a real effect. It is true that the comments on this post are not significantly more critical/nitpicky than the comments on How minimal is our intelligence. However, the comments here do seem to pick far more nits than, say, the comments on How to have things correctly.
The first post is heavily fact-based and defends a thesis based on—of necessity—incomplete data and back-projection of mechanisms that are not fully understood. I don’t mean to say that it is a bad post; but there are certainly plenty of legitimate alternative viewpoints and footnotes that could be added, and it is no surprise that there are a lot of both in the comments section.
The second post is an idiosyncratic, personal narrative; it is intended to speak a wider truth, but it’s clearly one person’s very personal view. It, too, is not a bad post; but it’s not a terribly fact-based one, and the comments find fewer nits to pick.
This post seems closer to the second post—personal narratives—but the comment section more closely resembles that of the first post.
As to the desirability of this effect: it’s good to be a bit more careful around whatever minorities you have on the site, and this goes double for when the minority is trying to express a personal narrative. I do believe there are some nits that could be picked in this post, but I’m less convinced that the cumulative improvement to the post is worth the cumulative… well, not quite invalidation, but the comments section does bother me, at least.
It sounds like you are complaining that people are treating arguments as logical constructions that stand or fall based on their own merit, rather than as soldiers for a grand and noble cause which we must endorse lest we betray our own side.
If that’s not what you mean, can you clarify your point better?
That it would be more epistemically and instrumentally productive not to throw up a cloud of nitpicking which closely resembles quite common attempts to avoid getting the point that there is actually a problem here.
The counterpoint to that is “If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse.”
http://www.acceleratingfuture.com/steven/?p=155
Mostly, what David_Gerard says, better than I managed to express it; in part, “be nice to whatever minorities you have”; and finally, yes, “this is a good cause; we should champion it”. “Arguments as soldiers” is partly a valid criticism, but note that we’re looking at a bunch of narratives, not a logical argument; and note that very little “improvement of the other’s arguments” seem to be going on.
I don’t know what you expect when you say “actually engaging what has been said”—the post is a collection of interesting and well-written anecdotes, but it doesn’t actually have a strong central point that is asking for a reaction.
It’s not saying “you should change your behavior in such-and-such a way” or “doing such-and-such a thing is wrong and we should all condemn it” or asking for help or advice or an answer or even opinions …
Perhaps an instance of Why Our Kind Can’t Cooperate; people who agree do not respond… as for me, I find myself with two kinds of responses to these anecdotes. For some, I think “Wow, what an unfortunate example of systemic sexism etc.; how informative, and how useful that this is here.” Other people have already commented to that effect. I’m not sure what I might say in terms of engaging with such content, but perhaps something will come to me, in which case I’ll say something.
For others… well, here’s an example:
It’s lunchtime in fourth grade. I am explaining to Leslie, who has no friends but me, why we should stick together. “We’re both rejects,” I tell her. She draws back, affronted. “We’re not rejects!” she says. I’m puzzled. It hadn’t occurred to me that she wanted to be normal.
My response is a mental shrug. I am male. I can relate to this anecdote completely. I, too, have never much understood the desire to be “normal”, and I find that as I’ve gotten older, I disdain it more and more.
But what has this to do with minimizing the inferential distance between men and women...?
Here’s another:
It’s Bridget’s thirteenth birthday, and four of us are spending the night at her house. While her parents sleep, we are roleplaying that we have been captured by Imperials and are escaping a detention cell. This is not papers-and-dice roleplaying, but advanced make-believe with lots of pretend blaster battles and dodging behind furniture.
Christine and Cass, aspiring writers, use roleplaying as a way to test out plots in which they make daring raids and die nobly. Bridget, a future lawyer, and I, a future social worker, use it as a way to test out moral principles. Bridget has been trying to persuade us that the Empire is a legitimate government and we shouldn’t be trying to overthrow it at all. I’ve been trying to persuade Amy that shooting stormtroopers is wrong. They are having none of it.
We all like daring escapes, though, so we do plenty of that.
The gist of this anecdote seems to be “girls like Star Wars too”. Duly noted. As an anecdote in isolation I can’t say it surprises me. (At least two of my female friends are huge Dr. Who geeks. In general I would be surprised if anyone here found “geek girls exist” to be a novel and unexpected claim.) It’s not necessarily clear what more general conclusion I ought to draw from this, or what conclusion (if any) is implied by the OP, and so the extent of my potential engagement is limited.
I think the point of the Star Wars anecdote is:
Women do engage in roleplaying, but when they do, they don’t focus on papers-and-dice fighting and instead have a discussion about moral issues.
The woman who wrote the example with the evil elves probably wanted to show that she didn’t care primarily about battling the evil elves, but rather that she wanted to help the farmers directly.
Well… if that’s the intended point, then I just don’t think it’s well-supported by the anecdote.
I tell the story here of a D&D gaming group I ran which was over half female. I play D&D with several more women on a semi-regular basis. There are some differences in play style between some of the guys I play with and some of the girls I play with, but there’s no monolithic bloc such that I can even begin to generalize, even ignoring the small sample size and selection effects.
To put it another way, the anecdote in question justifies an existentially quantified claim, but in no way does it justify a universally quantified claim. And anything in between requires the stuff that you famously don’t get by pluralizing “anecdote”.
I think the point of the Star Wars anecdote is: Women do engage in roleplaying, but when they do, they don’t focus on papers-and-dice fighting and instead have a discussion about moral issues.
Is that actually true, though? This seems to fit the pattern of “men are combative, women are nurturing”, which is often denounced as a stereotype; at the very least, there is a lot of debate on whether or not this principle is generally applicable.
I’m not saying that the statement is wrong, necessarily; only that I require more evidence to be convinced.
This seems to fit the pattern of “men are combative, women are nurturing”,
I would read it more as “men like to model situations, women like to model people.” This may be a stereotype, but I’ve noticed it to be anecdotally true. Men, when spending time together socially, tend to talk more about sports and politics than women do; women spend more time talking about other people (i.e. gossip) and analyzing their motivations. Fighting elves is a situation; you don’t have to try to understand the elves’ motivations and ‘drama’ in order to fight them.
“This may be a stereotype, but I’ve noticed it to be anecdotally true.”
“but”
What do you think stereotypes are? Generally they tend to be statements that are true 30-90% of the time, which should provide plenty of room for confirming anecdotes.
It is possible for people to criticize or comment on specific (possibly minor) issues while still learning from or getting the overall set of points made by something.
It bothers me how many of these comments pick nits (“plowing isn’t especially feminine”, “you can’t unilaterally declare Crocker’s Rules”) instead of actually engaging with what has been said.
Those are things that actually are said. If a point is blatantly wrong or the entire usage of “Crocker’s Rules” is, in fact, inappropriate then those things are wrong and inappropriate and can be declared as such. If it happened that nobody engaged with the intended point of the article that would perhaps just indicate that people weren’t interested (or weren’t interested in discussing it here). That is… not the case.
How is gwern still allowed on this site without making a significant apology and reparations? It is making me seriously reconsider any funding that I would give to CFAR or SIAI.
I think gwern’s expressed attitudes toward transsexuals are both harmful and not rationally defensible — i.e. if he thought about them sensibly with access to good data, he’d want to change them rather than parading them.
However, I don’t think LW should ban people on the basis of that sort of attitude. Everyone is an asshole on some topic. (Me, I can be an asshole about open source. Some of my best friends are Windows users, but ….)
Coercing “apology and reparations” is counterproductive because of the example it sets. It would mean that anyone who takes sufficient control here is in a position to make that sort of demand of others. That’s an undesirable concentration of power and opportunity for blackmail.
FYI, we have racists and misogynists here, too. I sure wish they would recognize that they should stay the hell off of the topics upon which they are cranks.
FYI, we have racists and misogynists here, too. I sure wish they would recognize that they should stay the hell off of the topics upon which they are cranks.
We agree that there are cranks on race and sex here; we just disagree on which side it is. It is hard to differentiate between being a crank and there being pervasive irrationality on a forum dedicated to human rationality.
How is gwern still allowed on this site without making a significant apology and reparations?
Are you suggesting banning users from LW if they make any unwelcoming comments anywhere else without apologizing for them? The absence of that policy seems to be the “how,” and I think I much prefer not having that policy to having that policy.
It is making me seriously reconsider any funding that I would give to CFAR or SIAI.
Is your true rejection of funding CFAR or SIAI that they don’t have a policy in place for the forum affiliated with them? I’m having a hard time picturing the value system which says “AI risk is the most important place for my charitable dollars, and SIAI is well-poised to turn additional donated dollars into lowered AI risk, but donations should go elsewhere until they alter the policy on their associated internet forum so that a user apologizes for trans-unfriendly comments made offsite.”
Is your true rejection of funding CFAR or SIAI that they don’t have a policy in place for the forum affiliated with them? I’m having a hard time picturing the value system which says “AI risk is the most important place for my charitable dollars, and SIAI is well-poised to turn additional donated dollars into lowered AI risk, but donations should go elsewhere until they alter the policy on their associated internet forum so that a user apologizes for trans-unfriendly comments made offsite.”
He could instead mean something closer to “AI risk seems to be an important target for charitable dollars, but the SIAI’s lack of careful control and moderation of their own fora, even given the potential PR risk, makes me question whether they are competent enough or organized enough to substantially help deal with AI risk.”
But I suspect the value system in question here is actually one where charity is intertwined with signaling and buying fuzzies. In that context, not giving charity to an organization that has had some connection to an individual who says disgusting things (or low-status things) makes sense.
But I suspect the value system in question here is actually one where charity is intertwined with signaling and buying fuzzies. In that context, not giving charity to an organization that has had some connection to an individual who says disgusting things (or low-status things) makes sense.
Agreed, but I suspect that if one is donating to charity for signaling and buying fuzzies, they are unlikely to donate to CFAR or SIAI in the first place, since there are other places that offer warmer fuzzies and signals that resonate with wider audiences.
It may be difficult to actually decide which makes the most sense to donate to to maximize signaling (especially because doing so consciously can itself be difficult). Moreover, if one is trying to maximize signaling it may make sense to donate to a bunch of different causes. And some degree of signaling and fuzzy-buying is likely mediated by one’s peer group, so if one spends time on LW or in closely aligned circles then CFAR and SIAI may be effective places to purchase signaling credibility with the people one cares about.
He could instead mean something closer to “AI risk seems to be an important target for charitable dollars, but the SIAI’s lack of careful control and moderation of their own fora, even given the potential PR risk, makes me question whether they are competent enough or organized enough to substantially help deal with AI risk.”
That is indeed my concern. If CFAR can’t avoid a Jerry Sandusky/Joe Paterno type scenario (which I consider it reasonably capable of, given that one of its founders wrote HPMOR), then it is literally a horrendous joke and I should be allocating my contributions somewhere more productive.
That is indeed my concern. If CFAR can’t avoid a Jerry Sandusky/Joe Paterno type scenario (which I consider it reasonably capable of, given that one of its founders wrote HPMOR), then it is literally a horrendous joke and I should be allocating my contributions somewhere more productive.
This confuses me. First of all, the probability of such a scenario is tiny (how many universities have the exact same complete lack of safeguards and transparency, and how many had an international scandal?). Second, the difference between writing HPMOR and being associated with one of the most prominent universities in the US seems pretty large. A small point that does back up your concerns somewhat: it may be worth noting that the SI did have a serious embezzlement problem early on. But the difference between “has an unmoderated IRC forum where people say hateful stuff” and the scale of a massive cover-up of a decade-long pedophilia scandal seems pretty clear. Finally, the inability to potentially deal with an unlikely scandal, even if one did have evidence for that, isn’t a reason to think that they are incompetent in other ways.
Frankly, it seems as an outside observer that your reaction is likely more connected to the simple fact that these were pretty disgusting statements that can easily trigger a large emotional reaction. But this website is devoted to rationality, and the name of it is Less Wrong. Increasing the world’s total existential risk because a certain person who isn’t even an SI higher-up or anything similar said some hateful things is not a rational move.
But the difference between “has an unmoderated IRC forum where people say hateful stuff” [...]
Lesswrong does not have an unmoderated IRC forum. There is an IRC forum called #lesswrong on freenode which is mostly populated by people who read lesswrong but it has no official LW backing or involvement. SIAI/FHI/CFAR or whoever is in charge of LW should ask the #lesswrong mod to close it and take ##lesswrong if they want it. This is how freenode rules treat unofficial IRC channels.
Anything that seems like support for Dallas or Ritalin/Rational_Brony was unintentional.
As I said before, appealing to an online forum crowd from an associated chat channel, whether official or not, is invariably a bad idea, because of the difference in the expectations of privacy. It harms the forum (but usually not the channel) and so is often a bannable offence in the forums which support banning users. Anyone bringing the same issue up in an unrelated thread, like Dallas did, ought to be banned for trolling.
There was an attempt by someone to change the forum policies (about censorship, that time) by doing something terrible if the policies weren’t changed. EY and company said “we don’t give in to blackmail,” the policies were not changed, and the person possibly carried through on their threat. It’s worth bringing up only to discourage future attempts at blackmail.
Rather, I meant to say: I expect LW posters to largely agree that it can be correct to select an option which has lower expected utility according to a naive calculation, so as to prevent such situations from arising in the first place (in that it is correct to have a decision function that selects such options, and that if you don’t actually select such options then you don’t have that decision function). It seems possibly reasonable to construe an organization having access to high utility but opposing specific human rights issues as creating such a situation (I do not comment on whether or not this is actually the case in our world).
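The reasoning above can be made concrete with a toy calculation (all probabilities and utilities here are invented purely for illustration, not drawn from the discussion):

```python
# Toy illustration (all numbers invented): a policy of refusing threats
# loses badly when a threat is actually carried out, yet can win in
# expectation, because would-be blackmailers who predict refusal
# rarely bother threatening at all.

def expected_utility(p_threat, u_if_threatened):
    # Utility is 0 when no threat is made.
    return p_threat * u_if_threatened

# If you are known to comply, threats are frequent but cheap to resolve.
eu_comply = expected_utility(p_threat=0.9, u_if_threatened=-10)    # -9.0

# If you are known to refuse, threats are rare but get carried out.
eu_refuse = expected_utility(p_threat=0.05, u_if_threatened=-100)  # -5.0

# Refusing is the better *policy*, even though refusing is worse in the
# naive moment-of-threat calculation (-100 < -10).
assert eu_refuse > eu_comply
```

This is exactly the sense in which the naively-worse option can belong to the better decision function: the comparison has to be made between policies, before the threat arrives, not between actions afterward.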
A list of outcomes possible in the future (in order of my preference):
(1) We create AI which corresponds to my values.
(2) Life on Earth persists under my value set.
(3) Life on Earth is totally exterminated.
(4) Life on Earth persists under its current value set.
(5) We create an AI which does not correspond to my values.
If LW is not trying to eradicate the scourge of transphobia, then clearly SIAI has moved from 1 to 5, and I should be trying to dismantle it, rather than fund it.
So to be clear, you are claiming that the destruction of all life on Earth is a better alternative than life continuing with the common current values?
(5) We create an AI which does not correspond to my values.
So part of the whole point of attempts at things like CEV is that they will (ideally) not use any individual’s fixed values, but rather will try to use what everyone’s values would be if they were smarter and knew more.
If LW is not trying to eradicate the scourge of transphobia, then clearly SIAI has moved from 1 to 5, and I should be trying to dismantle it, rather than fund it.
If your value set is so focused on the complete destruction of the world rather than letting any deviation from your values be implemented, then I suspect that LW and SI were already trying to accomplish something you’d regard as 5. Moreover, it seems that you are confused about priorities: LW isn’t an organization devoted to dealing with LGBTQ issues. You might as well complain that LW isn’t trying to eradicate malaria. The goal of LW is to improve rationality, and the goal of SI is to construct safe general AI. If one or both of those happens to solve other problems, or results in a value shift making things better for trans individuals, then that will be a consequence, but it doesn’t make it their job to do so.
Frankly, any value system which says “I’d rather have all life destroyed than have everyone live under a value system slightly different from my own” seems more like something out of the worst sort of utopian fanaticism than anything else. One of the major ways human society has improved over time and become more peaceful is that we’ve learned that we don’t have to frame everything as an existential struggle. Sometimes it does actually make sense to compromise, or at least to wait to resolve things. We live in an era of truly awesome weaponry, and it is only this willingness to place the survival of humanity over disagreements in values that has seen us to this day. It is from the moderation of Reagan, Nixon, Carter, Khrushchev, Brezhnev, Andropov and others that we are around to have this discussion instead of trying desperately to survive in the crumbled, radioactive ruins of human civilization.
If CFAR can’t avoid a Jerry Sandusky/Joe Paterno type scenario
So, I agree that any organization that works with minors should be held to high standards (and CFAR does run a camp for high schoolers). I don’t think the forum policy gives much evidence about the likelihood of children being victimized by employees, though.
which I think it is reasonably capable of avoiding, given that one of its founders wrote HPMOR
It’s not clear to me how skill at writing HPMOR is related to skill at avoiding PR gaffes. Have you looked at EY’s OkCupid page? There are a lot of things there that don’t look like they’re written with public relations in mind.
ivan: Someone just told me… “well… having their food labeled as GMO makes them uncomfortable like having sex with a trans person”
>.<
[18:10]
whaaat?
That seems pretty plausible.
Not particularly backed intuitive dislike.
I mean, conditional on uncomfortability of both.
Algo: makes sense. both are unnatural and deceptive
gwern: Both are?
[18:13]
Algo: yeah, one is a monstrous abortion pretending to be its opposite and deluding the eye thanks to the latest scientific techniques, and the other is a weird fruit
gwern, “deceptive” is a pretty terrible word to use for trans people.
gwern, what a disgusting thing to say.
[18:14]
startling: more or less disgusting than a GMO fruit rotting for a week?
inquiring minds need to know!
And it wasn’t even an isolated incident:
also all of my anger toward drethelin is completely gone
[20:54]
startling: maybe he started estrogen supplementation
gwern, okay?
startling: we won’t judge him for it. well, maybe you won’t, I find trannies really creepy
There wasn’t even the possibility that it was some bizarre form of “off-color humor”. Gwern admitted it himself:
I realize that, which is why I avoid anything to do with transexuals on LW: I won’t defend my feelings since I know perfectly well that objectively there is no reason to dislike such people, but my feelings exist anyway and mean that anything I might write on the topic is fruit of a poisoned tree.
IRC, on the other hand, is ephemeral and officially not publicly logged so I don’t put as much of a filter on my stream of consciousness.
So this looks pretty nasty and is frankly disappointing. But he’s acknowledged the irrational aspect of it and hasn’t brought the statements themselves to LW. Moreover, as Gwern correctly notes, IRC is a medium where people often lack any substantial filter. The proper response would be for Gwern to just avoid discussing these issues (which in fact he says he does). In any event, I fail to see how these comments mandate “reparations”. If people on IRC want to appropriately rebuke him when he says this sort of knee-jerk stupid shit when it comes up, that makes sense. The connection this has to SI or CFAR is pretty minimal.
This is one of the worst posts that I’ve ever seen on LW. Though I agree completely that gwern’s comments are inappropriate and unacceptable, they’re off-the-cuff remarks in a private setting not intended for the record, and he shouldn’t be pilloried for them.
It bothers me how many of these comments pick nits (“plowing isn’t especially feminine”, “you can’t unilaterally declare Crocker’s Rules”) instead of actually engaging with what has been said.
(And those are just women’s issues; women are not the only group that sometimes has problems in geek culture, or specifically on Less Wrong.)
What would differentiate picking nits from engaging with what was said?
Like SaidAchmiz points out, there’s not all that much to say when someone shares information. I’m certainly not going to share the off-site experiences of female friends that were told to me in confidence, and my experiences are not particularly relevant, and so I don’t have much to add.
One of the issues that has poisoned conversations about feminism I have been in previously, and which I sincerely hope does not happen here, is that the feminists in the conversation did not have a strong ability to discern between useful and useless criticisms. I understand that many people don’t listen to women, especially about their experience as women; I understand that many people dismiss good feminist arguments, or challenge them with bad arguments.
But when people do listen, and respond with good arguments- and then their good arguments are trivialized or dismissed- then we’re not having a conversation, but a lecture. The people putting forth good arguments realize they’re not welcome and leave, and only the trolls are left.
Especially in the context of minimizing inferential distance, it’s important to have experience exchange both ways. For example, DMs shutting down a player’s attempt to deviate from the script is a common enough experience that I expect more than half of D&D players can relate, and letting the person who shared the anecdote know that “yep, this is a common problem” is valuable information that can help them feel less singled out. Of course, this can be interpreted as a status-reduction move; they’re trivializing the concerns and making the speaker less special! This is the uncharitable interpretation and so in general I recommend against it.
It really bothers me that you’re not taking seriously either the (hopefully unintentional) misuse of Crocker’s Rules or the unintentional violation of IRC norms. Those rules apply to everyone and are in place for good reason, and pointing out rule violations should not be seen as picking nits if you want those rules to stick around.
I think this is an excellent point, and in the interests both of minimizing inferential distance and perhaps making some other points relevant to smart/geeky women’s issues, I offer a personal anecdote:
My early experiences as a D&D player included some memorable instances when I tried to “deviate from script”, though at the time I didn’t entirely understand that there was a script and that I was deviating from it; I was doing what seemed to make sense in my character’s situation. My DMs would sometimes be unprepared, would respond either by explicitly stating that I had gone off script or by more subtly trying to corral me back onto the rails, and some frustration would ensue; I would be frustrated because I felt like my freedom of character action, my ability to flex my imagination, was being curtailed.
My DMs were frustrated too, though the nature of the DM’s frustration was not something I understood until later, when I started to DM my own games, and learned firsthand about the way combinatorial explosion rears its head in adventure and world design, about the difficulty of anticipating the imaginations of several intelligent, creative, self-selected-for-out-of-the-box-thinking people, and many other issues. As a DM, these problems are solvable with effort and practice, and I’ve gotten better over the almost 10 years that I’ve been a DM; I try rather hard to set up my world and adventures to allow for maximum freedom of choice and action (or at least the convincing illusion of such; much DMing comes down to sleight-of-hand).
Most of my DMing experience has been for an all-male group of experienced tabletop gamers, but recently I had the opportunity to run a semi-regular game for a group that was (shock and gasp!) majority-female. About half of the players, including two of the girls*, were entirely new to D&D and tabletop roleplaying in general; this was their very first game.
The games and my DMing met with satisfaction; all involved, as far as I can tell, enjoyed themselves, to the extent that after the game ended and we had to go our separate ways (the setting for this was a summer-long internship), a couple of the first-timers immediately went on to seek out regular D&D groups, which means that the D&D game I ran was what got them into this particular part of geekdom (that is, tabletop roleplaying gaming). All the players who expressed their satisfaction — including, notably, the first-timers — said that prominent among the things that contributed to their enjoyment of the game was the feeling of freedom, of options; the sense that their imagination and creativity in deciding what their characters could do, was not artificially constrained.
I took pride in this, because I’ve worked hard to develop the DMing skills that allow for such flexibility; my own early experiences are what prompted me to keep firmly in mind this particular failure mode of DMing (the inflexible script). I took pride also in being the vehicle through which intelligent women are introduced to geekdom (or, for those who were already geeks but in different ways, have their horizons expanded).
Of course, a certain awareness of women’s experiences, such as those mentioned in this post, and of certain of the sorts of gender-related failures that plague geekdom, did also (I hope!) help in creating the sort of atmosphere in which female geeks/gamers could feel comfortable.
* “girls”: college-age women, several years younger than me. No belittlement intended.
Maybe we need a “minimize inferential distance to DMs” thread?
See “Better Disagreement”. Nitpicking occupies level DH3-4: mere contradiction and responding to minor points, but not addressing the central point of the post.
(If you disagree with the rubric presented in “Better Disagreement”, respond there.)
I think Better Disagreement uses a confrontational lens that isn’t particularly suited to these situations. If the central point of the post is “these are real female experiences that you should be aware of,” DH7 seems like a cruel joke at best: “This is what a real real female would experience, and even then we shouldn’t be aware of it!”
It seems to me that helpful complaint comments will often come in two forms: error correction and alternative perspectives. If, say, an anecdote about EY in one of these posts spelled his name “Elezer,” pointing out that they missed an “i” could be labeled as nit picking, but it doesn’t seem like a helpful label: fix it, say thanks, and be happy that the post is better! If most of the comments are minor corrections, but the post is highly upvoted, remember that each of those upvotes is a short comment saying “I want to see more posts like this post.” (If most of the comments are corrections and the post has low karma, the post has deeper problems that should get fixed.)
Alternative perspectives are trickier territory. Suppose that Anonymous Alice writes a story about how she was hurt that she said “good morning” to Name-changed Norman and Norman didn’t respond; it made her feel unimportant and unappreciated. Bob comments that, if he were Norman and he didn’t respond, it would have been because he was totally focused on what he was doing and didn’t notice the greeting, not because it was a deliberate snub.
Both people like Bob and people like Alice have information they can acquire from this exchange- Bobs can learn that greetings are more important than they originally thought they were, and Alices can learn that greetings are less important than they originally thought they were. The next time someone doesn’t greet Alice, she can tell herself “they look busy” instead of “I’m not important enough to warrant a greeting;” the next time Bob sees someone that he doesn’t remember greeting that morning, he can greet them to make sure they don’t feel unappreciated.
But the way that Alice and Bob write their comments, and read the other’s comment, will have a big impact on how productive their perspective exchange is. It helps to acknowledge the other person’s perspective, and cast yours as adding to theirs rather than contradicting theirs as much as possible. This is particularly tough when it comes to interpretations- if Alice says Norman was rude and Bob doesn’t think that’s the case, they can get bogged down by confusing the word “rude” for an empirical fact about reality that they can go out there and measure. Standard advice is to word things in terms of feelings: instead of “Norman snubbed me” which asserts intention, something like “I feel less important when Norman doesn’t greet me” is much less contentious, and a discussion about how much Alice’s importance is related to Norman’s greetings is likely to be more productive by virtue of being more precise.
I’m pretty sure there is an awesome steel man some of the epic level contrarian rationalists here could make for this. I would totally pay money to read it for the entertainment value.
Too bad it would cause epic drama too.
Of course it’s always possible to argue both sides of a debate. So let’s try it for the sake of the argument:
Every human is unique. Effective social interactions means that you listen to the other person. It’s about being in the moment and perceiving the other person without preconceived notions. Being empathic is not about having an intellectual concept of what the other person is going through. It’s about actually feeling the emotion that the other person is feeling with them.
If you want men and women to interact better with each other, you should encourage them to treat each individual uniquely. If a man learns an intellectual concept according to which he should do X whenever a woman does Y, the man isn’t authentically interacting with the woman. If the man uses an intellectual rule for the interaction, he will pay less attention to his own emotions.
How does a man get better at being in the moment? How does he get more in touch with his own emotions, to get a better feeling for the interaction?
Meditation is one way: we have good research showing that meditation improves people’s ability to be in the moment by helping them deal more effectively with their emotions. In Zen Buddhism there is the concept of the “beginner’s mind”: the practitioner tries to let go of any preconceived notions in order to be more in touch with the moment. He doesn’t add additional mental rules.
In my own experience my interactions with women are much better for both parties when I’m in the moment and in touch with my emotions than when I’m in my head and think “I don’t want to do anything to upset the woman I’m interacting with”. How do I know that the interaction is better for the woman and not only myself? When I’m dancing the woman likes to dance closer when I’m in touch with myself instead of being in my head. She also smiles more.
There are a lot of people with Asperger’s who know a lot about what a “real female would experience” on an intellectual level. When it comes to real interaction, however, they are in their heads all the time. They are not in touch with their emotions, and therefore they mess up the social interaction.
If you now start giving a guy all sorts of additional intellectual concepts of how to treat women, you risk that the guy spends more time in his own head. He will be less in touch with his own emotions. Less emotional intelligence means that the social interaction is less pleasant for all participants involved.
While I see the theoretical argument that more knowledge should help, I don’t know of any empirical evidence that it does. I don’t think that men primarily treat women poorly because they have the wrong intellectual concepts; the prime reason is rather low emotional intelligence.
Meditating and letting go of all preconceived notions of what it’s like to be the other person allows us to treat the person with more empathy. Giving someone more stuff to think about while in an interaction would be the opposite of meditation.
If a post has 39 short comments saying “I want to see more posts like this post” and 153 nitpicks, that says something about the community reaction. This is especially relevant since “but this detail is wrong” seems to be a common reaction to these kinds of issues on geek fora.
(Yes, not nearly all posts are nitpicks, and my meta-complaining doesn’t contribute all that much signal either.)
See “Support That Sounds Like Dissent”.
It feels to me like we both have an empirical disagreement about whether or not this behavior is amplified when discussing “these kind of issues” and a normative disagreement about whether this behavior is constructive or destructive.
For any post, one should expect the number of corrections to be related to the number of things that need to be corrected, modulated by how interesting the post is. A post which three people read is likely to not get any corrections; a post which hundreds of people read is likely to get almost all of its errors noticed and flagged. Discussions about privilege tend to have wide interest, but as a category I haven’t noticed them being significantly better than other posts, and so I would expect them to receive more corrections than posts of similar quality, because they’re wider interest. It could be the case that the posts make people more defensive and thus more critical, but it’s not clear to me that hypothesis is necessary.
In general, corrections seem constructive to me; it both improves the quality of the post and helps bring the author and audience closer together. It can come across as hostile, and it’s often worth putting extra effort into critical comments to make them friendlier and more precise, but I’m curious to hear if you feel differently and if so, why you have that impression.
All of what you say is true; it is also true that I’m somewhat thin-skinned on this point due to negative experiences on non-LW fora; but I also think that there is a real effect. It is true that the comments on this post are not significantly more critical/nitpicky than the comments on How minimal is our intelligence. However, the comments here do seem to pick far more nits than, say, the comments on How to have things correctly.
The first post is heavily fact-based and defends a thesis based on—of necessity—incomplete data and back-projection of mechanisms that are not fully understood. I don’t mean to say that it is a bad post; but there are certainly plenty of legitimate alternative viewpoints and footnotes that could be added, and it is no surprise that there are a lot of both in the comments section.
The second post is an idiosyncratic, personal narrative; it is intended to speak a wider truth, but it’s clearly one person’s very personal view. It, too, is not a bad post; but it’s not a terribly fact-based one, and the comments find fewer nits to pick.
This post seems closer to the second post—personal narratives—but the comment section more closely resembles that of the first post.
As to the desirability of this effect: it’s good to be a bit more careful around whatever minorities you have on the site, and this goes double for when the minority is trying to express a personal narrative. I do believe there are some nits that could be picked in this post, but I’m less convinced that the cumulative improvement to the post is worth the cumulative… well, not quite invalidation, but the comments section does bother me, at least.
It sounds like you are complaining that people are treating arguments as logical constructions that stand or fall based on their own merit, rather than as soldiers for a grand and noble cause which we must endorse lest we betray our own side.
If that’s not what you mean, can you clarify your point better?
That it would be more epistemically and instrumentally productive not to throw up a cloud of nitpicking which closely resembles quite common attempts to avoid getting the point that there is actually a problem here.
Why are you defending scoundrels again? :P
The counterpoint to that is “If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse.” http://www.acceleratingfuture.com/steven/?p=155
Mostly, what David_Gerard says, better than I managed to express it; in part, “be nice to whatever minorities you have”; and finally, yes, “this is a good cause; we should champion it”. “Arguments as soldiers” is partly a valid criticism, but note that we’re looking at a bunch of narratives, not a logical argument; and note that very little “improvement of the other’s arguments” seem to be going on.
Have you read the comment sections on this site before? I don’t think LWers were any more nitpicky than usual.
So, I just wanna be sure I understand the substance of your reply:
JoachimSchipper is expressing frustration with nitpicking, and your (nitpicky) reply is that it’s not unusually nitpicky?
Yep. And you responded by nitpicking one meta level up. I love this site.
Just one? EDIT: Okay, okay. ;p
This comment is relevant.
I don’t know what you expect when you say “actually engaging what has been said”—the post is a collection of interesting and well-written anecdotes, but it doesn’t actually have a strong central point that is asking for a reaction.
It’s not saying “you should change your behavior in such-and-such a way” or “doing such-and-such a thing is wrong and we should all condemn it” or asking for help or advice or an answer or even opinions …
Perhaps an instance of Why Our Kind Can’t Cooperate; people who agree, do not respond… as for me, I find myself with two kinds of responses to these anecdotes. For some, I think “Wow, what an unfortunate example of systemic sexism etc.; how informative, and how useful that this is here.” Other people have already commented to that effect. I’m not sure what I might say in terms of engaging with such content, but perhaps something will come to me, in which case I’ll say something.
For others… well, here’s an example:
My response is a mental shrug. I am male. I can relate to this anecdote completely. I, too, have never much understood the desire to be “normal”, and I find that as I’ve gotten older, I disdain it more and more.
But what has this to do with minimizing the inferential distance between men and women...?
Here’s another:
The gist of this anecdote seems to be “girls like Star Wars too”. Duly noted. As an anecdote in isolation I can’t say it surprises me. (At least two of my female friends are huge Dr. Who geeks. In general I would be surprised if anyone here found “geek girls exist” to be a novel and unexpected claim.) It’s not necessarily clear what more general conclusion I ought to draw from this, or what conclusion (if any) is implied by the OP, and so the extent of my potential engagement is limited.
I think the point of the Star Wars anecdote is: women do engage in roleplaying, but when they do, they don’t focus on paper-and-dice fighting and instead have a discussion about moral issues.
The woman who wrote the example with the evil elves probably wanted to show that she didn’t care primarily about battling the evil elves, but rather wanted to help the farmers directly.
Well… if that’s the intended point, then I just don’t think it’s well-supported by the anecdote.
I tell the story here of a D&D gaming group I ran which was over half female. I play D&D with several more women on a semi-regular basis. There are some differences in play style between some of the guys I play with and some of the girls I play with, but there’s no monolithic bloc such that I can even begin to generalize, even ignoring the small sample size and selection effects.
To put it another way, the anecdote in question justifies an existentially quantified claim, but in no way does it justify a universally quantified claim. And anything in-between requires that stuff that you famously don’t get by pluralizing “anecdote”.
Is that actually true, though ? This seems to fit the pattern of “men are combative, women are nurturing”, which is often denounced as a stereotype; at the very least, there is a lot of debate on whether or not this principle is generally applicable.
I’m not saying that the statement is wrong, necessarily; only that I require more evidence to be convinced.
I would read it more as “men like to model situations, women like to model people.” This may be a stereotype, but I’ve noticed it to be anecdotally true. Men, when spending time together socially, tend to talk more about sports and politics than women do; women spend more time talking about other people (i.e. gossip) and analyzing their motivations. Fighting elves is a situation; you don’t have to try to understand the elves’ motivations and ‘drama’ in order to fight them.
“This may be a stereotype, but I’ve noticed it to be anecdotally true.” “But”? What do you think stereotypes are? Generally they tend to be statements that are true 30-90% of the time, which should provide plenty of room for confirming anecdotes.
It is possible for people to criticize or comment on specific (possibly minor) issues while still learning from something, or getting the overall set of points it makes.
Those are things that actually are said. If a point is blatantly wrong or the entire usage of “Crocker’s Rules” is, in fact, inappropriate then those things are wrong and inappropriate and can be declared as such. If it happened that nobody engaged with the intended point of the article that would perhaps just indicate that people weren’t interested (or weren’t interested in discussing it here). That is… not the case.
How is gwern still allowed on this site without making a significant apology and reparations? It is making me seriously reconsider any funding that I would give to CFAR or SIAI.
I think gwern’s expressed attitudes toward transsexuals are both harmful and not rationally defensible — i.e. if he thought about them sensibly with access to good data, he’d want to change them rather than parading them.
However, I don’t think LW should ban people on the basis of that sort of attitude. Everyone is an asshole on some topic. (Me, I can be an asshole about open source. Some of my best friends are Windows users, but ….)
Coercing “apology and reparations” is counterproductive because of the example it sets. It would mean that anyone who takes sufficient control here is in a position to make that sort of demand of others. That’s an undesirable concentration of power and opportunity for blackmail.
FYI, we have racists and misogynists here, too. I sure wish they would recognize that they should stay the hell off of the topics upon which they are cranks.
We agree that there are cranks on race and sex here; we just disagree on which side they are on. It is hard to differentiate between being a crank and there being pervasive irrationality on a forum dedicated to human rationality.
Are you suggesting banning users from LW if they make any unwelcoming comments anywhere else without apologizing for them? The absence of that policy seems to be the “how,” and I think I much prefer not having that policy to having that policy.
Is your true rejection to funding CFAR or SIAI that they don’t have a policy in place for the forum affiliated with them? I’m having a hard time picturing the value system which says “AI risk is the most important place for my charitable dollars, and SIAI is well-poised to turn additional donated dollars into lowered AI risk, but donations should go elsewhere until they alter the policy on their associated internet forum so that a user apologizes for trans-unfriendly comments made offsite.”
He could instead mean something closer to “AI risk seems to be an important contribution for charitable dollars, but the SIAI’s lack of careful control and moderation of their own fora even given its potential PR risk makes me question whether they are competent enough or organized enough to substantially help deal with AI risk.”
But I suspect the value system in question here is actually one where charity is intertwined with signaling and buying fuzzies. In that context, not giving charity to an organization that has had some connection to an individual who says disgusting things (or low-status things) makes sense.
Agreed, but I suspect that if one is donating to charity for signaling and buying fuzzies, they are unlikely to donate to CFAR or SIAI in the first place, since there are other places that offer warmer fuzzies and signals that resonate with wider audiences.
It may be difficult to actually decide which makes the most sense to donate to to maximize signaling (especially because doing so consciously can itself be difficult). Moreover, if one is trying to maximize signaling it may make sense to donate to a bunch of different causes. And some degree of signaling and fuzzy-buying is likely mediated by one’s peer group, so if one spends time on LW or in closely aligned circles then CFAR and SIAI may be effective places to purchase signaling credibility with the people one cares about.
That is indeed my concern. If CFAR can’t avoid a Jerry Sandusky/Joe Paterno type scenario (which I consider it reasonably probable it can, given that one of its founders wrote HPMOR), then it is literally a horrendous joke and I should be allocating my contributions somewhere more productive.
This confuses me. First, the probability of such a scenario is tiny (how many universities have the same complete lack of safeguards and transparency, and how many of those have had an international scandal?). Second, the gap between writing HPMOR and being associated with one of the most prominent universities in the US seems pretty large. A small point that does back up your concerns somewhat: SI did have a serious embezzlement problem early on. But the difference between “has an unmoderated IRC channel where people say hateful stuff” and a massive cover-up of a decade-long pedophilia scandal seems pretty clear. Finally, a potential inability to deal with an unlikely scandal, even if one did have evidence for it, isn’t a reason to think that they are incompetent in other ways.
Frankly, it seems as an outside observer that your reaction is likely more connected to the simple fact that these were pretty disgusting statements that can easily trigger a large emotional reaction. But this website is devoted to rationality, and the name of it is Less Wrong. Increasing the world’s total existential risk because a certain person who isn’t even an SI higher-up or anything similar said some hateful things is not a rational move.
Less Wrong does not have an unmoderated IRC channel. There is an IRC channel called #lesswrong on freenode which is mostly populated by people who read Less Wrong, but it has no official LW backing or involvement. SIAI/FHI/CFAR or whoever is in charge of LW should ask the #lesswrong mod to close it and take ##lesswrong if they want it. This is how freenode rules treat unofficial IRC channels.
Anything that seems like support for Dallas or Ritalin/Rational_Brony was unintentional.
As I said before, appealing to an online forum crowd from an associated chat channel, whether official or not, is invariably a bad idea, because of the difference in expectations of privacy. It harms the forum (but usually not the channel), and so is often a bannable offence on forums that support banning users. Anyone bringing the same issue up in an unrelated thread, like Dallas did, ought to be banned for trolling.
Unless this can be construed as blackmail, in which case, it is.
There was an attempt by someone to change the forum policies (about censorship, that time) by doing something terrible if the policies weren’t changed. EY and company said “we don’t give in to blackmail,” the policies were not changed, and the person possibly carried through on their threat. It’s worth bringing up only to discourage future attempts at blackmail.
Rather, I meant to say: I expect LW posters to largely agree that it can be correct to select an option which has lower expected utility according to naive calculation so as to prevent such situations from arising in the first place (in that it is correct to have a decision function that selects such options, and that if you don’t actually select such options then you don’t have that decision function). It seems possibly reasonable to construe an organization having access to high utility but opposing specific human rights issues as creating such a situation (I do not comment on whether or not this is actually the case in our world).
A list of outcomes possible in the future (in order of my preference):
1. We create AI which corresponds to my values.
2. Life on Earth persists under my value set.
3. Life on Earth is totally exterminated.
4. Life on Earth persists under its current value set.
5. We create an AI which does not correspond to my values.
If LW is not trying to eradicate the scourge of transphobia, then clearly SIAI has moved from 1 to 5, and I should be trying to dismantle it, rather than fund it.
So to be clear, you are claiming that the destruction of all life on Earth is a better alternative than life continuing with the common current values?
So part of the whole point of attempts at things like CEV is that they will (ideally) not use any individual’s fixed values, but rather will try to use what everyone’s values would be if they were smarter and knew more.
If your value set is so focused on the complete destruction of the world rather than letting any deviation from your values be implemented, then I suspect that LW and SI were already trying to accomplish something you’d regard as 5. Moreover, it seems that you are confused about priorities: LW isn’t an organization devoted to dealing with LGBTQE issues. You might as well complain that LW isn’t trying to eradicate malaria. The goal of LW is to improve rationality, and the goal of SI is to construct safe general AI. If one or both of those happens to solve other problems or result in a value shift making things better for trans individuals, then that will be a consequence, but it doesn’t make it their job to do so.
Frankly, any value system which says “I’d rather have all life destroyed than everyone live under a value system slightly different than my own” seems more like something out of the worst sort of utopian fanaticism than anything else. One of the major ways human society has improved over time and become more peaceful is that we’ve learned that we don’t have to frame everything as an existential struggle. Sometimes it does actually make sense to compromise, or at least to wait to resolve things. We live in an era of truly awesome weaponry, and it is only this willingness to place the survival of humanity over disagreements in values that has seen us to this day. It is from the moderation of Reagan, Nixon, Carter, Khrushchev, Brezhnev, Andropov and others that we are around to have this discussion instead of trying to desperately survive in the crumbled, radioactive ruins of human civilization.
So, I agree that any organization that works with minors should be held to high standards (and CFAR does run a camp for high schoolers). I don’t think the forum policy gives much evidence about the likelihood of children being victimized by employees, though.
It’s not clear to me how skill at writing HPMOR is related to skill at avoiding PR gaffes. Have you looked at EY’s OkCupid page? There are a lot of things there that don’t look like they were written with public relations in mind.
Why are you writing that here? Did you mean to reply to some other comment or am I missing something?
“How is Eliezer still allowed to censor LW materials? It is making me seriously consider increasing x-risk!”
Can you please explain what this comment refers to?
The link JoachimSchipper refers to shows gwern being pretty clearly evil.
And it wasn’t even an isolated incident:
There wasn’t even the possibility that it was some bizarre form of “off-color humor”. Gwern admitted it himself:
So this looks pretty nasty and is frankly disappointing. But he’s acknowledged the irrational aspect of it and hasn’t brought the statements to LW himself. Moreover, as gwern correctly notes, IRC is a medium where people often lack any substantial filter. The proper response would be for gwern to simply avoid discussing these issues (which, in fact, he says he does). In any event, I fail to see how these comments mandate “reparations”. If people on IRC want to appropriately rebuke him when he says this sort of knee-jerk stupid shit, that makes sense. The connection this has to SI or CFAR is pretty minimal.
This is one of the worst posts that I’ve ever seen on LW. Though I agree completely that gwern’s comments are inappropriate and unacceptable, they’re off-the-cuff remarks in a private setting not intended for the record, and he shouldn’t be pilloried for them.