Hi rmoehn,
I haven’t looked at the contents of that talk yet, but I felt uncomfortable about a specific speaker/talk being named and singled out as the target of rather hard-to-respond-to criticism (consider how you might take it if you came across a forum discussion calling your talk misleading and not well-reasoned, without going into any specifics), so I edited out those details for now.
I feel that the AI risk community should do its best to build friendly rather than hostile relationships with mainstream computer science researchers. In particular, there have been cases before where researchers looked at how their work was being discussed on LW, picked up a condescending tone, and decided that LW/AI risk people were not worth engaging with. Writing a response outlining one’s disagreement with the talk (in the style of e.g. “Response to Cegłowski on superintelligence”) wouldn’t be a problem, as it communicates engagement with the talk. But if we are referencing people’s work in a manner that communicates a curt dismissal, I think we should be careful about naming specific people.
The question in general is fine, though. :)
I’ve added specifics. I hope this improves things. If not, feel free to edit it out.
Thanks for pointing out the problems with my question. I see now that I was wrong to combine strong language with no specifics and a concrete target. I would amend it, but then the context for the discussion would be gone.
Thanks, those specifics are great!
We briefly discussed this internally. I reverted Kaj’s edit, since I think we should basically never touch other users’ content unless it deals with a real information hazard, threatens violence, or doxxes a specific individual (plus probably some weird edge cases that are rare and that I can’t easily enumerate, but of which “broad PR concerns” are definitely not an instance).
(We also sometimes edit users’ content if there is broken formatting or something else in that reference class, though that feels like a different kind of thing.)
Probably useful to clarify so people can understand how moderation works:
The semi-official norm around moderation tends to be “moderators have discretion to take actions without waiting for consensus, but should then report the actions they took to the other moderators for sanity checking.” (I don’t think this is formal policy, but I’d personally endorse it being policy. Waiting for consensus often makes it impossible to act in time-sensitive situations, while checking in after the fact gets you most of the benefit.)
This should be advertised in meta.
And also we edit other users’ content when they give us permission to, which happens a lot.
(Note: posted after the parent was retracted.)
consider how you might take it if you came across a forum discussion calling your talk misleading and not well-reasoned, without going into any specifics

I would be grateful for the free marketing! (And entertainment—internet randos’ distorted impressions of you are fascinating to read.) Certainly, it would be better for people to discuss the specifics of your work, but it’s a competitive market for attention out there: vague discussion is better than none at all!
there have been cases before where researchers looked at how their work was being discussed on LW, picked up a condescending tone, and decided that LW/AI risk people were not worth engaging with

If I’m interpreting this correctly, this doesn’t seem very consistent with the first paragraph? First, you seem to be saying that it’s unfair to Sussman to make him the target of vague criticism (“consider how you might take it”). But then you seem to be saying that it looks bad for “us” (you know, the “AI risk community”, Yudkowsky’s robot cult, whatever you want to call it) to be making vague criticisms that will get us written off as cranks (“not worth engaging with”). But I mostly wouldn’t expect both concerns to be operative in the same world—in the possible world where Sussman feels bad about being named and singled out, that means he’s taking “us” seriously enough for our curt dismissal to hurt, but in the possible world where we’re written off as cranks, then being named and singled out doesn’t hurt.
(I’m not very confident in this analysis, but it seems important to practice trying to combat rationalization in social/political thinking??)
But I mostly wouldn’t expect both concerns to be operative in the same world—in the possible world where Sussman feels bad about being named and singled out, that means he’s taking “us” seriously enough for our curt dismissal to hurt, but in the possible world where we’re written off as cranks, then being named and singled out doesn’t hurt.

The world can change as a result of one of the concerns. At first you’re taking someone seriously (or might at least be open to taking them seriously), then they say something hurtful, then you write them off to make it hurt less. Sour grapes.
Also, the reactions of people who are not being directly criticized but who respect the person being criticized are also important. Even if the target of the criticism never saw it, other people in the target’s peer group may also feel disrespected and react in a similar way. (This is not speculation—I’ve seen various computer scientists have this reaction to writings on LW, many times.)
Moderators are discussing this with each other now. We do not have consensus on this.