I think clearly demonstrating that false accusations were made will, to some degree, ripple out into the social graph naturally (even with the anonymization) and have consequences. I also think there are some ways to privately reach out to a smaller subset of people who might have a particularly good reason to know about this.
If this is an acceptable resolution, why didn’t you just let the problems with Nonlinear ripple out into the social graph naturally?
I think that’s a good question, and indeed I think that should be the default thing that happens!
In this case we decided to do something different because we received a lot of evidence that Nonlinear was actively suppressing negative information about them. As Ben’s post states, the primary reason we got involved with this was that we heard Nonlinear was actively pressuring past employees not to say bad things about them, and many employees we talked to felt very scared of retribution if they told anyone about this, even privately, as long as it could somehow get back to Nonlinear:
Most importantly to me, I especially [wanted to write this post] because it seems to me that Nonlinear has tried to prevent this negative information from being shared
For me, the moment I decided that it would be good for us to dedicate substantial time to this was when I saw the “your career in EA could be over in a few messages” screenshots. I think if someone starts sending messages like this, different systems need to kick in to preserve healthy information flow.
(In case people are confused about the vote totals here and in other parts of the thread: practically all my comments on this post, regardless of content, have been getting downvoted shortly after posting, with a total downvote strength of 10, usually split over 2-3 votes. I also think there is a lot of legitimate voting in this thread, but I am fairly sure this specific pattern is not.)

This matches my experience too. When I initially made pretty milquetoast criticisms here, all of my comments went down by ~10.
An organization gets applications from all kinds of people at once, whereas an individual can only ever work at one org. It’s easier to discreetly contact most of the most relevant parties about some individual than it is to do the same with an organization.
I also think it’s fair to hold orgs that recruit within the EA or rationalist communities to slightly higher standards because they benefit directly from association with these communities.
That said, I agree with habryka (and others) that
I think if the accusations are very thoroughly falsified and shown to be highly deceptive in their presentation, I can also imagine some scenarios where it might make sense to stop anonymizing, though I think the bar for that does seem pretty high.
I agree in general, but I think the force of this is weaker in this specific instance because Nonlinear seems like a really small org. Most of the issues raised seem to be associated with in-person work, and I would be surprised if Nonlinear ever went above 10 in-person employees. So at most this seems like one order of magnitude of difference. Clearly the case is different for major corporations or orgs that directly interact with many more people.
Note that one of the reasons I cared about getting this report out was that Nonlinear was getting more influential as a middleman in the AI Safety funding ecosystem, through which they affected many people’s lives and, I think, had influence beyond what a naive headcount would suggest. The Nonlinear Network received many hundreds of applications.
As a personal example, I also think Lightcone, given that it’s at the center of a bunch of funding and infrastructure work, should be subject to greater scrutiny than specific individuals, given the number of individuals affected by our work. And we are about the same size as Nonlinear, I think.