I think there is totally some shared responsibility for any claims that Ben endorsed, and I also think the post could have done a better job of marking many things as explicit quotes, so that they would read as less endorsed in the places where Ben’s ability to independently verify them was limited.
I don’t think all retaliation against Alice would be unacceptable. I think if Alice did indeed make important accusatory claims that were inaccurate, she should face some consequences. I think Ben and Lightcone should also lose points for anything in the post that seems endorsed, or that lacks an explicit disclaimer right next to the relevant piece of text, and that is verified to be false.
We’re working on some comments and posts that engage with that question more thoroughly, and I expect we will take responsibility for some errors here. I also still believe that the overall standard of care and attention in this investigation was really very high, and I expect it won’t be met by future investigations by different people. Some errors are unavoidable given the time available to do this and the complexity of the situation.
Insofar as Ben’s central claims in the post are falsified, I think that would be pretty massive and would make me think we made a much bigger mistake, but that seems quite unlikely to me at this point (though more on that in future comments).
I think if Alice did indeed make important accusatory claims that were inaccurate, she should face some consequences.
What sort of consequences are you thinking could apply, given that she made these accusations pseudonymously and I assume doxxing and libel suits are off limits?
I don’t know, and agree it’s messy, but also it doesn’t seem hopeless.
I think there will be some degree to which clearly demonstrating that false accusations were made will ripple out into the social graph naturally (even with the anonymization), and will have consequences. I also think there are some ways to privately reach out to some smaller subset of people who might have a particularly good reason to know about this.
I think if the accusations are very thoroughly falsified and shown to be highly deceptive in their presentation, I can also imagine some scenarios where it might make sense to stop anonymizing, though I think the bar for that does seem pretty high.
I think there will be some degree to which clearly demonstrating that false accusations were made will ripple out into the social graph naturally (even with the anonymization), and will have consequences. I also think there are some ways to privately reach out to some smaller subset of people who might have a particularly good reason to know about this.
If this is an acceptable resolution, why didn’t you just let the problems with NonLinear ripple out into the social graph naturally?
I think that’s a good question, and indeed I think that should be the default thing that happens!
In this case we decided to do something different because we received a lot of evidence that Nonlinear was actively suppressing negative information about them. As Ben’s post states, the primary reason we got involved with this was that we heard Nonlinear was actively pressuring past employees not to say bad things about them, and many employees we talked to felt very scared of retribution if they told anyone about this, even privately, as long as it could somehow get back to Nonlinear:
Most importantly to me, I especially [wanted to write this post] because it seems to me that Nonlinear has tried to prevent this negative information from being shared
For me the moment I decided that it would be good for us to dedicate substantial time to this was when I saw the screenshotted “your career in EA could be over in a few messages” messages. I think if someone starts sending messages like this, different systems need to kick in to preserve healthy information flow.
(In case people are confused about the vote totals here and in other parts of the thread: practically all my comments on this post, regardless of content, have been getting downvoted shortly after posting, with a total downvote strength of 10, usually split over 2-3 votes. I also think there is a lot of legitimate voting in this thread, but I am pretty confident this specific pattern is happening.)
This matches my experience too. When I initially made pretty milquetoast criticisms here, all of my comments went down by ~10.
An organization gets applications from all kinds of people at once, whereas an individual can only ever work at one org. It’s easier to discreetly contact most of the most relevant parties about some individual than it is to do the same with an organization.
I also think it’s fair to hold orgs that recruit within the EA or rationalist communities to slightly higher standards because they benefit directly from association with these communities.
That said, I agree with habryka (and others) that
I think if the accusations are very thoroughly falsified and shown to be highly deceptive in their presentation, I can also imagine some scenarios where it might make sense to stop anonymizing, though I think the bar for that does seem pretty high.
I agree in general, but think the force of this is weaker in this specific instance because NonLinear seems like a really small org. Most of the issues raised seem to be associated with in-person work and I would be surprised if NonLinear ever went above 10 in-person employees. So at most this seems like one order of magnitude in difference. Clearly the case is different for major corporations or orgs that directly interact with many more people.
Note that one of the reasons I cared about getting this report out was that Nonlinear was getting more influential as a middleman in the AI Safety funding ecosystem, a role through which they affected many people’s lives and, I think, had influence beyond what a naive headcount would suggest. The Nonlinear network had many hundreds of applications.
As a personal example, I also think Lightcone, given that it’s at the center of a bunch of funding and infrastructure work, should be subject to greater scrutiny than specific individuals, given the number of individuals that are affected by our work. And we are about the same size as Nonlinear, I think.