Yeah, sorry. I agree that my comment “the OP speaks for me” is leading a lot of people to false views that I should correct. It’s somewhat tricky, because there’s a different thing I worry will be obscured by my doing this, but I’ll do it anyhow, as is correct, and try to come back for that different thing later.
To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with a subordinate, much less many of them.
Agreed.
While I think staff at CFAR and MIRI probably engaged in motivated reasoning sometimes wrt PR, neither org engaged in anything close to the level of obsessive, anti-epistemic reputational control alleged in Zoe’s post. CFAR and MIRI staff were certainly not required to sign NDAs agreeing they wouldn’t talk badly about the org—in fact, in my experience CFAR staff much more commonly share criticism of the org than praise. CFAR staff were regularly encouraged to share their ideas at workshops and on LessWrong, to get public feedback. And when we did mess up, we tried extremely hard to publicly and accurately describe our wrongdoing—e.g., Anna and I personally spent hundreds of hours investigating and thinking about the Brent affair, and tried so hard to avoid accidentally doing anti-epistemic reputational control that, in my opinion, our writeup actually makes CFAR seem much more culpable than it was.
I agree that there’s a large difference in both the philosophy of how/whether to manage reputation, and the amount of control exhibited/attempted over how staff would talk about the organizations, with Leverage doing a lot of that and CFAR doing less of it than most organizations.
As I understand it, there were ~3 staff historically whose job description involved debugging in some way that you, Anna, now feel uncomfortable with/think was fucky. But to the best of your knowledge, these situations caused much less harm than e.g. Zoe seems to have experienced, and the large majority of staff did not experience this—in general staff rarely explicitly debugged each other, and when it did happen it was clearly opt-in and fairly symmetrical (e.g., in my personal conversations with you, Anna, I’d guess the ratio of you something-like-debugging me to the reverse is maybe 3/2?).
I think this understates both how many people it happened with and how fucky it sometimes was. (Also, it was part of the job but not the “job description”, although I think Zoe’s was in her “job description”.) I think this one was actually worse in some of the early years than your model of it suggests. My guess is indeed that it involved fewer hours than in Zoe’s case, and was overall less deliberately part of a dynamic quite as fucky as Zoe’s, but as I mentioned to you on the phone, an early peripheral staff member left CFAR for a mental institution in a way that seemed plausibly to do with how debugging and trials worked, and definitely to do with workplace stress of some sort, as well as with a preexisting condition they entered with and didn’t tell us about. (We would’ve handled this better later, I think.) There are some other situations that were also, I think, pretty fucked up, in the sense of “I think the average person would experience some horror/indignation if they took in what was happening.”
I can also think of stories of real scarring outside the three people I was counting.
I… do think it was considerably less weird-looking, and less overtly fucked-up-looking, than the descriptions of Leverage in the 2018-2019 era that I have gotten since writing my “this post speaks for me” comment.
Also, I think most people at CFAR, especially in recent years, suffered none or nearly none of this. (I believe the same was true for parts of Leverage, though I’m not sure.)
So, if we are playing the “compare how bad Leverage and CFAR are along each axis” game (which is not the main thing I took the OP to be doing, at all, nor the main thing I was trying to agree with, at all), I do think Leverage is worse than CFAR on this axis. But I think the “per capita” damage of this sort that hit CFAR staff in the early years (“per capita” rather than cumulative, because Leverage had many more people) was maybe about a tenth of my best guess at what was up in the near-Zoe parts of Leverage in 2018-2019, which is a lot but, yes, different.
CFAR put really a lot of time and effort into trying to figure out how to teach rationality techniques, and how to talk with people about x-risk, without accidentally doing something fucky to people’s psyches. Our training curriculum for workshop mentors includes extensive advice on ways to avoid accidentally causing psychological harm. Harm did happen sometimes, which was why our training emphasized it so heavily. But we really fucking tried, and my sense is that we actually did very well on the whole at establishing institutional and personal knowledge about how to be gentle with people in these situations; personally, it’s the skillset I’d most worry about the community losing if CFAR shut down and more events started being run by other orgs.
We indeed put a lot of effort into this, and got some actual skill and good institutional habits out.
Perhaps this is an opportunity to create an internal document on “unhealthy behaviors” that would list the screwups and the lessons learned, and to read it together regularly, like safety training? (Analogous to how organizations that get their computers hacked or documents stolen describe how it happened as part of their security training.) Perhaps with an anonymous feedback channel for when someone has a concern that MIRI or CFAR is slipping into some bad pattern again.
Also, it might be useful to hire an external psychologist who would, at regular intervals, have a discussion with MIRI/CFAR employees, and to provide this document to the psychologist so they know what risks to focus on. (Furthermore, I think the psychologist should not be a rationalist, to provide a better outside view.)
For starters, someone could create the first version of the document by extracting information from this debate.
EDIT: Oops, on second reading of your comment, it seems like you already have something like this. Uhm, maybe a good opportunity to update/extend the document?
*
As a completely separate topic, it would be nice to have a table with the following columns: “Safety concern”, “What happened in MIRI/CFAR”, “What happened in Leverage (as far as we know)”, “Similarities”, “Differences”. But this is much less important in the long term.