My current sense is that the world would probably be better if the CEA Community Health team were disbanded and it were transparent that there is little-to-no institutional protection from bullies in the EA ecosystem, so that fewer people get burned by assuming or hoping that the team will play that role.
I’m not sure about this; even if the Community Health and Special Projects team did a bad job on this specific case, that doesn’t indicate much about whether they’re valuable for odd jobs, such as handling external threats to EA. Their website lists four key categories of work, two of which are “Reducing risks related to sensitive projects, like work in policy and politics” and “Finding specialists to work on specific problems, for example, improving public communications around EA or risk-reduction in areas with high geopolitical risk”. I’ve met some of the people working on those matters, which absolutely warrant a dedicated org, and they’re very professional and serious about at least being available as consultants who can handle sensitive situations and work in volatile environments, e.g. journalism.
In addition to geopolitical circumstances, they also could do things like run prediction markets on people researching S-risk, to forecast the odds that they end up going crazy and trying to maximize damage to the AI safety community, like Ziz or Emile Torres. That would beat the traditional approach, which is going into anaphylactic shock and purging anyone who “seems potentially crazy” (including, e.g., people who take things seriously but aren’t good at doing so charismatically).
There are also issues like epistemic-hijacking attacks that focus on key strategic targets in a group or movement, and external threats like that straight-up require a centralized body with broad powers just to have the slightest chance of mounting any deterrence or countermeasures. With slow takeoff, every passing year will make overwhelmingly powerful adversaries and black swan events a greater and greater hazard to the very survival of the AI safety movement itself.
My takeaway from this is that the more EA-adjacent people just don’t seem to be on the same level as the more rat-adjacent people at handling power-dynamic disputes, where the people making up all sides of the dispute instinctively become driven to win at feuds because humans are primates. That is an unsurprising conclusion given that EA-adjacents optimized more for resources/money and rat-adjacents optimized more for mind/skill.
But that doesn’t mean the whole Community Health and Special Projects team should be disbanded.
That sounds like: the PR purpose of the Community Health team is to prevent misbehavior within EA, while the real purpose is to protect powerful people within EA from outside attacks.
So they are like an HR department of a company. Simply getting rid of the HR department likely isn’t good for the average company.
Maybe the solution isn’t to get rid of them but to somehow be more honest about them being like a normal HR department?
I generally agree with this assessment, but I’m not sure about the HR department analogy, and I don’t think that “protecting powerful people within EA from outside attacks” is the right framing; I was thinking more along the lines of weak points and points of failure. It just so happens that important people are also points of failure: for example, if you persuade people that Yudkowsky is evil, then they don’t read the Sequences. And even for people who already benefited from reading the Sequences, it could possibly de-emphasize the cognitive habits they gained from reading them, thus making those people more predictably manipulable (and in an easily measured way).
Basically, such an organization would act to prevent any scandals involving important people from coming to light, which is roughly the opposite of trying to create transparency and consequences for people who engage in scandalous behavior.
That also matches their behavior when Guzey asked them for confidentiality regarding his criticism of William MacAskill and they broke their confidentiality promise.
If their purpose is to protect people like William MacAskill, then breaking such confidentiality promises to help Will better defend against attacks makes sense.
I was thinking more along the lines of SIGINT than HUMINT, and more along the lines of external threats than internal threats. I suppose that if, along the HUMINT lines, Facebook AI Labs hires private investigators to follow Yudkowsky around to find dirt on him, or the NSA starts contaminating people’s mail/packages/food deliveries with IQ-reducing neurotoxins, then yes, I’m definitely saying there should be someone to immediately take strategic action. We’re the people who predicted Slow Takeoff, i.e. that the entire world (and, by extension, the world’s intelligence agencies, whose various operations will all be interrupted by the revelation that all things revolve around AI alignment) would come apart at the seams, and we predicted it decades in advance; we also don’t have the option of giving up on being extraordinary and significant, so we should expect things to get really serious at some point or another. On issues more along traditional community health lines, that’s not my division.
I don’t think that the CEA Community Health Team, as it exists, is an actor that would do much in either of those scenarios or be very helpful in dealing with them.
“they also could do things like run prediction markets on people researching S-risk, to forecast the odds that they end up going crazy”
If this is a real concern, we should check whether fear of hell often drove people crazy.
Every retrospective I know of has shown them to do a terrible job. Note that the failures are not even obviously ideological. They have protected the normal sort of abuser, but they also protected Kathy Forth, who was a serial false accuser (yes, they banned her from some events, but she was still active in EA spaces until her suicide).
Would you expect to see retrospectives of cases where they did a good job? If an investigation concludes that “X made these accusations about Y but we determined them to be meritless”, then there are good reasons for neither CEA nor X to bring further attention to those accusations by including them in a public retrospective. Or in cases where accusations are determined to have merit, it may still be that the victims don’t want the case to be discussed in public any more than strictly necessary. Or there may be concerns of a libel suit from the wrongdoer, limiting what can be said openly.
I am extremely, extremely against disposing of important-sounding EA institutions based on popular will, as this is a vulnerability that is extremely exploitable by outsiders, and we should not create precedents or incentives to exploit that vulnerability.
If I’m wrong about this specific case, it’s because of idiosyncratic details and because I didn’t do as much research on this particular org as other people here did. If I were wrong in this specific case, it would be a very weak update against my stance that EA orgs should be robust against sudden public backlash, driven by features specific to the Community Health team, not a strong update that I’m wrong about the vulnerabilities. The vulnerability that I’m researching is a technical issue, which remains an external threat regardless of the specific details about who did what, and all I can say about it here is that this internal conflict is a lot more dangerous than it appears to any of the participants initiating it.
This is a specific claim about what specific people should do.
We’re basically doomed to continue talking past each other here. You don’t seem to be willing to give tons of detail about how, exactly, the Community Health and Special Projects team is too corrupt to function. I’m not willing to give tons of detail about external threats that are vastly more significant than any internal drama within EA, which means I can’t explain why those external threats actually dominate the calculus of how important the Community Health and Special Projects team is, or whether it should be disbanded.