The PR purpose of the Community Health team is to prevent misbehavior within EA. The real purpose is to protect powerful people within EA from outside attacks.
So they are like an HR department of a company. Simply getting rid of the HR department likely isn’t good for the average company.
Maybe the solution isn’t to get rid of them but to somehow be more honest about them being like a normal HR department?
I generally agree with this assessment, but I’m not sure about the HR department analogy, and I don’t think that “protecting powerful people within EA from outside attacks” is the right framing; I was thinking more along the lines of weak points and points of failure. It just so happens that important people are also points of failure. For example, if you persuade people that Yudkowsky is evil, then they don’t read the Sequences. And even for people who have already benefited from reading the Sequences, it could de-emphasize the cognitive habits they gained from them, thus making them more predictably manipulable (and in an easily measured way).
Basically, such an organization would act to prevent any scandals involving important people from coming to light, which is roughly the opposite of trying to create transparency and consequences for people who engage in scandalous behavior.
That also matches their behavior when Guzey asked them to keep his criticism of William MacAskill confidential and they broke that confidentiality promise.
If their purpose is to protect people like William MacAskill, then breaking such confidentiality promises to help Will better defend against attacks makes sense.
I was thinking more along the lines of SIGINT than HUMINT, and more along the lines of external threats than internal threats. I suppose that if, along the HUMINT lines, Facebook AI Labs hires private investigators to follow Yudkowsky around to find dirt on him, or the NSA starts contaminating people’s mail/packages/food deliveries with IQ-reducing neurotoxins, then yes, I’m definitely saying there should be someone to immediately take strategic action. We’re the people who predicted, decades in advance, Slow Takeoff: that the entire world (and, by extension, the world’s intelligence agencies, whose various operations will all be interrupted by the revelation that all things revolve around AI alignment) would come apart at the seams. And we also don’t have the option of giving up on being extraordinary and significant, so we should expect things to get really serious at some point or another. On issues more along traditional community health lines, that’s not my division.
I don’t think the CEA Community Health Team, as it exists, is an actor that would do much in either of those scenarios or be very helpful in dealing with them.