I signed the secret general release containing the non-disparagement clause when I left OpenAI. From more recent legal advice, I understand that the whole agreement is unlikely to be enforceable, especially under a strict interpretation of the non-disparagement clause like the one described in this post. IIRC, at the time I assumed that such an interpretation (e.g. where OpenAI could sue me for damages for saying some true/reasonable thing) was so absurd that it couldn't possibly be what the clause meant. [1]

I sold all my OpenAI equity last year, to minimize real or perceived CoI with METR's work. I'm pretty sure it never occurred to me that OAI could claw back my equity or prevent me from selling it. [2]
OpenAI recently informally notified me by email that they would release me from the non-disparagement and non-solicitation provisions in the general release (but not, as in some other cases, from the entire agreement). They also said OAI "does not intend to enforce" these provisions in other documents I have signed. It is unclear what the legal status of this email is, given that the original agreement states it can only be modified in writing signed by both parties.
As far as I can recall, concern about financial penalties for violating non-disparagement provisions was never a consideration that affected my decisions. I think having signed the agreement probably had some effect, but more via "I want to have a reputation for abiding by things I signed, so that e.g. labs can trust me with confidential information". And I still assumed that it didn't cover reasonable/factual criticism.
That being said, I do think many researchers and lab employees, myself included, have felt restricted from honestly sharing their criticisms of labs beyond small numbers of trusted people. In my experience, the biggest forces pushing against more safety-related criticism of labs are:
(1) confidentiality agreements: any criticism based on something you observed internally would be prohibited by non-disclosure agreements, so the non-disparagement clause is only relevant when you're criticizing based on publicly available information

(2) labs' informal/soft/not-legally-derived powers, ranging from "being a bit less excited to collaborate on research" or "being stricter about enforcing confidentiality policies with you" to "firing or otherwise making life harder for your colleagues or collaborators" or "lying to other employees about your bad conduct", etc.

(3) a general desire to be researchers / neutral experts rather than an advocacy group.
To state what is probably obvious: I don't think labs should have non-disparagement provisions. I think they should have very clear protections for employees who want to report safety concerns, including when doing so requires disclosing confidential information. I think something like the asks here are a reasonable start, and I also like Paul's idea (link below) of having labs make specific "underlined statements", to which employees can anonymously add caveats or contradictions that will be publicly displayed alongside the statements. I think this would be especially appropriate for commitments about red lines for halting development (e.g. Responsible Scaling Policies): a statement that a lab will "pause development at capability level x until they have implemented mitigation y" is an excellent candidate for an underlined statement.
[1] Regardless of legal enforceability, it also seems like it would be totally against OpenAI's interests to sue someone for making some reasonable safety-related criticism.
[2] I would have sold sooner, but there were only intermittent opportunities to sell. OpenAI did not allow me to donate the equity, put it in a DAF, or gift it to another employee. This maybe makes more sense given what we know now. In lieu of actually being able to sell, I made a legally binding pledge in Sep 2021 to donate 80% of any OAI equity.
Link for Paul's "underlined statements" idea: https://sideways-view.com/2018/02/01/honest-organizations/
Also, FWIW, I'm very confident Chris Painter has never been under any non-disparagement obligation to OpenAI.