I see the concerns as these:
1. The four corners of the agreement seem to define ‘disparagement’ broadly, so one might reasonably fear that (e.g.) being first author on an eval especially critical of OpenAI versus its competitors, or writing a policy document highly critical of OpenAI leadership decisions, might ‘count’.
2. Given Altman’s/OpenAI’s vindictiveness and duplicity, and given the previous ‘safeguards’ (from their perspective) that hand them all the cards over whether folks can realise the value of their equity, “They will screw me out of a lot of money if I do something they really don’t like (regardless of whether it ‘counts’ per the non-disparagement agreement)” seems a credible fear. (It appears Altman tried to get Toner kicked off the board for being critical of OpenAI in a policy piece, after all.)
3. This is indeed moot for roles which require equity to be surrendered anyway, but I’d guess most roles outside government (and maybe some within it) have no such requirement. A conflict of interest along the lines of the first two points makes impartial performance difficult, and credible impartial performance impossible: even if Alice can truthfully swear “My being subject to such an agreement has never influenced my work in AI policy”, reasonable third parties would be unwise to believe her.
4. The ‘non-disclosure of non-disparagement’ makes this worse, as it prevents the conflict of interest from being fully disclosed. “Alice has a bunch of OpenAI equity” is one thing; “Alice has a bunch of OpenAI equity, and has agreed to remain beholden to OpenAI in various ways to keep it” is another. We would want to know the latter to critically appraise Alice’s work whenever it touches OpenAI’s interests (and I would guess much policy/eval/regulatory work is sufficiently relevant that we would want to consider whether Alice’s commitments colour her position). Yet Alice has also promised to keep these extra relevant details secret.