By “gag order” do you mean just as a matter of private agreement, or something heavier-handed, with e.g. potential criminal consequences?
I have trouble understanding the near-total silence here. There seem to be very few leaks, and all of them are very mild-mannered and are failing to build any consensus narrative that challenges OA’s press in the public sphere.
Are people not able to share info over Signal or otherwise tolerate some risk here? It doesn’t add up to me if the risk is just some chance of OA trying to then sue you to bankruptcy, especially since I think a lot of us would offer support in that case, and the media wouldn’t paint OA in a good light for it.
I am confused. (And I am grateful to William for at least saying this much, given the climate!)
I would guess that there isn’t a clear smoking gun that people aren’t sharing because of NDAs, just a lot of more subtle problems that add up to leaving (and in some cases saying OpenAI isn’t being responsible etc).
This is consistent with the observation of the board firing Sam but not having a clear crossed line to point at for why they did it.
It’s usually easier to notice when the incentives are pointing somewhere bad than to explain what’s wrong with them, and it’s easier to notice when someone is being a bad actor than it is to articulate what they did wrong. (Both of these run a higher risk of false positives relative to more crisply articulable problems.)
The lack of leaks could just mean that there’s nothing interesting to leak. Maybe William and others left OpenAI over run-of-the-mill office politics and there’s nothing exceptional going on related to AI.
Rest assured, there is plenty that could leak at OA… (And might were there not NDAs, which of course is much of the point of having them.)
For a past example, note that no one knew Sam Altman had been fired as YC CEO for reasons similar to those behind his OA firing, until the extreme aggravating factor of the OA coup surfaced it 5 years later. That was certainly more than ‘run of the mill office politics’, I’m sure you’ll agree, but if that could be kept secret, surely lesser things now could be kept secret well past 2029?
At least one of them has explicitly indicated they left because of AI safety concerns, and this thread seems to be insinuating some concern—Ilya Sutskever’s conspicuous silence has become a meme, and Altman recently expressed that he is uncertain of Ilya’s employment status. There still hasn’t been any explanation for the boardroom drama last year.
If it was indeed run-of-the-mill office politics and all was well, then something to the effect of “our departures were unrelated, don’t be so anxious about the world ending, we didn’t see anything alarming at OpenAI” would obviously help a lot of people and also be a huge vote of confidence for OpenAI.
It seems more likely that there is some (vague?) concern but it’s been overridden by tremendous legal/financial/peer motivations.