There should be points for how the organizations act with respect to legislation. During the SB 1047 bill that CAIS co-sponsored, we noticed that some AI companies were much more antagonistic than others. I think this is probably a large differentiator for an organization’s goodness or badness.
@Dan H are you able to say more about which companies were most/least antagonistic?
I haven’t followed this in great detail, but I do remember hearing from many AI policy people (including people at the UKAISI) that such commitments had been made.
It’s plausible to me that this was an example of “miscommunication” rather than “explicit lying.” I hope someone who has followed this more closely provides details.
But note that I personally think that AGI labs have a responsibility to dispel widely held myths. It would shock me if OpenAI/Anthropic/Google DeepMind were not aware that people (including people in government) believed they had made this commitment. If you know that a bunch of people think you committed to sending them your models, and your response is “well, technically we never said that, but let’s just leave it ambiguous, and then if we defect later we can say we never committed,” I still think it’s fair for people to be disappointed in the labs.
(I do think this form of disappointment should not be conflated with “you explicitly said X and went back on it”, though.)