I haven’t followed this in great detail, but I do remember hearing from many AI policy people (including people at the UKAISI) that such commitments had been made.
It’s plausible to me that this was an example of “miscommunication” rather than “explicit lying.” I hope someone who has followed this more closely provides details.
But note that I personally think that AGI labs have a responsibility to dispel widely-believed myths. It would shock me if OpenAI/Anthropic/Google DeepMind were not aware that people (including people in government) believed that they had made this commitment. If you know that a bunch of people think you committed to sending them your models, and your response is “well technically we never said that but let’s just leave it ambiguous and then if we defect later we can just say we never committed”, I still think it’s fair for people to be disappointed in the labs.
(I do think this form of disappointment should not be conflated with “you explicitly said X and went back on it”, though.)
I agree in principle that labs have the responsibility to dispel myths about what they’re committed to. OTOH, in defense of the labs I imagine that this can be hard to do while you’re in the middle of negotiations with various AISIs about what those commitments should look like.
> I agree in principle that labs have the responsibility to dispel myths about what they’re committed to
I don’t know, this sounds weird. If people continually make stuff up about someone else, in what sense is it that someone’s “responsibility” to rebut such things? I would agree with a weaker claim, something like: don’t be ambiguous about your commitments with the objective of making it seem like you are committing to something, and then walking it back when the time comes to actually commit.
Yeah, fair point. I do think labs have some nonzero amount of responsibility to be proactive about what others believe about their commitments. I agree it doesn’t extend to “rebut every random rumor.”