Sounds like a legit pushback that I'd address in the letter? Maybe something like:
"if all other labs sign this letter AND the U.S. government approves this agreement, then we'll stop"?
Right now, the USG seems to very much be in [prepping for an AI arms race] mode. I hope there's some way to structure this that is both legal and doesn't require the explicit consent of the US government. I also somewhat worry that the US government is doing its own capabilities research, as hinted at by the "datacenters on federal lands" EO. And I worry that OpenAI's culture is not sufficiently safety-minded right now to actually sign onto this; most of what I've been hearing from them is accelerationist.
US Gov isn’t likely to sign: Seems right.
OpenAI isn’t likely to sign: Seems right.
Still, I think this letter has value, especially if it includes something like: "P.S. We're publishing this letter because we think that if everyone keeps racing, there's a noticeable risk of everyone dying. We think it would be worse if only we stopped, but having everyone stop would be safest, and we think this opinion of ours should be known publicly."
An opinion from a former lawyer
[Disclaimers: they're not an antitrust lawyer and definitely don't take responsibility for this opinion, nor do I. All of this might be wrong, and we'd need to speak to an actual antitrust lawyer to get certainty. I'm not going to repeat these disclaimers below; I also hope I'm not misremembering anything.]
So,
Having someone from the U.S. government sign that they won't enforce antitrust law isn't enough (even if the president signs), because the president, for example, might change their mind, or the next president might enforce the law retroactively. This is similar to the current TikTok situation: Trump said he wouldn't enforce the law that prevents Google from carrying TikTok in its app store, but Google still didn't put TikTok back, probably because they're afraid someone will change their mind and enforce the law retroactively.
I asked whether the government (e.g., the president) could sign something like "we won't enforce this, and if we change our mind, we'll give three months' notice".
The former lawyer's response was to consider whether, if the president changed their mind immediately, that signature would hold up in court. He thinks it would not, but he couldn't remember an example of something similar actually happening (which seems relevant).
If the law changes (for example, to carve out this letter), that works
(though it's hard to pass such changes through Congress).
If the letter is conditional on the law changing, that seems OK.
My interpretation of this:
It's probably possible to find a structure under which signing this letter is legal, but we'd have to consult an antitrust lawyer.
[A reminder that this isn't legal advice, isn't confident, may be misremembered, and so on.]