I think this is a great post that lays out the argument well, as I understand it.
Whether or not this is the safest path, the fact that OpenAI thinks it’s true and is one of the leading AI labs makes it a path we’re likely to take.
I want to push back against this point somewhat. I don’t think OpenAI’s opinion is going to matter as much as the opinions of governments. OpenAI does influence government opinion, but if the two disagree, I don’t expect OpenAI to prevail (except perhaps if we’re already deep into a takeoff).
Indeed, I expect governments will generally be quite skeptical of the idea that we should radically transform civilization in the next decade or so. Most people are conservative about new technologies, even if they usually tolerate them. AI is going to cause unusually rapid change, which will presumably prompt a greater-than-normal backlash.
I anticipate that people will think the argument presented in this post is very weak (even if it isn’t), and will assume that it is driven by motivated reasoning. In fact, this has been my experience when trying to explain the argument to EAs several times over the last few months. I could be wrong, of course, but I don’t expect OpenAI to have much success convincing skeptical third parties that delaying is suboptimal.
I don’t think OpenAI’s opinion is going to matter as much as the opinions of governments. OpenAI does influence government opinion, but if the two disagree, I don’t expect OpenAI to prevail (except perhaps if we’re already deep into a takeoff).

OpenAI was recently successful at lobbying against heavy regulation in the E.U.: https://time.com/6288245/openai-eu-lobbying-ai-act/
But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests.
In several cases, OpenAI proposed amendments that were later made to the final text of the E.U. law—which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January.
In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems—including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2—to be “high risk,” a designation that would subject them to stringent legal requirements including transparency, traceability, and human oversight. [...]
“By itself, GPT-3 is not a high-risk system,” said OpenAI in a previously unpublished seven-page document that it sent to E.U. Commission and Council officials in September 2022, titled OpenAI White Paper on the European Union’s Artificial Intelligence Act. “But [it] possesses capabilities that can potentially be employed in high risk use cases.” [...]
OpenAI’s lobbying effort appears to have been a success: the final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called “foundation models,” or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments. OpenAI supported the late introduction of “foundation models” as a separate category in the Act, a company spokesperson told TIME.
The fact that OpenAI succeeded in a narrow lobbying effort doesn’t surprise me. It’s important to note that the dispute was about whether GPT-3 and Dall-E 2 should be considered “high risk”. I think it’s very reasonable to consider those technologies low risk: the first has been around since 2020 without major safety issues, and the second merely generates images. I predict the EU and US governments would win a much higher-stakes ‘battle’ against OpenAI.