If Ilya was willing to cooperate, the board could fire Altman, with the Thanksgiving break available to aid the transition, and hope for the best.
Alternatively, the board could choose once again not to fire Altman, watch as Altman finishes taking control of OpenAI and turns it into a personal empire, and hope this turns out well for the world.
Could they not have also gone with option 3: fill the vacant board seats with sympathetic new members, thus thwarting Altman’s power play internally?
Okay, I’m gonna take my skeptical shot at the argument; I hope you don’t mind!
It’s not true that whatever the AI tried to do would happen. What if an AI wanted to travel faster than the speed of light, or prove that 2+2=5, or destroy the sun within 1 second of being turned on?
You can’t just say “arbitrary goals”; you have to actually spell out which goals would be realistically achievable by an AI that could actually be built in the near future. If the achievable goals fall short of “destroy all of humanity,” then there is no x-risk.
This is fictional evidence. Genies don’t exist, and if they did, it probably wouldn’t be that hard to add enough caveats to your wish to prevent global genocide. A counterexample might be laws: sure, there are loopholes, but none big enough that the law would let you off for a killing spree in broad daylight.
Well, there are laws of physics and mathematics that put limits on available computational power, which in turn limit what an AI can actually achieve. For example, a perfect Bayesian reasoner is forbidden by the laws of mathematics.
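To make that last sentence precise under one standard reading (this is my gloss, taking “perfect Bayesian reasoner” to mean Solomonoff induction over all computable hypotheses): the Solomonoff prior weights every program for a universal Turing machine $U$ by its length,

$$ M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}, $$

and a “perfect” reasoner would predict by conditioning on this prior. But $M$ is only lower semicomputable, not computable, so no physically realizable machine can implement it exactly; anything you can actually build runs some approximation, with correspondingly weaker guarantees.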