It’s definitely true that adding another AGI capabilities org increases the rate of AI capabilities research, and complicates all future negotiations by adding another party, which dramatically increases the number of two-org dyads, each of which is a potential point of failure in any negotiation.
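As a rough sketch of the dyad arithmetic (my numbers, not the original comment’s, assuming n roughly comparable top labs are all party to the negotiation): with n labs there are

\[
\binom{n}{2} \;=\; \frac{n(n-1)}{2}
\]

two-org dyads, so adding one more party adds n new dyads, e.g. going from 4 top labs (6 dyads) to 5 (10 dyads).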
But, at the same time, it also adds redundancy in the event that multiple safety-conscious AGI orgs are destroyed or rendered defunct by some other means. If Anthropic, Deepmind, and OpenAI were bumped off, and only Facebook AI labs and some other non-safety-conscious lab remained on top, that would be a truly catastrophic situation for humanity.
In models where a rapidly advancing world results in rapid changes and upheavals, X.AI’s existence as a top AI lab (that is simultaneously safety-conscious) adds a form of redundancy that is absolutely crucial in various scenarios, e.g. scenarios where a good outcome could still be attained so long as at least one safety-conscious lab survives and runs in parallel with the various AI safety efforts in the Berkeley area.
What makes you count x.AI as safety-conscious?
One datapoint: they could have gone with a public benefit corporation, but chose not to. And none of the staff, including key figures like Szegedy, are, AFAIK, at all interested in safety. (Szegedy in particular has, IIRC, been dismissive on Twitter of there being anything but the most minor near-term AI-bias-style issues.)
EDIT: also relevant: Musk was apparently recruiting the first dozen researchers for x.AI by promising them “$200 million” of equity each, under the reasoning that x.AI is (somehow) already worth “$20,000 million” and thus 1% equity each is worth that much.
They are advised by Dan Hendrycks. That counts for something.
Yes, the entire redundancy argument hinges on the state of the situation with Hendrycks. If Hendrycks is unable to reform X.AI’s current stated alignment plan to a sufficient degree, it would just be another Facebook AI labs, which would reduce, not increase, the redundancy.
In particular, if Hendrycks would just be removed or marginalized in scenarios where safety-conscious labs start dropping like flies (a scenario that Musk, Altman, Hassabis, Lecun, and Amodei are each aware of), then X.AI would not be introducing any redundancy at all in the first place.
Sure, it’s better for them to have that advice than not to have it. I will refer you to this post for my guess of how much it counts for. [Like, we can see their stated plan for how they’re going to go about safety!]