What makes you count x.AI as safety-conscious?
One datapoint: they could have gone with a public benefit corporation, but chose not to. And none of the staff, particularly key figures like Szegedy, are, AFAIK, at all interested in safety. (On Twitter, Szegedy in particular has been dismissive of there being anything but the most minor near-term AI-bias-style issues, IIRC.)
EDIT: also relevant: Musk was apparently recruiting the first dozen for x.AI by promising the researchers “$200 million” of equity each, under the reasoning that x.AI is (somehow) already worth “$20,000 million” and thus 1% equity each is worth that much.
They are advised by Dan Hendrycks. That counts for something.
Yes, the entire redundancy argument hinges on the state of the situation with Hendrycks. If Hendrycks is unable to reform X.AI's current stated alignment plan to a sufficient degree, it would just be another Facebook AI Labs, which would reduce, not increase, the redundancy.
In particular, if Hendrycks would just be removed or marginalized in scenarios where safety-conscious labs start dropping like flies (a scenario that Musk, Altman, Hassabis, LeCun, and Amodei are each aware of), then X.AI would not be introducing any redundancy at all in the first place.
Sure, it’s better for them to have that advice than not have that advice. I will refer you to this post for my guess of how much it counts for. [Like, we can see their stated plan for how they’re going to go about safety!]