x.ai has now launched. It seems worthwhile to discuss both what it means for AI safety and whether people interested in AI safety should consider applying to work there.
Some thoughts:
It’s notable that Dan Hendrycks is listed as an advisor (the only advisor listed).
The team is also listed on the page.
I haven’t taken the time to do so, but it might be informative for someone to Google the individuals listed to see where their interests lie on the spectrum between capabilities and safety.
One team member whose name is on the CAIS extinction risk statement is Tony (Yuhuai) Wu.
(Though not everyone who signed the statement is listed under it, especially if they’re less famous. And I know one person on the xAI team who privately expressed concern about AGI safety around 2017.)
Igor Babuschkin has also signed it.